Cases on Information Technology and Organizational Politics & Culture

Mehdi Khosrow-Pour, D.B.A.
Editor-in-Chief, Journal of Cases on Information Technology
IDEA GROUP PUBLISHING Hershey • London • Melbourne • Singapore
Acquisitions Editor: Michelle Potter
Development Editor: Kristin Roth
Senior Managing Editor: Amanda Appicello
Managing Editor: Jennifer Neidig
Typesetter: Amanda Kirlin
Cover Design: Lisa Tosheff
Printed at: Integrated Book Technology
Published in the United States of America by
Idea Group Publishing (an imprint of Idea Group Inc.)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by
Idea Group Publishing (an imprint of Idea Group Inc.)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanonline.com

Copyright © 2006 by Idea Group Inc. All rights reserved. No part of this book may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Product or company names used in this book are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Cases on information technology and organizational politics & culture / Mehdi Khosrow-Pour, editor.
   p. cm.
Summary: "This book provides a much needed understanding of how management can deal with the impact of politics and culture on the overall utilization of information technology within an organization"--Provided by publisher.
Includes bibliographical references and index.
ISBN 1-59904-411-0 (hardcover) -- ISBN 1-59904-412-9 (softcover) -- ISBN 1-59904-413-7 (ebook)
1. Information technology--Management--Case studies. 2. Organizational behavior--Case studies. I. Khosrowpour, Mehdi, 1951-
HD30.2.C379 2006
658.4'038--dc22
2006003567

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

The views expressed in this book are those of the authors, but not necessarily of the publisher.
Cases on Information Technology Series
ISSN: 1537-9337

Series Editor
Mehdi Khosrow-Pour, D.B.A.
Editor-in-Chief, Journal of Cases on Information Technology

• Cases on Database Technologies and Applications
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Electronic Commerce Technologies and Applications
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Global IT Applications and Management: Success and Pitfalls
  Felix B. Tan, University of Auckland, New Zealand
• Cases on Information Technology and Business Process Reengineering
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Information Technology and Organizational Politics and Culture
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Information Technology Management In Modern Organizations
  Mehdi Khosrow-Pour, Information Resources Management Association, USA & Jay Liebowitz, George Washington University, USA
• Cases on Information Technology Planning, Design and Implementation
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Information Technology, Volume 7
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Strategic Information Systems
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Telecommunications and Networking
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on the Human Side of Information Technology
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Cases on Worldwide E-Commerce: Theory in Action
  Mahesh S. Raisinghani, Texas Woman's University, USA
• Case Studies in Knowledge Management
  Murray E. Jennex, San Diego State University, USA
• Case Studies on Information Technology in Higher Education: Implications for Policy and Practice
  Lisa Ann Petrides, Columbia University, USA
• Success and Pitfalls of IT Management (Annals of Cases in Information Technology, Volume 1)
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Organizational Achievement and Failure in Information Technology Management (Annals of Cases in Information Technology, Volume 2)
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Pitfalls and Triumphs of Information Technology Management (Annals of Cases in Information Technology, Volume 3)
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
• Annals of Cases in Information Technology, Volume 4 - 6
  Mehdi Khosrow-Pour, Information Resources Management Association, USA
Cases on Information Technology and Organizational Politics & Culture

Detailed Table of Contents
Chapter I
USCInfo: A High Volume, Integrated Online Library Resources Automation Project ........................ 1
Mathew J. Klempa, Computer Information Systems Consultant, USA
Lucy Siefert Wegner, University of Southern California, USA
This case describes automation philosophies and systems development processes associated with the University of Southern California's "USCInfo", an integrated retrieval software for accessing both the USC Library catalog and periodical indexes.

Chapter II
A Case Study of IT Chargeback in a Government Agency ........................ 27
Dana Edberg, University of Nevada, Reno, USA
William L. Kuechler, Jr., University of Nevada, Reno, USA
This case study describes the implementation of an IT chargeback system within Nevada's Department of Public Safety. The development and deployment of the system is explained within the context of the legislative mandates for the Department. Both the benefits and the substantial number of problems introduced by the system are identified.

Chapter III
The Politics of Information Management ........................ 45
Lisa Petrides, Teachers College, Columbia University, USA
Sharon Khanuja-Dhall, Teachers College, Columbia University, USA
Pablo Reguerin, Teachers College, Columbia University, USA
This case details one institution's attempts, at a departmental level, to develop an information system for planning and decision making. It examines the challenges faced in managing information and the behaviors that drive new information management processes with the increased use of technology.

Chapter IV
A DSS Model that Aligns Business Strategy and Business Structure with Advanced Information Technology: A Case Study ........................ 56
Petros Theodorou, Technological Education Institution of Kavala, Greece
This case discusses the development of an alignment model utilizing strategy, structure, and information technology after a CIO in the 1990s suggested to the board of directors the need for an investment program on a barcode system at the front office. The purpose was to decrease the cashier's entry time and the customer's waiting time in order to increase competitive advantage.

Chapter V
The Columbia Disaster: Culture, Communication & Change ........................ 76
Ruth Guthrie, California Polytechnic University, Pomona, USA
Conrad Shayo, California State University, San Bernardino, USA
This case discusses the disintegration of Space Shuttle Columbia as it returned to Earth. This event occurred 16 years after the Space Shuttle Challenger exploded during take-off. As information was collected, investigators found that many of the problems uncovered during the Challenger investigation were also factors for Columbia. Underlying both disasters was the problem of relaying complex engineering information to management, in an environment driven by schedule and budget pressure. Once again, NASA is looking at ways to better manage space programs in an environment of limited resources.

Chapter VI
AMERIREAL Corporation: Information Technology and Organizational Performance ........................ 98
Mo Adam Mahmood, University of Texas at El Paso, USA
Gary J. Mann, University of Texas at El Paso, USA
Mark Dubrow, University of Texas at El Paso, USA
This instructional case, based on an actual firm's experience (name changed), is intended to challenge student thinking with regard to the extent to which IT can demonstrably contribute to organizational performance and productivity, and to which users of IT can relate their investment decisions to measurable outcomes.

Chapter VII
The Australasian Food Products Co-Op: A Global Information Systems Endeavour ........................ 110
Hans Lehmann, University of Auckland, New Zealand
This case is based on the history of an IIS project in a real transnational enterprise. It describes an "information system migration" following the development of a global business strategy of a multi-national enterprise through various stages.

Chapter VIII
Better Army Housing Management Through Information Technology ........................ 136
Guisseppi A. Forgionne, University of Maryland, Baltimore County, USA
This case describes how the Department of the Army must provide its personnel with acceptable housing at minimum cost within the vicinity of military installations. A decision technology system called the Housing Analysis Decision Technology System (HADTS) has been developed to support the construction or leasing management process.

Chapter IX
Success in Business-to-Business E-Commerce: Cisco New Zealand's Experience ........................ 149
Pauline Ratnasingam, University of Vermont, USA
This case study about Cisco New Zealand permits students to learn about the factors that influence successful trading partner relationships in business-to-business e-commerce participation, among other pertinent issues.

Chapter X
Call to Action: Developing a Support Plan for a New Product ........................ 164
William S. Lightfoot, International University of Monaco, Monaco
This case study is about a call to action for a product management team to rapidly improve the technical support for a new product line. The CEO has expressed serious concerns about the team's ability to perform. A comprehensive review of the current situation and the development of a new support process are described.

Chapter XI
Success and Failure in Building Electronic Infrastructures in the Air Cargo Industry: A Comparison of The Netherlands and Hong Kong SAR ........................ 176
Ellen Christiaanse, University of Amsterdam, The Netherlands
Jan Damsgaard, Aalborg University, Denmark
This case describes the genesis and evolution of two IOSs in the air cargo community and provides information that lets students analyze what led one to be a success and one to be a failure. The two cases are from the Netherlands and Hong Kong SAR.

Chapter XII
Corporate Collapse and IT Governance within the Australian Airlines Industry ........................ 187
Simpson Poon, Charles Sturt University, Australia
Catherine Hardy, Charles Sturt University, Australia
Peter Adams, Charles Sturt University, Australia
The Kendell case illustrates issues of IT governance in the airlines industry. The case is written using the corporate collapse of Kendell's parent company Ansett Australia as the background. The case also explores the potential issues facing the reestablished company, now called Regional Express (REX), and how to implement the lessons learned.

Chapter XIII
Social Construction of Information Technology Supporting Work ........................ 203
Isabel Ramos, Universidade do Minho, Portugal
Daniel M. Berry, University of Waterloo, Canada
This case describes the dilemma of the CIO of a Portuguese company in the automobile industry: whether to abandon or to continue supporting the MIS his company had been using for years. This MIS had been supporting the company's production processes and the procurement of resources for those processes. However, although the MIS had been deployed under the CIO's tight control, the CIO felt strong opposition to its use, opposition that was preventing the MIS from being used to its full potential.

Chapter XIV
Cross-Cultural Implementation of Information System ........................ 220
Wai K. Law, University of Guam, Guam
Karri Perez, University of Guam, Guam
GHI, an international service conglomerate, recently acquired a new subsidiary in an Asian country. A new information system was planned to facilitate the re-branding of the subsidiary. The project was outsourced to an application service provider through a consultant. A depressed local management team, with a depleted technology budget, must reinvent all operating procedures dependent on the new information system, as presented in this case.

Chapter XV
DSS for Strategic Decision Making ........................ 230
Sherif Kamel, American University in Cairo, Egypt
This case describes and analyzes the experience of the Egyptian government in spreading the awareness of information technology and its use in managing socioeconomic development through building multiple information handling and decision support systems in messy, turbulent, and changing environments.

Chapter XVI
Emergency: Implementing an Ambulance Despatch System ........................ 247
Darren Dalcher, Middlesex University, UK
This case study charts the story of the problematic implementation of a computerised despatch system for the Metropolitan Ambulance Service (MAS) in Melbourne, Australia. The system itself is now operational; however, the legal and political implications are still subject to the deliberations of inquiry boards and police investigations. The value of the case study is in highlighting some of the pitfalls and implications of failing to consider the financial pressures and resource constraints that define the (medical) despatch environment.

Chapter XVII
Seaboard Stock Exchange's Emergency E-Commerce Initiative ........................ 264
Linda V. Knight, DePaul University, USA
Theresa A. Steinbach, DePaul University, USA
Diane M. Graf, Northern Illinois University, USA
The case study describes a traditional organization striving to integrate Internet-based technologies. As internal entrepreneurs develop new e-commerce systems, the organization struggles to incorporate new strategies, define new system development methodologies, and find an appropriate role for standards and controls in an emerging technology environment.

Chapter XVIII
Ford Mondeo: A Model T World Car? ........................ 281
Michael J. Mol, Erasmus University Rotterdam, The Netherlands
This case weighs the advantages and disadvantages of going global. Ford presented its 1993 Mondeo model, sold as Mystique and Contour in North America, as a "world car". It tried to build a single model for all markets globally to optimize scale of production. This required strong involvement from suppliers and heavy usage of new information technology. The case discusses the difficulties that needed to be overcome as well as the gains that Ford expected from the project.

Chapter XIX
Network Implementation Project in the State Sector in Scotland: The Influence of Social and Organizational Factors ........................ 298
Ann McCready, Glasgow Caledonian University, Scotland
Andrew Doswell, Glasgow Caledonian University, Scotland
This case study is about the introduction of networked PCs in a local government office in Perth, Scotland. The case focuses on the importance of organizational and social factors during the implementation process.

Chapter XX
Implementation of a Network Print Management System: Lessons Learned ........................ 311
George Kelley, Morehead State University, USA
Elizabeth A. Regan, Morehead State University, USA
C. Steven Hunt, Morehead State University, USA
This case provides an interesting study from a number of perspectives on the implementation of a Network Print Management System (NPMS) in a complex end-user environment. It examines issues related to technology, systems analysis and design, selection and management of outsourced services, installation, testing, troubleshooting, and end-user participation and acceptance.

Chapter XXI
Managing the NICS Project at the Royal Canadian University ........................ 330
Charalambos L. Iacovou, Georgetown University, USA
This case describes the installation of an IBM mainframe computer at the Royal Canadian University. The goal of the described project was to establish a Numerically Intensive Computing Service (NICS) in order to provide "first-class" computing facilities to the researchers. Due to a number of factors, NICS failed to meet its objectives and the university abandoned the project within the first two years of its operations.

Chapter XXII
Improving PC Services at Oshkosh Truck Corporation ........................ 345
Jakob Holden Iversen, University of Wisconsin Oshkosh, USA
Michael A. Eierman, University of Wisconsin Oshkosh, USA
George C. Philip, University of Wisconsin Oshkosh, USA
This case presents the problems encountered at Oshkosh Truck's IT Call Center and PC Services, relating to improving productivity and user satisfaction of the IT Department. The case presents the process of handling user problems, all the way from when a problem is received at the helpdesk until a technician resolves the issue at the user's desk, how well the process works, and some problems associated with the process.

Chapter XXIII
LXS Ltd. Meets Tight System Development Deadlines via the St. Lucia Connection ........................ 368
Geoffrey S. Howard, Kent State University, USA
This case describes how LXS Ltd., a Toronto-based software house, meets a tight systems development deadline for its new product, Estitherm, a Web-based software tool that supports heat loss calculations for architectural engineers designing structures.

Chapter XXIV
IT in Improvement of Public Administration ........................ 388
Jerzy Kisielnicki, Warsaw University, Poland
The case study describes the implementation of IT to improve public administration in Bialystok (Poland). The city of Bialystok has 280 thousand inhabitants. The new management system has been based on new IT solutions, including an extranet and an integrated database. The result of the implementation was a reduction of decision-making time by an average of 30% and a reduction of routine affairs handling time by an average of 25%.

Chapter XXV
Library Networking of the Universidad de Oriente: A Case Study of Introduction of Information Technology ........................ 399
Abul K. Bashirullah, Universidad de Oriente, Venezuela
The Universidad de Oriente is located in the northeastern region of Venezuela. To introduce new information technologies to the libraries and all laboratories of the university, the intranet of the university — with 32 networking systems — was introduced for all campuses with the technology of Main Frame Relay. The challenging job is to create consciousness about information literacy. This case describes the creation of university digital databases and digitalization of valuable documents.
About the Editor ......................................................................................................... 406
Index ........................................................................................................................ 407
Preface

As information technology applications are incorporated into all functioning aspects of organizations of all types and sizes, more managers need to address concerns regarding the impact of these technologies on organizational politics and culture. The need to understand how the human side of technology affects the overall management of change and power structures, combined with organizational culture, inside any business becomes extremely crucial. Cases on Information Technology and Organizational Politics and Culture, part of Idea Group Inc.'s Cases on Information Technology Series, documents real-life cases describing issues, challenges, and solutions related to information technology and how it affects organizational politics and culture.

The cases included in this volume cover a wide variety of topics, such as an integrated online library resources automation project; IT within a government agency; the politics of information management; the alignment of business strategy with business structure; culture, communication, and change within NASA; IT and organizational performance within corporations; an information systems endeavor within a food products co-op; the improvement of army housing management through information technology; small business-to-business e-commerce; new product support plan development; electronic infrastructures in the air cargo industry; IT governance within the airlines industry; information technology supporting work within the automobile industry; the cross-cultural implementation of an information system; decision support systems for government; the implementation of an ambulance despatch system; the e-commerce initiative of a stock exchange; the decision of companies to "go global"; the introduction of networked PCs in a local government office; the implementation of a network print management system; the development of a system to improve computer facilities for researchers; identification of and solutions to problems with an IT call and services center; the connection of software buyers and sellers through systems development; public administration improvement through information technology; and university library networking.

While organizations worldwide have benefited from new innovations in information technology, they have also discovered that in order to increase the overall effective utilization of these technologies in their respective organizations, they need to learn more about the human and organizational impacts of these technologies. Cases included in Cases on Information Technology and Organizational Politics and Culture will provide a much needed understanding of how management can deal with the
impact of politics and culture on the overall utilization of information technology within an organization. Lessons learned from these cases will be instrumental in attaining a better understanding of the issues and challenges involved in managing information technology and its impact on organizational politics and cultures.

Note to Professors: Teaching notes for cases included in this publication are available to those professors who decide to adopt the book for their college course. Contact
[email protected] for additional information regarding teaching notes and to learn about other volumes of case books in the IGI Cases on Information Technology Series.
ACKNOWLEDGMENTS

Putting together a publication of this magnitude requires the cooperation and assistance of many professionals with much expertise. I would like to take this opportunity to express my gratitude to all the authors of cases included in this volume. Many thanks also for all the editorial assistance provided by the Idea Group Inc. editors during the development of these books, particularly the valuable and timely efforts of Mr. Andrew Bundy and Ms. Michelle Potter. Finally, I would like to dedicate this book to all my colleagues and former students who taught me a lot during my years in academia.

A special thank you to the Editorial Advisory Board: Annie Becker, Florida Institute of Technology, USA; Stephen Burgess, Victoria University, Australia; Juergen Seitz, University of Cooperative Education, Germany; Subhasish Dasgupta, George Washington University, USA; and Barbara Klein, University of Michigan, Dearborn, USA.

Mehdi Khosrow-Pour, D.B.A.
Editor-in-Chief
Cases on Information Technology Series
http://www.idea-group.com/bookseries/details.asp?id=18
Chapter I
USCInfo:
A High Volume, Integrated Online Library Resources Automation Project

Mathew J. Klempa, Computer Information Systems Consultant, USA
Lucy Siefert Wegner, University of Southern California, USA
EXECUTIVE SUMMARY
This case sets forth automation philosophies and systems development processes associated with the University of Southern California's "USCInfo",1 an integrated retrieval software for accessing both the USC Library catalog and periodical indexes. Regarded at its implementation as being cutting edge in library automation, USCInfo's present size is 25 gigabytes of data, with searches numbering 3,800,000 annually. USCInfo is illustrative of "messy problems", that is, unstructured, complex, and multidimensional, which typically involve substantive organizational issues "soft" in nature. Problem conceptualization, decision making, and solution implementations in USCInfo often are both heuristic2 and utilize satisficing3 decision-making processes. Such decision making deals with multiple, substantive constraints as well as conflict and ambiguity, that is, "equivocality".4 Systems development concepts embodied in this case include:

• Systems life cycle evolution amidst technology change and obsolescence
• Systems design alternatives and end user characteristics
• Management of the systems life cycle maintenance phase
• Management of applications prototyping
• Small design team dynamics, champions
• Organizational impacts on the systems life cycle
• Rational and political organization processes
• Dual responsibility project management
BACKGROUND

USC and Library System Overview
The University of Southern California, founded in 1880, is the oldest and largest private research university in the American West, with approximately 14,000 undergraduates in a total enrollment of some 28,000. In the five year period preceding systems development of USCInfo, the USC library system comprised some 16 libraries, including: College Library (principal undergraduate library); Doheny Reference Center; several larger libraries — science, engineering, business, public administration and foreign affairs; and other main campus smaller libraries. As a member of the Association of Research Libraries (ARL), the library system faced continual pressure on resources to keep pace with a rising flood of materials, that is, books, journals, and ever increasingly, other non-traditional media-based materials. Separately, library administration sought to improve the quality of a collection that was historically underfunded. The USC Library budget is affected by the University's fiscal health, for example, static or downtrending enrollment growth directly impacts library budgets.
U.S. Library Automation
Libraries make a variety of finding tools available to their users, for example, directories, indexes, and abstracting services. Two that are ubiquitous across libraries are a library’s catalog and periodical indexes. Research libraries are driven, in part, by the “information explosion”, that is, the need to provide improved capabilities for locating both catalog and periodical source material in the face of geometrically increasing numbers and types of publications. The decade of the 1980s witnessed a concomitant rate of growth in the number of libraries digitizing catalog availability as an Online Public Access Catalog (OPAC). Digitizing a library’s catalog can be thought of as an item-specific type “inventory” problem. Each catalog item’s particular characteristics (book, journal, government document, etc.) must be described to an established level of detail. Item “status” must be tracked and reported for inquiry purposes, for example, being cataloged, recalled, checked out, lost/stolen, and so forth. Additionally, patron data must be combined with item data. The decade of the 1980s also witnessed increased digitization of periodical indexes by various methods such as dial-up time sharing services and CD-ROM-based periodical indexes. Prior to USCInfo, library patrons were offered dial-up periodical indexes search access. Such searches, performed at each library campus site, were librarian mediated, that is, required to be performed by a librarian professional. Costs were subsidized by USC, patrons paid a nominal fee, and access was limited. Library automation planning
documents cited the exponential growth in USC dial-up searches as a driving impetus for development of a USC-based digitized periodical indexes search capability.
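The item-specific "inventory" framing described above can be made concrete with a minimal sketch. The following Python fragment is purely illustrative: the field names, status values, and layout are assumptions for exposition, not USCInfo's or GEAC's actual record schema.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ItemStatus(Enum):
    # illustrative status values drawn from the kinds of states described above
    BEING_CATALOGED = "being cataloged"
    AVAILABLE = "available"
    CHECKED_OUT = "checked out"
    RECALLED = "recalled"
    LOST_OR_STOLEN = "lost/stolen"

@dataclass
class CatalogItem:
    item_id: str                       # unique identifier for the item
    material_type: str                 # book, journal, government document, ...
    title: str
    author: Optional[str] = None       # some materials carry no author
    location: str = ""                 # which campus library holds the item
    status: ItemStatus = ItemStatus.BEING_CATALOGED
    borrower_id: Optional[str] = None  # patron data combined with item data

# Answering a patron inquiry then amounts to looking up the item and
# reporting its current status (and, if checked out, who holds it).
example = CatalogItem("b1002937", "book", "Library Automation", author="Smith",
                      location="College Library", status=ItemStatus.CHECKED_OUT,
                      borrower_id="patron-481")
print(example.status.value)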
USCInfo — Early Prototype Development

Formal Organization Prototyping. USCInfo began with a $1 million grant from the Ahmanson Foundation. Principles embodied within USCInfo from the outset included:

• Unified User Interface. Systems should support a unified user interface across all resources.
• Wide Selection of Resources. Access should be provided to a wide selection of information resources.
• Gateway Beyond USC. Systems should provide a gateway to the information-rich environment beyond the institution.
• Ease of Use. Systems should be easy to use.
The USC Library wanted to provide the campus community with unlimited (free) access from any location, any time of day. In a multisite library system, the only cost-effective way to provide such a capability is to (1) either purchase and modify, or (2) custom develop, both the user interface and search engine software, to be run on the university's computer. Vendor supplied periodical indexes database tapes would be "locally mounted" on the university's computer.5 The Ahmanson project established the dual responsibility6 of both University Computing Services7 (UCS) and the library for USCInfo. The Ahmanson project began development of the formally recognized USCInfo, a vt100 version. The library's Linda R. from the Library Automation Development Unit (LAD) and UCS (Kurt B.), working together (see Table 4), established a pilot prototype periodical index search capability, with a subsequent, second phase, small scale operation at four non-library "satellites" (computing facilities set up by the library). The four satellite locations isolated USCInfo from USC Library personnel as a whole. Although such isolation was not desirable, both the rudimentary remote access technology and the lack of network wiring in the libraries hindered large-scale library staff involvement.

Informal Organization (de facto) Prototyping. Working virtually independently of, and largely unknown to, the Ahmanson project for several months, Tim H. and Brad G., of the Center for Scholarly Technology (CST), undertook development of a HyperCard graphical user interface (GUI) USCInfo, that is, Tim and Brad constituted an "informal organization" (see glossary). The "Tim-Brad" HyperCard USCInfo had similar (although not identical) functionality to the official vt100 version USCInfo and was available on 20 Apple SEs exclusively in College Library, in lieu of the vt100 USCInfo. Notwithstanding frequent system crashes and reloads, the HyperCard-based front-end had preliminarily established its connectivity to a mainframe. In addition, the College Library's largely undergraduate student users felt it offered vastly improved ease of use and intelligibility. HyperCard USCInfo would become a de facto pilot project for subsequent USCInfo development, and Tim and Brad its most vocal champions (see glossary).
SETTING THE STAGE

USCInfo Project Externalities
The paced and measured development of the vt100 USCInfo periodical indexes search capability changed with the departure of Mary L., who had established and spearheaded the early Ahmanson project, the appointment of Jim R. as Acting Deputy Assistant University Librarian for Academic Information Services (AIS), and the library's OPAC crisis. Phased conversion of USC's manual card catalog to an OPAC (called Homer8) began prior to the Ahmanson Project. When Homer's public access search module was brought online, Homer experienced severe peak loading problems, that is, searches could take minutes to complete at busy times. Deeming the Homer OPAC problem both unacceptable and untenable, senior library administration directed the recently promoted Jim R. to determine available alternatives. In addition, the University administration was directing the library to prepare a five year projection, with options, of totally inclusive automation costs, beyond just a combined OPAC and automated periodical indexes retrieval system. It was clear that such longer term automation planning was essential to get university backing for future library automation proposals.

Jim R. considered his fewer-in-number alternatives. An obvious alternative was to upgrade the existing GEAC 8000 system to a GEAC 9000 system (see Table 1). Considerations set forth in Table 1 notwithstanding, the senior university financial officer, citing the fairly recent $300,000 expenditure for the GEAC system, expressed very strong reservations about the approximately $500,000 "hard" money-based proposed GEAC upgrade. Another option was an "all vt100", that is, non-intelligent terminals approach. A comparison of this approach to the Mac HyperCard approach is given in Table 2.
Table 1. Upgrading to GEAC 9000 System

Pro:
• Existing relationship with vendor; would give substantial support
• Resulting OPAC would not be "interim", but treated as longer term

Con:
• GEAC 9000 new machine, largely untried in marketplace
• GEAC 9000 incompatible with GEAC 8000
• Full project migration needed
• Expense is "hard" money; university administration unlikely to approve
• GEAC 9000 solution only addresses OPAC, not periodical indexes searching
• There would be incompatible networks, that is, GEAC 9000 — dumb terminals; USCInfo — vt100 terminals
Table 2. vt100 USCInfo vs. HyperCard USCInfo

Characteristic/Requirement: Machines
  DOS/vt100: Cheaper, but no subsidy/gift
  Mac HyperCard: More expensive, but substantial Apple gift likely

Characteristic/Requirement: Interface
  DOS/vt100: Quicker to adapt to existing interface, but text-based
  Mac HyperCard: Innovative graphical user interface, requiring more extensive development

Characteristic/Requirement: Log-on Procedure
  DOS/vt100: More difficult to implement; less secure
  Mac HyperCard: Log-on procedure hidden in stack; more secure
The Apple Appeal
The Homer crisis began both the formalization of CST's involvement with USCInfo and the emergence of Apple Computer.9 Through CST directors, Apple Computer expressed interest in a substantial equipment grant (some $800,000) to the library for a HyperCard-based USCInfo. This grant included 165 Macintosh SE/30 computers,10 serving as replacements for all existing 150+ OPAC terminals, as well as related hardware. In terms of economic feasibility, the Apple equipment grant would make the project financially possible from the University's perspective. USC's contribution also would be substantial (approximately $500,000 total, both hard and soft monies), reflecting purchase of necessary network software, systems development costs, and implementation costs. A major portion of the $500,000 included necessary FTE systems development personnel expenses.

The perceived "availability" of a HyperCard USCInfo alternative began to influence decision making, that is, the HyperCard-based front-end USCInfo had demonstrated some capability as a network interface. No formal technical or operational feasibility evaluations of the "Tim-Brad" HyperCard USCInfo were conducted at this time, for example, analyses of system crashes, system response times, robustness of the HyperCard interface, machine performance under conditions of poorly formed searches, and so forth. Table 3 sets forth foreseen HyperCard benefits as well as drawbacks. Brad G., the early HyperCard champion, urged its use as an applications prototyping tool only, with a production version to be written subsequently in a high level language, for example, C. In sum, the proposed HyperCard interface carried with it substantive technical, operational, and schedule feasibility issues.

Jim R. took CST's interim-step, HyperCard prototype and proposed a bold, encompassing view which reflected creativity, daring, and a willingness to take risks.
Table 3. HyperCard considerations

HyperCard Advantages:
• Sophisticated graphical design; enhanced menuing capabilities
• Reduced host load; workstations do screen painting
• More realistic distributed computing, that is, distributed host environments
• Capacity to install "add-on" utility programs at workstation level
• Workstation software can be distributed free of charge to local user population
• Ease of alteration enables design modifications to be tried immediately

HyperCard Disadvantages:
• Slower execution speed
• Not suitable for multiple workstations requiring "bullet-proof" operation
• Ease of alteration makes it susceptible to corruption from a curious hacker
• Every software release requires reloading all machines, one at a time*
• Beta release at the time of USCInfo implementation
• Ease with which changes can be made contributes to "project creep"

*Later, well into the USCInfo HyperCard period, capability was provided that eliminated this necessity.
The library's OPAC would be consolidated into USCInfo, that is, an integrated online catalog and periodical indexes search retrieval system. Jim R. was trying to skip several iterations of system design, that is, a Ford Taurus solution.11 At the time, Jim R. considered the consolidation of the library's OPAC with USCInfo as a major transition in the movement of the library's Integrated Library System (ILS) to an IBM mainframe.12 The principal parties involved saw the possibility of influencing national library automation trends, with concomitant future USC Library automation benefits. As stated in Jim R.'s response to Apple:

USC...has been working to develop a combined OPAC and periodical indexes, HyperCard-based, retrieval system...this innovative approach...moves beyond conventional terminal-host, ASCII-interface links.

Jim R.'s radical, encompassing step would necessitate transporting the entire Homer OPAC, that is, 8+ million items, from GEAC to the VM-based USCInfo system. Jim R.'s innovative approach required multiple serial and parallel tasks (see Figure 1). Combining both OPAC and periodical indexes retrieval significantly increased the project's scope. Installation issues were also paramount, viz., Ethernet connections13 would have to be installed in 10 libraries within four months, the 165 new intelligent terminals acquired and installed, and library staff completely retrained.
Figure 1. USCInfo timeline

ID   Task Name                                   Duration
1    Database Preparation (GEAC and others)      172d
8    Mainframe Programming                       71d
17   Hypercard Design/Prototype                  85d
24   Facilities and Equipment                    132d
28   Network Design and Installation             100d
31   VT 100 Design and Development               95d
36   Hypercard Software Development              70d
49   Evaluation                                  80d
54   Training and Integration                    40d
60   Documentation and Help Material             40d
64   SYSTEM LIVE IN UNITS                        0d
In sum, acceptance of Jim R.’s proposal would commit to completion of the proposed USCInfo within 18 months, using new machines (unfamiliar platform), a new interface, revised back-end, new communications software, and simultaneous addition of several new databases. The vt100 terminal emulation, remote user version of USCInfo, also would have to be updated concurrently, thus creating a parallel platform to be maintained until otherwise formally discontinued.
Table 4. Project Planning Group (PPG) participant characteristics

Linda R.+
  Unit: Library (LAD)
  USCInfo Responsibilities*: LAD responsible for Front End vt100 programming
  Participant Characteristics: Facilitator, good with people
  Previous Work Focus: Ahmanson Project vt100 USCInfo Periodical Indexes searching

Tim H.+
  Unit: (CST)
  USCInfo Responsibilities*: Front End HyperCard Programming
  Participant Characteristics: Bright, unseasoned
  Previous Work Focus: de facto, "informal organization", USCInfo version

Brad G.+
  Unit: (CST)
  USCInfo Responsibilities*: Front End HyperCard Programming
  Participant Characteristics: Personable. Iconoclast who didn't worry about being liked
  Previous Work Focus: de facto, "informal organization", USCInfo version

Jim R.+
  Unit: Library (AIS)
  USCInfo Responsibilities*: Project Manager
  Participant Characteristics: Driven, heightened work pace, innovative
  Previous Work Focus: GEAC 8000 OPAC implementation

Kurt B.+
  Unit: (UCS)
  USCInfo Responsibilities*: Back End Database structure, maintenance, BRS Search Engine
  Participant Characteristics: Intelligent, skillful; his analytical abilities moved ahead of group
  Previous Work Focus: Joint Responsibility with Library's LAD unit for vt100 USCInfo development

+ See Appendix B for Rowe Decision Style
+ Refer to Figure 2 for position on organization chart
+ These individuals were evaluated by periodic Performance Evaluation reviews
* Library's System Support Unit was responsible for loading USCInfo databases
Jim R.'s proposal formally would make the three units (library, CST, and UCS) responsible for USCInfo. Although all three units would be responsible formally, the de facto reality was that this approach would become the library's, inasmuch as CST wanted HyperCard used only for prototyping and UCS had strong reservations about its use as a network interface (see Project Planning Group Participant Agendas section). In addition, Jim R.'s approach required the blending of differing organization cultures (Appendix A), demographic characteristics (Table 5), individuals' decision styles (Appendix B), as well as Project Planning Group (PPG) participant characteristics (Table 4). Although Linda R. served as project facilitator, she did not have authority to impose solutions when system design conflicts arose. Jim R.'s "can-do" ebullience, as well as de facto commitments made (or imposed) along the way, acted like a dare on the group. Could they do it? Jim R. got a green light from the University Librarian to go ahead.
Table 5. UCS, Library, CST demographics

UCS: Predominantly male; computer professionals; practical; experienced; many in leadership without degrees; younger
Library: Predominantly female; librarians; conservative; experienced; MLS degrees + other advanced degrees; older
CST: Predominantly male; computer professionals; radical; creative, entrepreneurs; many with advanced degrees, not MLS; mostly younger
PROJECT DESCRIPTION

Design Tradeoffs

Jim R.'s USCInfo design essentially traded off four considerations:

1. WHAT type of system do we want to create,
2. WHO are our users,
3. WHICH design factors will be given higher priority, and
4. BALANCING how much the system should do for the user versus how much the user should be expected to learn.
Tim H. and Brad G., HyperCard champions, came down heavily on the side of ease of use: [O]ur goal was to create an interface readily usable by the general college student, without the student having to read a binder of instructions. An estimated one-half to two-thirds of USCInfo users each year were either new to the system or used it infrequently. Designing for such a primary user group immediately raised screen navigation issues:
• WHAT is "intuitive" and "to WHOM",
• HOW to accommodate both the sophisticated and unsophisticated search patron, that is, search and retrieval issues, and
• WHICH screen elements and their arrangement, that is, record display issues.
Both prevailing OPAC and periodical indexes retrieval designs utilized dumb terminals. In contrast, use of intelligent terminals would exploit the mainframe's retrieval power and speed, as well as the workstation's HyperCard GUI's ease of use. Longer range enhancement of a GUI-based system seemed better achievable by creating a user
interface resident on the workstation, not the host. The Macintosh/mainframe design direction taken by USCInfo constituted a de facto rudimentary client-server-type architecture, thus presaging true client-server architecture in use today.
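The division of labor just described, a graphical front end on the workstation issuing searches to a host that owns the data, can be sketched as follows. This is a minimal, purely illustrative Python sketch: the class and method names are assumptions for exposition, and the actual system used a HyperCard front end and a BRS search engine on the mainframe rather than anything resembling this code.

# Host side: owns the locally mounted databases and the search engine.
class SearchHost:
    def __init__(self, records):
        self.records = records

    def search(self, query):
        # the heavy retrieval work stays on the host
        q = query.lower()
        return [r for r in self.records if q in r["title"].lower()]

# Workstation side: owns only the user interface.
class WorkstationClient:
    def __init__(self, host):
        self.host = host                      # stands in for the campus network link

    def run_search(self, query):
        hits = self.host.search(query)        # ship the query, not the data
        for hit in hits:                      # "screen painting" happens locally
            print(f'{hit["author"] or "(no author)"}: {hit["title"]}')

catalog = [{"author": "Smith", "title": "Library Automation"},
           {"author": None, "title": "Annual Report on Automation"}]
WorkstationClient(SearchHost(catalog)).run_search("automation")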
Intensive 18-Month Development
Jim R. projected completion in 18 months, that is, by the September start of the coming school year. This timeline was based on very optimistic estimates from programmers as well as best-case scenarios for implementation and installation. As shown in Figure 1, the original 18-month timeline was compressed further, to 11 months (system completion date was not moved), because of delays in the start of system development. The principal parties involved did not seem concerned about the 11-month timeline, in particular about the potential for design bugs to elongate the 11 months and increase frustration among the participants. After considerable discussion, a set of "basic functionality" was agreed to by all parties. Some system flowcharts and documentation were put together, although limited in nature. Development activities focused on a July 1 target date for a Beta release of HyperCard USCInfo (Figure 1, Task 36), with an almost parallel two-step, four-month period for design and prototyping (Figure 1, Task 17). Installation activities were geared to September 1. At the commencement of systems development, staff working on USCInfo numbered 12 (not all worked full time). During the actual 11-month development period and initial year of operation (see later section — HyperCard implementation delays), the number grew to 19 (not all worked full time). Approximately 50% of the projected person hours utilized student labor, primarily doing HyperCard coding.
Stakeholder Perspectives
Figure 2 places the library, UCS, and CST within the overall USC organization. Table 5 synopsizes salient UCS, library, and CST demographics. Appendix A contains brief descriptions of the organization cultures of UCS, the library, and CST. UCS is a masculine organization (see glossary), that is, assertive in interpersonal dealings, with a high performance orientation. UCS' organization culture is a dyad, developmental/rational. Developmental cultures value adaptability, autonomy, and creativity. Rational cultures are less likely to engage in satisficing decision making, but rather to take a longer term, "maximize", viewpoint, that is, long term system effectiveness. The library's culture is a dyad, consensual/hierarchical. Consensual cultures value cooperation, fairness, and openness. Senior library management's culture is hierarchical, that is, oriented toward centralized control and coordination. Identifiable library subcultures include: library administration (assistant university librarians and dean), heads of individual library units, and the divisions (public service, cataloging, AIS). Additionally, those in the "nontraditional" functions (library automation, systems development, and computer services) constituted a subculture distinct from "traditional" library functions (user services, cataloging, collection development). At the time of USCInfo development, the nontraditional subculture was poorly understood and not fully appreciated. CST's initial umbrella-like charter as a research and development center anticipated pursuit of projects "designed to enhance the use of information technology in instruction and scholarship".
Figure 2. Brief USC organization chart

Provost (reports to university president)
  Vice Provost Academic Computing
    University Computing Services: Research, Devt & Systems (1); Distributed Info Systems; Academic Support; Operations & Facilities; Business Services; User Services; Network Management; Microcomputing; IBM Systems; Systems Administration; Network Software/Licensing; Technical Support
    Center for Scholarly Technology: CST Staff (Director) (2); CST Staff (Director) (3)
  Vice Provost & University Librarian
    Public Service; Collection Development; Administrative Services; Technical Services; Academic Info. Services (4)
    Library Automation Development (5); Library Systems and Support

Position held by: (1) Kurt B.  (2) Brad G.  (3) Tim H.  (4) Jim R.  (5) Linda R.
CST reported jointly to both the dean of the library and vice provost for Academic Computing (Figure 2). CST staff, assigned to specific projects, generally were titled "directors". CST's culture, a developmental/hierarchical dyad, valued creativity in problem solving, while at the same time pursuing coordination and control responsibilities.

The UCS, library, and CST demographic and organization culture differences contributed to USCInfo project frictions. UCS viewed "technology" as its bailiwick. The library staff in general viewed themselves as "different" from technology types, that is, both UCS and CST. CST was viewed as "outside" the library, with a mission and purpose that was "fuzzy".
The Project Planning Group Participant Agendas

University Computing Services. UCS, operations and technology oriented, viewed the library as another client among many.
Figure 3. Library Review Committee organization

AUL for Academic Information Services
  Library Review Committee
    Library Public Services Faculty and Staff
    Technical Services Faculty
    User Representative
  USCInfo Team
    Library
    University Computing Services
    Center for Scholarly Technology
In the role of "network cop", UCS informed the Project Planning Group of what could or could not be done on the campus network and mainframes. UCS expressed serious reservations about the HyperCard project from the get-go, centering on network behavior and robust operation, for example, developing telecommunications software based on a beta release HyperCard.14 UCS strongly favored developing a GUI using X-Windows-Unix only, which, while platform specific at the time, was deemed by UCS as far more transferable in the future. The practical effect of UCS' "position" with respect to HyperCard implied minimal implementation support from UCS. Although UCS nominally was collaborating in the HyperCard implementation, it would have required a saint's forbearance not to be a bit gratified in subsequent months as the project implementation difficulties began to arise.

The Library. The library was often frustrated by what it perceived as the technology's restrictions. A Library Review Committee (Figure 3), consisting of library faculty, staff, and an outside user representative, advised on interface and content issues. The Library Review Committee functioned as a "bridge", serving to both solicit end-user inputs to the design process and, when necessary, communicate design approach particulars and feedback from the PPG. Meetings were run by Linda R. or Jim R. Typical exchanges (heavily paraphrased), as highlighted below, often arose:

[Example]
Committee Member (CM): I'd like to see all the library locations listed in a, what do you call it?, pull out menu, so the user can pick the library they want before they do a search. In GEAC, the patron chooses the library before searching.
Linda R. (or Jim R.): Well, that's tricky to do. We'd have to write a program to produce the pull-down menu and then send that information to the mainframe and then get the results back...

CM: Well, what IS possible then?

[Example]
Linda R.: How about icons to represent different functions, help, quit and so on?

CM: I don't like icons, words are clearer. GEAC doesn't use icons and it works just fine.

CST. CST, the new kid on the block, was ill understood by both the library and UCS. Some staff in UCS felt all the funds for development had been drained away to support this "isolated" unit whose responsibilities "somehow" included technology. This nebulous and isolated unit now came face-to-face with UCS within the Project Planning Group. The library suspected that funds had been given to CST that were needed in other areas, for example, collection development.
The Project Planning Group Dynamic
Linda R. prepared and distributed weekly design team meeting agendas, conducted weekly design team meetings, kept minutes, and tracked implementation progress on decisions made. Linda was regarded as being able to communicate effectively with all team members, often interpreting one group to another. Jim R. was usually in attendance and also took a major role. He came into frequent conflict with both UCS (Kurt B.) and CST (Brad G.). The 11-month timeline (Figure 1), that is, stress overload, contributed to the frequent tensions and disagreements between the major participants in the development process. Persistent problems generated both intense reactions, for example, "that is unacceptable — fix it now", and often passionate responses to such reactions. As the development process unfolded, requiring the fitting together of many untried pieces, bugs and problems arose. At times, meetings degenerated into name calling and finger pointing as each side was convinced the problem was caused by another group's sub-project. Conflicts often arose with Brad G. from CST, as in the following (heavily paraphrased) typical interchange from a PPG meeting:

Linda R.: In the Mac version, records are being returned that have title in the author field, author in the title field, and aren't from the same item anyway.

Brad G.: The stack parses the data stream by pairing author (AU) and title (T) fields, starting a new record any time an AU (author) field is detected. The problem is the mainframe is sending data fields out of order and needs to be fixed.
Kurt B.: No, that’s not it! Some records do not have an author field at all [draws diagram on board]:
Mainframe sends:  AT  AT  T   AT  T   AT
Mac expects:      AT  AT  AT  AT
Mac displays:     AT  AT  TA  TT  AT  AT
The Mac needs to be fixed to use the unique identifier field to delimit records, not the author field, which may not be present. [Note: Brad G., while an experienced programmer, isn't aware that some items will not have an author field attached, something which is obvious to librarians. A brief illustrative sketch of this record-delimiting fix appears after the list below.]

During the USCInfo development period, the small size of the PPG, its weekly and highly personal interaction modality, and its composition of highly experienced and skilled professionals were considered strengths. During system development, the functioning of the group could be described as "hands-on", providing both mobility and flexibility in decision making as details of design considerations arose. There were differences from the traditional systems design situation:

• Unlike the typical "end user", library PPG members were intimately familiar with data elements and logical processing from a librarian's point of view.
• UCS PPG members were highly skilled and experienced database programmers, whose expertise could be readily applied.
• The small PPG team size, ranging between 8 and 12, facilitated decision making, that is, enabled a "strike-force" mode of operation.
• The design activity sequencing incorporated a certain logical structuring of the system definition.
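To make Kurt B.'s point concrete, here is a minimal sketch of the two delimiting strategies. It is written in Python purely for illustration (the actual front end was HyperCard/HyperTalk), and the field tags and record layout are assumptions for exposition, not the real BRS/mainframe data stream format.

# Each record arrives as tagged fields; "ID" is a unique identifier,
# "AU" an optional author, "T" a title. Record 002 has no author field.
stream = [("ID", "001"), ("AU", "Smith"), ("T", "Cataloging Basics"),
          ("ID", "002"), ("T", "Annual Report"),
          ("ID", "003"), ("AU", "Jones"), ("T", "Networks")]

def split_records(fields, delimiter_tag):
    """Start a new record whenever the chosen delimiter tag appears."""
    records, current = [], {}
    for tag, value in fields:
        if tag == delimiter_tag and current:
            records.append(current)
            current = {}
        current[tag] = value
    if current:
        records.append(current)
    return records

# Buggy approach: delimit on the author field. Because record 002 never
# starts a new record, fields from different items end up mixed together.
print(split_records(stream, "AU"))

# Fixed approach: delimit on the unique identifier field, which every
# record carries. Three clean records result, one of them without an author.
print(split_records(stream, "ID"))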
The weekly, personalized design interaction took the place of a more formalized, longer structured design process, which was not utilized. The limited “paper” design of the system, coupled with the weekly PPG dynamic, in part created the effect of designing for a “moving target”. In addition, the PPG had to deal with conflicting instructions and priorities from both the library and UCS administrations.
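Returning to the record-parsing exchange above: the following is a minimal, hypothetical sketch of the bug and its fix. The field tags, sample data, and function names are invented for illustration and are not taken from the actual HyperCard stack or mainframe protocol. It contrasts delimiting records on the author field, which mis-groups items that lack an author, with delimiting on a unique identifier field, the fix Kurt B. proposes.

```python
# Hypothetical illustration of the record-delimiting bug discussed in the PPG
# exchange above. Tags are invented stand-ins: "ID" = unique identifier,
# "AU" = author, "TI" = title. Values accumulate in lists so that mis-grouped
# fields stay visible rather than being silently dropped.

stream = [
    ("ID", "001"), ("AU", "Austen, J."), ("TI", "Persuasion"),
    ("ID", "002"), ("TI", "Beowulf"),              # item with no author field
    ("ID", "003"), ("AU", "Borges, J. L."), ("TI", "Ficciones"),
]

def parse(fields, delimiter_tag):
    """Group a flat (tag, value) stream into records, starting a new record
    each time `delimiter_tag` is seen."""
    records, current = [], None
    for tag, value in fields:
        if tag == delimiter_tag:
            current = {}
            records.append(current)
        if current is not None:      # fields seen before the first delimiter are lost
            current.setdefault(tag, []).append(value)
    return records

# Buggy behavior described by Brad G.: delimit on the author field.
print(parse(stream, "AU"))   # -> 2 records; fields from different items run together

# Fix proposed in the meeting: delimit on the unique identifier field,
# which every item carries whether or not it has an author.
print(parse(stream, "ID"))   # -> 3 records, one per item
```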
HyperCard Implementation
The PPG dynamic began to present problems during the implementation phase of USCInfo. Both a beta HyperCard USCInfo and a production vt100 Version 2.0 were released in early October of the school year, one month after the targeted release. HyperCard USCInfo was available on the first thirty networked machines in College Library.15 Each version incorporated the “basic functionality”16 agreed to by all parties. Unlike the vt100 release, the HyperCard implementation was very difficult, with serious problems with Ethernet and the HyperCard stack involving response time for printing/downloading17 and maintaining connectivity.18 Most of the first several implementation months were devoted to identifying problems, prioritizing them, and solving them. When operating “bugs” surfaced, the cause of the problem had to be determined, that is, a network problem, a software (HyperCard) problem, or a database problem. For multifaceted problems, for example, connectivity between HyperCard and the network, multiple ad hoc meetings with smaller sub-groups, task forces, programmers, and users were necessary in order to isolate and fix the problems. The triad management responsibility — library, UCS, and CST — impaired the ability to assign responsibility for fixing operating bugs.
HyperCard Interface Implementation Delays
The problems with the HyperCard USCInfo delayed development of the interface as a whole. Basic functionality was in place, but anticipated enhancements had not yet been begun, including:
• LIMITS: Limit a search to items with a particular characteristic, for example, a location, language, or type of material. The BRS search engine could not distinguish whether there was more than one value in the limit field, so the user could only limit on the first listed location.
• HELP SCREENS: While a help command (vt100 version) and a help button had been designed into the interface, the help software did not exist.
• TRUNCATION OVERFLOW: The search engine allows truncation of search terms.19 The first partial results were given as the complete response.20
• INSTALLATION of additional workstations was behind schedule, that is, wiring, training, and manpower problems.
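As an illustrative aside on the truncation feature and the truncation overflow problem listed above: the sketch below uses the “$” truncation symbol described in endnote 19, but the word index, batch size, and code are invented for illustration and do not reproduce the actual BRS search engine behavior.

```python
# Illustrative sketch only: "$" truncation as in endnote 19, with a toy
# in-memory word index standing in for the search engine's vocabulary.

INDEX = ["library", "libraries", "librarian", "librarianship", "libretto", "license"]

def expand_truncated(term, index):
    """Expand a term ending in the '$' truncation symbol into all matching words."""
    if term.endswith("$"):
        prefix = term[:-1].lower()
        return [word for word in index if word.startswith(prefix)]
    return [word for word in index if word == term.lower()]

matches = expand_truncated("Librar$", INDEX)
print(matches)   # ['library', 'libraries', 'librarian', 'librarianship']

# The "truncation overflow" bug: results arrived one batch at a time, and the
# interface presented the first batch as if it were the complete answer.
BATCH_SIZE = 2
first_batch = matches[:BATCH_SIZE]
print(f"{len(first_batch)} results")                                      # buggy report
print(f"{len(first_batch)} of {len(matches)} results (more available)")   # honest report
```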
Of the total programming effort, that is, basic functionality plus the wish list of enhancements, the wish list accounted for approximately 50%. The USCInfo implementation spotlighted the difficulty and cost of maintaining two parallel interfaces.21 Problems in one system delayed the other’s releases. Keeping functionality the same became difficult. Project creep, that is, the previous wish list, coupled with the persistent HyperCard response time issues, software crashes, and HyperCard beta releases (reaching Beta version 34), placed a tremendous strain on the support infrastructure. Linda R. worked with Library Systems and Support to triage systems development responses to the wish list, “exceptions” and/or severe problems, and to develop ad hoc policies and procedures as necessary. Linda R. supervised the day-to-day operation of the system, coordinated the frequent reloads of HyperCard software releases, disseminated “USCInfo bulletins” to library staff, and worked closely with Library Support Services, that is, “Hotline” staff.
For Whom the Bell Tolls
After some 12 months of operation, both the intractable nature of the HyperCard response time and the growth in convoluted programming using “x-commands” (see glossary) forced a decision about the parallel USCInfo systems. The vt100 version, although less innovative, could be used everywhere. HyperCard, once considered promising, had seemingly not achieved operational robustness. Some supporters saw HyperCard abandonment as losing the learning that had gone into the GUI, as well as undermining USC’s leading edge in library technology. HyperCard’s champions made a spirited defense, albeit “too little, too late”, that is, attempting development of a C++ version of USCInfo called “NewInfo”. UCS, strongly opposing further development using either HyperCard or C++, promised 100% programming support for a Unix-based, GUI USCInfo. Jim R. decided to discontinue HyperCard development and remove the interface by the conclusion of the current school year (summer semester). The new emphasis would be on content rather than interface. The USCInfo vt100 version would be a short-term vehicle, while the library pursued development of an X Windows graphical interface longer term. The continuing development of USCInfo was domiciled within the library’s LAD group, until CST subsequently was brought into the library organization and formally merged with LAD.
Preparation of vt100 Version for Library Use
Modifications of the existing vt100 version included those pertaining to printing/downloading choices, command lines, search constructors, the interface, Macintosh “shell” software, and integration with UCS-written software.
The plan is to treat this vt100 revision as a guerilla operation involving a small, top-level group. Please come prepared to work smart. (Team memorandum from Linda R.)
Both the leveraging of the designers’ knowledge base and synergies from the USCInfo Technical Group,22 now domiciled in LAD, began to emerge, as evidenced by the four-month vt100 conversion and implementation.
CURRENT STATUS OF THE PROJECT
Migration to Unix
Planning for USCInfo’s migration to Unix began almost concomitantly with the vt100 conversion effort. Unix advantages included:
• Open systems architecture;
• Less expensive disk space and processors; faster execution;
• Databases distributed across servers, protecting against system failure and bandwidth problems;
• More compatible access to the Internet;
• Potentially compliant with the Z39.50 (ANSI) information-retrieval standard; and
• More flexibility for future system enhancements.
Disadvantages of Unix included a complete rewrite of USCInfo, conversion of all databases and setup files, total reformulation of system maintenance procedures, retraining of both the development and library staffs, and documentation rewriting. In sum, Unix presented new opportunities as well as limitations. Despite a tight 9 1/2-month timeline to design, migrate, implement, and test, the system was completed on schedule. The Unix platform-based USCInfo has run robustly, that is, largely trouble free, and its “look” has remained the same. A substantive innovation was the addition of “gateways”, or links, from USCInfo to external information resources. The growing availability of databases freely accessible on the Internet (Telnet, Gopher, other vendor-based products) has made it easier and cheaper to link to an external system than to invest in the next-generation USCInfo. The concomitant growth in such external information resources has pressured the library to take these beginning linkage steps, although budgeting and staffing for further development of this capability have not kept pace. Linking to these external resources means that the user must learn a variety of non-unified interfaces and techniques. As of this writing, it is clear that USCInfo must be totally redesigned, that is, the innovative character-based interface is now “old news”. Acting Director of Libraries Larry S. recently declared that USCInfo is a concept, not just an operating platform or interface: USCInfo must be understood as what it has become, the campus-wide information system, regardless of how it is structured or administered.
WWW Era
The phenomenal growth of the World Wide Web (WWW) finally brings the hope of a true graphical user interface within reach. Table 6 encapsulates the WWW’s implications for USCInfo. The original vision of one interface across many platforms, a wide selection of resources, ubiquitous access, incorporation of information in all media, and ease of use has now been validated. Through many revisions and architectural changes, the goal has remained clear: to provide the user with information, whenever and wherever needed.
Table 6. WWW interface comparisons

Pros:
• Combines advantages of “home grown” and vendor system; local control of interface AND support of vendor*
• Multiplatform capability
• Ability to display multimedia, for example, images, graphs, audio, video, and so forth
• Far easier interface development and change (current system requires programmer time, compiling, and releasing code at slack time)
• Web technology widely accepted and growing exponentially

Cons:
• User needs high-end machine (whatever platform), Web browser software, Internet connection, and computer skills
• Provision must be made for character-based access to everything now on the system
• Many users not Web-capable, and never will be
SUCCESSES AND FAILURES
Successes
1. USCInfo was established, overcame earlier implementation difficulties, brought additional databases online each year, and has achieved a robust operating level. Throughout, USCInfo has been free of charge to the USC community.
Contributory factors: dedication of both UCS and the library to making the system usable and reliable; a system development team composed of highly knowledgeable “doers” who were results oriented (see Successes that follow, academic environment).
2. Both a streamlined USCInfo development team and the subsequent formalization of CST’s position within the library have enabled synergies and leveraging of the participants’ knowledge base as both technology migrations have occurred and USCInfo functionality has expanded.
Contributory factors: Upon discontinuance of HyperCard USCInfo, shared responsibility for USCInfo was again dual, that is, UCS and the library (through LAD). CST, on an interim basis, was not in the picture. When brought back into the picture, CST was domiciled in the library with a revised and substantially clearer mission. Throughout the post-HyperCard period, the USCInfo Technical Group has remained relatively stable, building close working relationships and facilitating migrations to new technologies as well as expansion of system functionality.
3. Management of the USCInfo project has successfully navigated the academic environment. The shirtsleeve, “doers” modality of the systems development effort described in this case essentially supported, de facto, an “end justifies the means” approach. Collaboration in academe stems, in part, from “for the greater good”. In contrast, team-based collaboration in the corporate world often is formalized through enforceable authority/responsibility boundaries and reviewable mileposts by management.
4. Senior library management enabled the innovative dynamic to go forward.
Contributory factors: Senior library management, recognizing Jim R.’s enthusiasm, intelligence, and drive, entrusted the USCInfo project to him. Jim R. was a manager still growing and proving himself. Library management approved systems development which called for more than 50% of the HyperCard programming to be completed by students. The Homer crisis created a “crash” program, which often generates “stress overload” conditions (see Failures section, decision making under stress overload).
5. The combined OPAC and periodical indexes searching capability with a GUI garnered national attention within library automation literature and professional associations. Other libraries followed the lead, combining OPAC and periodical indexes searching.
Contributory factors: The library was willing to accept the consequences of “the cutting edge is also the bleeding edge”, accepting both the risks and the associated implementation considerations of HyperCard as a network interface.
6. The USCInfo interface is easy to use and assures getting results, that is, retrieval of data rather than error messages.
Contributory factors: User comprehensibility, expectations, and skill levels were thoroughly addressed at each technology migration; screen presentation of results provides a cleaner, simpler screen look (see Failures section, poorly constructed searches).
Failures
1. Selection of the text-based USCInfo interface has limited the kinds of materials accessible through the system. USCInfo has not kept pace with the growth curve in GUIs, that is, current advances in interface design have not been exploited.
Contributing factors: resource and funding constraints.
2. Ease of use of the interface doesn’t offer any protection or warning against poorly constructed searches.
Contributory factors: non-library-based USC access means the bulk of users generally cannot consult library professionals, and hence do not realize that the search results are poor.23
3. Decision making under stress overload often produces unintended consequences of varying magnitude and/or overall impact.
Contributory factors: Jim R. was willing to accept an almost thankless task, that is, “we needed it yesterday”, “monies to fix it are scarce”, and “you are the wagonmaster of this three-horse team”. Both Jim R.’s and Brad G.’s decision-making styles are conceptual/directive, that is, both have a big-picture view, and both want results. When superimposed on the time element of “we needed it yesterday”, these decision-making styles would likely contribute to “stress overload” decision making as the project unfolded.
EPILOGUE AND LESSONS LEARNED
Lessons Learned
1. Innovative processes within an academic environment, whose output is to be a “production” output (not pure research), need appropriate authority/responsibility mechanisms. The particular form of the mechanisms is probably less important than their viability and the processes that enforce them.
2. Reward/reinforcement mechanisms need to be tailored, consistent with both the position of the person within the project and any dual authority/responsibility structures imposed.
3. The relative ages of the project participants (Linda, Tim, Brad, Kurt, and Jim) and their shorter tenures in their positions at USC to some degree contributed both to a “we can do it” spirit and, sometimes, to less reliance on formal evaluation processes and methods in the various system development processes.
4. The basic functionality that had been agreed to was, de facto, a rather encompassing system. What Tim and Brad had originally thought of as a HyperCard prototype mushroomed into development of a much more complete system. Stated differently, “Think bigger longer term, prototype smaller shorter term”.
ACKNOWLEDGMENTS
M. Klempa gratefully acknowledges the help of Lynn Sipe, who was extremely receptive to this project, as well as coauthor Lucy Wegner, who reviewed original documents and materials, drafted earlier manuscripts, responded to varied and numerous requests, and enabled the process in countless ways. Both authors gratefully acknowledge the assistance of the many past and present members of the USCInfo “team” who generously opened their files, shared recollections, and reviewed portions of the manuscript.
REFERENCES
Amabile, T. M. (1988). A model of creativity and innovation in organizations. Research in Organizational Behavior, 10, 123-167.
Bartunek, J. M. (1988). The dynamics of personal and organizational reframing. In R. Quinn & K. Cameron (Eds.), Paradox and transformation. Cambridge, MA: Ballinger Publishing.
Howell, J., & Higgins, C. (1990). Champions of change: Identifying, understanding, and supporting champions of technological innovations. Organizational Dynamics, 19(1), 40-55.
Keon, T., Vazzana, G., & Slocombe, T. (1992). Sophisticated information processing technology: Its relationship with an organization’s environment, structure, and culture. Information Resources Management Journal, 23-31.
Klempa, M. J. (1995a). Understanding business process reengineering: A sociocognitive contingency model. In V. Grover & W. Kettinger (Eds.), Business process change: Reengineering concepts, methods, and technologies. Harrisburg, PA: Idea Group Publishing.
Klempa, M. J. (1995b, October). Management of innovation: Organization culture and organization learning synergistic implications. In Q. Xu & J. Chen (Eds.), Proceedings of Pan-China Symposium on Management of Technology and Innovation, Hangzhou, P.R.C.
Quinn, R. E., & McGrath, M. (1985). The transformation of organizational cultures: A competing values perspective. In P. Frost, L. Moore, M. Louis, C. Lundberg, & J. Martin (Eds.), Organizational culture. Newbury Park, CA: Sage Publications.
Rowe, A. J., & Mason, R. O. (1987). Managing with style: A guide to understanding, assessing, and improving decision making. San Francisco: Jossey-Bass.
Wegner, L. S. (1991, November). Multiple resources with a unified interface. T.H.E. (Technology Horizons in Education) Journal, (Special Issue), 6-12.
Wegner, L. S. (1992, Fall). The research library and emerging information technology. New Directions for Teaching and Learning, 51, 83-90.
Wilkins, A., & Dyer, W. (1988). Toward culturally sensitive theories of culture change. Academy of Management Review, 13(4), 522-533.
ENDNOTES
1. Noncopyrighted portions of USCInfo are available on the Internet at http://library.usc.edu. The University’s home page is: http://www.usc.edu.
2. A heuristic refers to a “rule of thumb” or a “rule of good guessing”. Managers may employ heuristics in order to deal with decision-making complexity.
3. Satisficing means that decision makers choose a solution alternative that satisfies minimal decision criteria, even if better solutions are presumed to exist.
4. Decision-making situations may be ambiguous and have several interpretations, that is, equivocality. Managers reduce equivocality through shared meaning and interpretation.
5. CD-ROM periodical index databases would not be cost effective in a multisite situation.
6. This dual responsibility structure was cemented in a contractual agreement in 1993.
7. UCS is responsible for providing and maintaining all campus networks.
8. Homer was, of course, a Greek poet. USC’s tradition was to use Greek names for programs, teams (Trojans), and the like.
9. Apple Computer marketed itself, in particular, to educational institutions at all levels, K through university.
10. During implementation the machine grant was 126 PCs, mostly SEs.
11. Ford Motor Company touted the Taurus as “skipping several design generations” and being “tomorrow’s car”.
12. This plan never came to fruition. A subsequent university decision moved the entire campus from mainframes to a Unix-based client-server architecture.
13. The library paid for Ethernet connections to wiring closets in 10 library buildings. Use of Fastpath from there was a cost/speed tradeoff; Fastpath’s cost was less.
14. Both the network communication and the mainframe transactions to be coded later in HyperCard were set up as x-commands.
15. Installation of the 126 Macintoshes was phased, with 30 Macintoshes installed as of October and installation of the remaining 96 within 12 months.
16. For example, not until 12 months later were subject headings and browsing capabilities implemented within USCInfo that would permit full conversion of HOMER.
17. The HyperCard stack was very slow to process screen display results from the mainframe, that is, Fastpath is slow, especially when busy. Mainframe response had been upgraded soon after USCInfo implementation.
18. Prior to the 20-records-at-a-time retrieval limit, HyperCard retrieved the entire result. Large sets created long waits; extremely large sets caused HyperCard to overwrite itself when it ran out of memory, thus losing connectivity.
19. Librar$ retrieves library, libraries, librarian, and so forth.
20. The iterative results problem only came to light because the Mac and vt100 versions were set to display different-size partial search results.
21. A third version, that is, remote-user HyperCard USCInfo, was not released until one year later.
22. The former USCInfo Project Planning Group’s name was changed to the USCInfo Technical Group.
23. Broad retrieval requests, for example, “Civil War”, retrieve large data sets, for example, 2,800+ hits. The user may be unable to refine the search, hence not be able to further limit the retrieved set.
Appendix A. Organization Culture Styles (adapted from Quinn & McGrath, 1985)

[Figure: the four organization culture styles (Developmental, Rational, Consensual, and Hierarchical) arranged on axes contrasting internal versus external focus, flexibility versus predictability, and complexity versus simplicity.]

Developmental Culture. In a developmental culture, intuitive information processing (insight, invention, innovation) is assumed to be a means to the end of revitalization (external support, resource acquisition, and growth).

Rational Culture. In rational cultures, individual information processing (goal clarification, logical judgment, and direction setting) is assumed to be a means to the end of improved performance (efficiency, productivity).

Consensual Culture. In the consensual culture, collective information processing (discussion, participation, and consensus) is assumed to be a means to the end of cohesion (climate, morale, teamwork).

Hierarchical Culture. In the hierarchical culture, formal information processing (documentation, computation, and evaluation) is assumed to be a means to the end of continuity (stability, control, and coordination).

USC classifications: Library: Consensual/Hierarchical; UCS: Developmental/Rational; CST: Developmental/Hierarchical.

Note: There is not any one best organization culture style. The four organization culture styles describe ideal style types. An actual organization’s or organization subunit’s culture may more or less closely resemble an ideal type. Style combinations are possible, for example, dyads such as developmental/rational and so forth. Style triads are possible.

Further reading: Quinn and McGrath (1985) and Klempa (1995a, 1995b).
Appendix B. Rowe Decision Styles (adapted from Rowe & Mason, 1987)

[Figure: the four decision styles (Directive, Analytic, Conceptual, and Considerate) arranged in a grid whose labels contrast low versus high complexity, task versus people orientation, left brain versus right brain, and leader versus manager.]

Conceptual Style. This style prefers variation, risk, excitement, and growth, and is future-oriented. Idea generation is often through intuition. Problems tend to be analyzed from a dynamic, longitudinal view. Decision making has a multiple focus. This style is oriented toward creativity, risk, adaptability, and external legitimacy.

Considerate Style. This style is oriented toward feelings. Meaning is discovered through process; the world is known through human interaction. This style emphasizes individual exceptions and spontaneous events. This style values affiliation and emphasizes consideration of the individual.

Directive Style. The directive style prefers short time lines and high certainty, values achievement, and is purposive, relying on a priori logic and general principles or rules. This style makes rapid decisions, with a single purpose and focus, that are final. This style emphasizes action. Individuals often revert to this style under stress overload.

Analytic Style. This style emphasizes gathering large amounts of data so as to undertake a systematic examination, focusing on a maximum or optimal solution.

USCInfo personnel classifications: Linda R.: Considerate; Jim R., Brad G.: Conceptual/Directive; Kurt B., Tim H.: Analytic/Conceptual.

Note: There is not any one best style. If possible, the individual’s style should be matched to the situation. The four styles describe ideal style types. An actual individual may more or less closely resemble an ideal type. Decision style combinations are possible, for example, dyads as well as triads. Many successful CEOs’ style is conceptual/directive. Under conditions of stress overload, individuals generally revert to the directive style.
Appendix C. Questions for Discussion
1. Evaluate the effectiveness of the shared responsibility, Library and University Computing Services, for USCInfo development. Set forth pros and cons as to the manner in which this shared responsibility was implemented. What adjustments would you have made? Why?
2. Given the organization culture styles and organization learning styles of the Library and University Computing Services, discuss the effectiveness of the Library Review Committee and its interaction with the weekly systems development process. Identify possible mechanisms that may have helped moderate the relative lack of technological sophistication of both senior library management and the larger Library staff.
3. Given the HyperCard capability to relatively quickly generate prototype inputs/outputs, discuss methods of monitoring and controlling project creep. Given the dual shared responsibility for systems development, discuss how/why your methods would work.
4. USCInfo development did not use a structured design approach. Given the small-sized PPG team composed of highly knowledgeable and specialized Library, CST, and UCS personnel, compare and contrast the effectiveness of the systems development process used with the structured design approach. Would a structured design approach have improved the result, or would it have “killed the goose”?
5. Given the design trade-offs presented in the case, and the relatively slow growth in user sophistication over time, discuss the choice of HyperCard as the chosen vehicle. Could any of the parallelism problems (GUI and vt100 versions) have been anticipated and/or addressed?
6. The original Ahmanson project’s four principles finally are fully realizable in the mid-1990s. Were the four principles visionary or premature? Discuss in light of very short technology life cycles and evolutionary versus revolutionary technology innovations.
7. Identify circumstances/situations in the case where the 80/20 concept (Pareto’s Law) could have been applied in order to improve development, implementation, or operating circumstances. Be specific in your response. Show how/why 80/20 would have improved the circumstance/situation. Were there any circumstances in the case where the 80/20 concept was being applied?
8. Compare and contrast the technical, economic, scheduling, and operational feasibility of the three choices confronting USCInfo: the all-vt100 approach, the GEAC upgrade approach, and the proposed HyperCard approach.
9. This question has two parts that can be answered either together or separately. Hypothesize that you are currently director of Information Technology for the USC Libraries. The University’s chief information officer has requested an outline of your plans to move the USCInfo system into the new century, specifically considering World Wide Web access, a multiplicity of information resources, a demanding user community, and, of course, inadequate time and budget. You can address this in either or both of the following ways:
a. Technical (systems analysis approach): Given user access to upwards of 100 or more databases, many of which the user knows nothing about, explain or delineate how you would go about fundamental system design so as to: (1) facilitate the novice user’s search process, that is, retrieve useful information without a long, tedious search process; and (2) give the experienced user access to the powerful search capabilities built into the system.
b. Managerial (administrative approach): Many of the past difficulties with USCInfo development stemmed from shortcomings in the administrative structure and systems development process. Consider the following constraints and opportunities and delineate how you would structure the responsibility for and development of the new USCInfo system.
• The system must be operational (if not fully developed) in time for the fall semester (10 months away).
• Any money for new equipment must be taken from current operating funds.
• Each of the three parties remains (UCS, CST, Library), albeit in revised organizational relationships (see case, Successes, #2).
• The level of technological sophistication has risen greatly within the Library. Leadership in the digital library realm is still a major value for the Library, so some risk-taking is acceptable.
Appendix D. Glossary

Champion. Champions are transformational leaders, espousing ideologies and beliefs different from the established order, that is, advocates of change in the organization. Champions contribute to ideological and belief reorientation of individuals in the organization. Championing processes in the organization can be rationally based, “participatively” based, or, alternatively, a renegade process. Further reading: Howell and Higgins (1990) and Klempa (1995a, 1995b).

Informal Organization. The informal organization often serves to foster and nurture creativity in an organization. The members’ collective core values may foster innovation. The informal organization may have reward mechanisms that encourage risk, communication networks that facilitate innovation, and role mechanisms promoting innovation, for example, champions.

Masculine Organization. A masculine organization is “assertive”, less concerned about people. A performance orientation is associated with high masculinity. Organization units characterized by low masculinity would generally be more cooperative and less competitive, that is, less performance oriented. Further reading: Keon, Vazzana, and Slocombe (1992).

X-Command. An x-command enables unconditional branching to a self-contained routine, with program control returned to the original branching point. Inappropriate use of this capability generally results in very “nested” coding.
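As a loose illustration of the x-command entry above: real HyperCard x-commands were compiled externals, so the Python sketch below only mimics the control-flow pattern the glossary describes, with invented routine names. Each "x-command" stand-in branches to a self-contained routine and returns control to the caller; chaining them indiscriminately produces the deeply nested code the glossary warns about.

```python
# Loose, hypothetical illustration of the x-command pattern: each routine is
# self-contained, is invoked from the caller, and returns control to it.

def x_send_query(query):
    return f"sent:{query}"

def x_parse_reply(reply):
    return reply.upper()

def x_format_screen(text):
    return f"[{text}]"

# Disciplined use: one call per step, with results handed along explicitly.
reply = x_send_query("find author=borges")
parsed = x_parse_reply(reply)
print(x_format_screen(parsed))

# "Very nested" misuse: every step jammed into one expression (or, in practice,
# routines branching into routines), which quickly becomes hard to follow.
print(x_format_screen(x_parse_reply(x_send_query("find title=ficciones"))))
```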
Mathew J. Klempa is a consultant in the application of computer information systems. He has taught at the University of Southern California and the California State University. He was formerly a corporate planning officer for a major bank holding company, a systems analyst at IBM, and an operations research analyst at McDonnell Douglas.

Lucy Wegner is interim assistant dean for information technology and director of the Center for Scholarly Technology in the University Libraries at the University of Southern California. Ms. Wegner’s career has focused on developing systems to access and organize electronic information resources. She is a frequent writer and speaker on the library of the twenty-first century, the future of librarianship, and technology innovation in libraries.
This case was previously published in J. Liebowitz & M. Khosrow-Pour (Eds.), Cases on Information Technology Management in Modern Organizations, pp. 132-155, © 1997.
Chapter II
A Case Study of IT Chargeback in a Government Agency
Dana Edberg, University of Nevada, Reno, USA
William L. Kuechler, Jr., University of Nevada, Reno, USA
EXECUTIVE SUMMARY
In 1997 the Nevada Legislature mandated the formation of an IT division for the Nevada Department of Public Safety (NDPS). Prior to this time the 14 separate divisions within the department had carried out their own IT functions. The legislature also mandated that the full, actual costs for the IT Department would be allocated to the divisions on the basis of use, a form of IT funding known as “hard money chargeback”. Complicating the issue considerably is the legal prohibition in Nevada of commingling funds from multiple sources for any project, including interdivisional IT projects. Five years after its creation, there is a widespread perception among users that the IT Division is ineffective. Both the IT manager and the department chiefs believe the cumbersome chargeback system contributes to the ineffectiveness. This case introduces the concept of chargeback, and then details an investigation into the “true costs of chargeback” by the chief of the NDPS’s IT Division.
ORGANIZATION BACKGROUND
The Nevada Department of Public Safety (NDPS) is a state-level government agency responsible for coordinating all state responsibilities to protect the citizens of
the state of Nevada in the United States. Many public safety tasks, such as police, fire, and emergency services, are left to city and county governmental agencies, but other safety-related tasks are the responsibility of the state. Figure 1 depicts the divisions within the department. The short descriptions that follow, of the functions and cultures of some of the individual divisions, will provide context for understanding the effect of the chargeback scheme on the department as a whole.

Figure 1. Description of the divisions within the Department of Public Safety
• Criminal History Repository: Provide NV law enforcement and other agencies with centralized, complete information. (32% of IT budget)
• Parole & Probation: Monitor and enforce offender compliance with the conditions of their community supervision. (20% of IT budget)
• Highway Patrol: Ensure safe, economical and enjoyable use of the highways by enforcing laws, educating the public and alleviating suffering. (19% of IT budget)
• Investigation: Provide criminal investigations, coordinate select law enforcement activities statewide, collect and disseminate information. (9% of IT budget)
• Traffic Safety: Plan and administer highway safety programs. Gather, analyze and disseminate state crash data. (7% of IT budget)
• Fire Marshal: Reduce the loss of life and property from fire and hazardous materials. (5% of IT budget)
• Emergency Response: Protect citizens from effects of hazardous materials while supporting state goal of encouraging industry growth. (2% of IT budget)
• Emergency Mgmt.: Anticipate impact of potential disasters and immediately mobilize a response. (2% of IT budget)
• Parole Board: Render fair and just decisions in parole matters. (1% of IT budget)
• Criminal Justice Asst.: Obtain and administer grant funds from the U.S. Dept. of Justice for programs involving drug trafficking and violent crimes. (1% of IT budget)
• Training: Develop and implement programs to enhance career development within the department. (<1% of IT budget)
• Capitol Police: Provide for the safety of state employees, constitutional officers, and the general public when on state grounds. (<1% of IT budget)
• Director’s Office: Establish policy for the department, direct and control operations of the divisions. (<1% of IT budget)
• Professional Resp.: Conduct investigations into allegations of misconduct by commissioned officers. Provide training to peace officers. (<1% of IT budget)
Criminal History Repository
The Criminal History Repository (CHS) is the largest user of IT within the NDPS (32% of the total budget) by a wide margin. This division collects, categorizes, and stores all the information on criminal activities gathered by any investigative agency in the state. It is also responsible for making this information accessible to all other investigative agencies, both state and federal. This information is stored on a Unisys mainframe computer located in the Public Safety Technology Division (PSTD) offices and maintained by PSTD personnel. Inquiries by any state agency into information repositories outside Nevada are also handled by the CHS. Although this division is primarily an internal service division for the NDPS, managers of this division have historically been sworn officers from one of the NDPS investigative divisions. CHS is funded primarily through fees, which makes it different than the other divisions in the department. While other divisions are funded extensively through state and federal monies, CHS relies on user fees such as business license searches and criminal records searches. For example, a person applying for employment in a casino must provide information to the potential employer about his or her criminal history and this information is generated for a fee from CHS. Since the population of Nevada has been growing rapidly, CHS was able to count on a steady increase of fee revenue. In the wake of the 9/11 tragedy, employment has decreased in many Nevada casinos and CHS is experiencing a downward trend in expected revenues. Thus, funding for CHS has become unpredictable, making it difficult to budget effectively for large, long-term IT projects.
Parole and Probation Division
The Parole and Probation Division is the second largest user of IT. All records of activity of paroled criminal offenders, current and past, are the division’s responsibility. From an IT function perspective, Parole and Probation has a large storage and retrieval requirement, similar to the CHS. Unlike the CHS, most of the information stored by Parole and Probation is used internally by that division to monitor compliance with parole requirements, and secondarily to support the Parole Board in its decision making. The managers of this division have typically not been sworn officers, and many have had advanced degrees in one of the sociological disciplines. This division is supported almost completely by Nevada state general funds.
Highway Patrol Division
The Highway Patrol (HP) Division is the third largest IT user within the NDPS, and serves the same function in Nevada as similarly named organizations in many other states — primarily highway safety law enforcement. However, the HP is one of the largest divisions within the NDPS in terms of overall budget and number of personnel. As with any organization with a large staff, the HP has achieved significant benefit from custom software applications that track personnel development, specifically training and accreditation that in some instances is mandated by state law. The HP is also responsible for a large vehicle fleet, and looks to computerized applications to track maintenance, current location, and users of vehicles. The HP is very functionally oriented and places high priority on the equipment, such as vehicles, used in its day-to-day operations. The organizational structure is a quasi-military hierarchy: officer, sergeant, lieutenant,
captain, and ultimately, chief. Although there is an in-division, non-officer administrative support staff, it is a practical necessity that HP management be sworn officers.
Investigation Division
The Investigation Division of NDPS functions as a support arm to local (city and county) law enforcement, and increasingly coordinates statewide initiatives in drug enforcement and national security. Historically most of the IT cost borne by the division — more than 9% of the total NDPS IT budget — has been for telecommunications and the infrastructure necessary to support the large number of information requests to external repositories. Beyond its use in information collection and retrieval, IT is not salient within the division. Similar in organizational structure to the HP, the division is managed by sworn officers.
Other Divisions
The remaining divisions within the NDPS are small, both in overall budget and IT usage. IT provides mainly help desk, networking, and telecommunications support for these organizations. While the functions of some of the divisions such as Fire Marshal, Emergency Management, and Emergency Response are highly IT dependent in some states, IT has relatively limited participation in these services in the Nevada DPS.
Public Safety Technology Division (PSTD)
Prior to 1997, there was no separate division to handle IT for the Public Safety Department. IT personnel were assigned to functional divisions within the department, and each division was responsible for its own programming efforts. Technology infrastructure, such as networking and mainframe operations, was handled by an information technology group in the Administrative Division. Programmers in the functional divisions reported through a matrix structure to both the technology group in the Administrative Division and the chief of their respective divisions. Creation of the PSTD was approved by the Nevada legislature in 1997 in order to produce better funding source integrity for technology costs and to consolidate information technology costs across the department. There are 33 employees in the PSTD serving 1,252 employees in the Department of Public Safety. The employees in PSTD are broken down as follows: 10 employees in application development, 10 in operations and help desk, 11 responsible for network and systems management, and two in administration. The mission of the PSTD is to: (1) provide technical support and computer resources to criminal justice and public safety agencies throughout the state of Nevada; and (2) provide technical support and resources to the divisions within the department, including local area networks, wide area networks, programming, help desk, field support, and technical planning. Since PSTD is a separate division within the department, the chief of the division is a member of the departmental executive decision-making committee. The executive committee meets monthly to appraise performance of the department and make sure that strategic goals are being met. There is no separate committee to evaluate technology decisions or establish overall strategy for the use of technology within the department.
Most technology strategy is left to the individual divisions; there is no strategic technology plan for the department as a whole. Each division is responsible for purchasing its own PCs and workstations directly from its individual budget, but all other technology-related costs are incurred by the PSTD and then charged back to the division. While the PSTD serves all the divisions shown in Figure 1, the four divisions described above bear about 80% of the total costs of the PSTD.
SETTING THE STAGE
Information technology (IT) managers of government agencies struggle to provide effective service for their users while minimizing costs and coping with a salary structure for their employees that is frequently much less generous than that of their industry counterparts. While a standard industry criticism is that government agencies don’t have to cope with the strenuous pace of change inspired by for-profit competition, management priorities in governmental operations can change with each election cycle, resulting in fluctuations in project emphases that are rare in industry. A government IT manager must be able to deliver much with few resources and an inadequate, frequently poorly equipped staff, in an environment of dynamic management priorities. While this situation is much like what their counterparts deal with in industry, government IT managers are also constrained by arcane budgeting and accounting control mechanisms, which are often a result of both federal and state legislation. In addition, IT managers in government agencies are frequently legally obligated to allow public review of even the most minor decisions and expenditures.
What is a Chargeback System?
An important issue for many government agencies is how best to manage and control IT budgeting and actual expenditures. Some government agencies choose to address this through a form of cost allocation or internal billing system, also referred to as a “chargeback system”. A chargeback system is a way to allocate the costs for delivering IT services directly to the group using those services. Two general forms of cost chargebacks have been described in the literature: (1) a soft money approach that summarizes costs in a memo for informational purposes; and (2) a hard money method that allocates costs to the user area and transfers funds to the IT group from the user area (Lin, 1983). The hard money approach can be used to allocate all costs (fixed and variable), just variable costs, or only those costs that the organization seeks to control. Most organizations tend to allocate all costs (Drury, 2000). An important issue in chargeback systems is determining how much of the total cost of IT to allocate to any one group of users and what kind of pricing mechanism to use for the charges. While some costs, such as programming time, can be directly attributed to a group of users or a specific application, other costs, such as network infrastructure and server support, are more problematic to allocate. Much research has been devoted to determining optimal rates for the chargeback of these costs, with suggestions to use such metrics as amount of processor time, number of applications, number of workstations, amount of generated output (bills, checks, reports, etc.), amount of server access, and bandwidth (Bergeron, 1986; Allen, 1987; Sen & Yardley, 1989; Drury, 1997).
A key point in determining whether any of these systems is perceived as “fair” by those being charged is whether the user group has any real control over the costs incurred (Hufnagel & Burnberg, 1994). Many of the costs associated with IT are relatively fixed over the medium to long-term time period, making it difficult for the user to truly control usage (Ross, Vitale, & Beath, 1999). Even if a portion of the user community were able to stop using the server or network infrastructure, the costs of providing that service cannot be immediately cut, so the result would be higher rates for the ongoing users. One group of users might benefit financially, but the other groups would be hurt, given a system that required all IT costs to be allocated by usage. The organization as a whole would continue to have the same overall expenditure for IT in the short term.
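To put the fixed-cost point in numbers, here is a quick sketch using invented figures (they are not drawn from the NDPS budget): when one group stops using a service whose cost is fixed in the short term, the usage-based rate charged to the remaining users rises, while the organization's total spending stays the same.

```python
# Invented figures, for illustration only: a fixed annual cost recovered
# through a per-transaction rate.

FIXED_COST = 500_000          # e.g., mainframe and network support, fixed in the short term

def rate(usage_by_division):
    """Per-transaction rate needed to recover the full fixed cost."""
    return FIXED_COST / sum(usage_by_division.values())

before = {"A": 400_000, "B": 300_000, "C": 300_000}   # transactions per year
after = {"A": 400_000, "B": 300_000}                  # division C stops using the service

print(round(rate(before), 3))   # 0.5   per transaction
print(round(rate(after), 3))    # 0.714 per transaction: A and B now pay more,
                                # yet the organization still spends 500,000 overall
```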
What are the Benefits and Drawbacks of Chargeback Systems?
Chargeback systems have been lauded as an excellent way to control costs by making them more visible (Allen, 1987). At the same time, they have been criticized as an unfair, time-consuming waste of resources (Stevens, 1986). At the most elementary level, chargeback provides a way to categorize IT costs and allocate those costs back to the responsible user group. At a more complex level, chargeback is a way to modify the behavior of the consumers and producers of IT. By making the costs visible, the producers of IT could be more accountable to the user community, producing greater communication between IT and users. On the other hand, chargeback could also make the relationship more contentious if users believe that the charges aren’t “fair”. Chargeback, by making costs immediately salient, might prematurely discourage IT usage, possibly leaving unexplored creative methods of facilitating operations through the use of technology. On the other hand, having visible costs allocated might motivate users to carefully scrutinize proposals for automation, thus making better use of all resources (Ross et al., 1999). Table 1 summarizes the primary benefits and drawbacks of using chargeback systems for IT cost allocation.

Table 1. General benefits and drawbacks of chargeback systems

Benefits of chargeback systems:
• Encourages constrained use of scarce IT resources.
• Encourages greater communication between IT professionals and the user community.
• Makes IT costs visible and thus more controllable.
• Places responsibility for IT costs on the user group benefiting from those costs.
• Forces IT to be more accountable to the user community.
• Identifies applications with a specific owner in the user community.

Drawbacks of chargeback systems:
• Discourages experimentation with IT solutions to business problems unless the solution is immediately viewed as cost effective.
• Creates an adversarial relationship between IT professionals and the user community, forcing them to communicate almost exclusively on the cost of IT rather than the results of IT.
• Requires time-consuming budgeting and collecting of IT costs.
• Requires estimates of allocations that are frequently perceived as unfair by the user community.
• Creates accountability for dollars rather than delivered solutions.
• Discourages the development of integrated, shared applications.

How Were the Chargeback Cost Categories Determined?

The Nevada Department of Public Safety Information Technology Division began using a hard money chargeback system for full cost allocation at its inception in 1997. The PSTD chief worked for the Department of Public Safety prior to 1997 as the manager of data processing when that function was part of the Administrative Division. He supported the development of a separate division and helped craft the structure of the new division. It was also his responsibility to determine a method of allocating IT costs to the other divisions in order to preserve the integrity of funding sources. He worked with budget analysts from the legislative and executive branches of the state government, as well as an outside consulting firm, to determine a method that was technically feasible while also being as accurate as possible. The resulting study (referred to as the Maximus Report after the consulting group) divided costs into three categories:
• Development and Programming: User divisions are charged an hourly rate for programmer/analyst time to develop and maintain systems. These costs represent about 20% of the overall IT budget and are the most variable of the costs.
• Networking: User divisions are charged per PC and workstation for software and services necessary to support the LANs. These costs are about 30% of the IT budget.
• System Support: User divisions are charged by the number of input/output accesses they make to the law enforcement system. These charges cover the cost of the law enforcement message switch, which includes mainframe hardware/software, telecommunication costs, operation analysts, and help desk support. These costs are about 50% of the IT budget and represent primarily fixed costs.
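As a minimal sketch of how a hard money allocation under these three categories might be computed: all rates, usage figures, and per-division numbers below are hypothetical, and the actual Maximus formulas are more involved than this.

```python
# Hypothetical sketch of a hard-money chargeback calculation using the three
# Maximus categories; rates and usage figures are invented for illustration.

HOURLY_RATE = 60          # development and programming, per programmer/analyst hour
PER_WORKSTATION = 900     # networking, per PC or workstation per year
PER_TRANSACTION = 0.02    # system support, per input/output access

usage = {
    # division: (programming hours, workstations, law-enforcement system transactions)
    "Criminal History Repository": (1200, 80, 9_000_000),
    "Parole & Probation":          (800, 150, 2_500_000),
    "Highway Patrol":              (600, 300, 1_800_000),
}

def chargeback(hours, workstations, transactions):
    """Return the charge for each cost category for one division."""
    return {
        "development": hours * HOURLY_RATE,
        "networking": workstations * PER_WORKSTATION,
        "system_support": transactions * PER_TRANSACTION,
    }

for division, figures in usage.items():
    charges = chargeback(*figures)
    print(division, charges, "total:", sum(charges.values()))
```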
The Maximus Report represented a highly visible and significant amount of cost and effort to the parties involved, and as a result has achieved a quasi-legal status within the Nevada Legislative Budget office. The PSTD chief discovered that the recommendations of the Maximus Report could be revised only through another study of equivalent stature and depth, and that another such study was unlikely in the near term.
How is the Department of Public Safety Funded?
The NDPS is funded from a variety of different sources, and legislatively those sources must be budgeted and used separately. For example, the Highway Division is funded primarily through highway funds. Those funds must be used only for highway projects, as laid out by law through the Nevada Revised Statutes, and any attempt to use those funds to support other divisions in the Department of Public Safety would be illegal. The four major funding sources for the department are highway funds, general state funds, federal grants, and court assessment fees. The only funding source without
legal restrictions is the general state fund, and that fund is watched closely. The legislature participates actively in all decisions regarding general state funds through its biennial line-item budget review process.
What are the Special Issues for NDPS Divisions?
There are three interrelated non-economic issues that are unique to IT management in the arena of governmental public safety, an understanding of which is essential to a full appreciation of this case. These extraordinary issues also provide a context for understanding the reactions of the individual stakeholder departments to the management and budgeting of information technology. They are: (1) sworn vs. non-sworn personnel categories; (2) a short-term, incident, and function focus for management; and (3) deliberate rotation of management personnel. Sworn personnel are badge- (and frequently gun-) carrying law enforcement officers who are formally charged with upholding the laws of the state. The NDPS culture draws a considerable distinction between sworn and non-sworn personnel; the primary job of NDPS divisions such as HP and Investigation is enforcement, and administrative personnel and functions, including IT, exist solely to support the primary function. Sworn and non-sworn employees fall under different pay scales, with sworn personnel generally receiving higher pay and better benefits. All higher management positions in the NDPS divisions that use sworn personnel are held by sworn personnel who have risen through the ranks of action-oriented law enforcement. The culture is one that emphasizes action over reflection, and rewards short-term, field-level problem solving. An effect of the action orientation that impacts IT is the notable lack of administrative training for sworn department management. Most managers receive only informal on-the-job training as they are promoted to higher positions. Frequently an officer assumes a position from which the prior manager has been quickly moved across the state or to another division, and thus the new manager may receive only a minimal briefing from his predecessor. As a result, though they understand the functional side of their jobs very well, sworn division managers can be less than optimally effective in long-range administrative duties. Exacerbating the difficulties with administrative management is the habit of moving top division managers to other positions after relatively brief tenures. It is not unusual for an NDPS division to have had three senior managers in the course of two years. Such frequent shifts of senior management bring shifts in attitudes toward IT and toward existing IT projects, as the details of the case illustrate.
CASE DESCRIPTION
After five years using a chargeback system, the PSTD chief began to wonder if there might not be a better way to account for IT costs. While preparing for a new legislative session and spending weeks gathering the data required to justify his budget, he decided to investigate the “true cost” of a chargeback system for the PSTD and whether there might be a better way to manage the costs of technology within the Department of Public Safety. Some of the “hidden costs” that he suspected were incurred by the chargeback process were: excessive IT planning and budgeting time, excessive user department planning and budgeting time (for IT), and inefficiency costs resulting from the deferral
or cancellation of projects due to the mid-budget IT rate fluctuations that are inevitable under current chargeback rules. The PSTD’s chief believed that charging back costs to the divisions was an expensive and time-consuming process. PSTD personnel spent approximately 375 hours a year gathering the information they required to help forecast the needs of the other divisions. While the division chief believed this was an excellent planning mechanism, he felt that much time was spent looking at projects that would not come to fruition because of budgeting constraints. For some divisions, a greater percentage of PSTD time was spent planning than actually producing IT work for the division.

He also found it difficult to accurately define a system support allocation to the departments. This was the largest portion of the budget (50%), was a relatively fixed cost, and was required to support all users. Allocating system support became especially problematic when trying to determine which funding source to charge, since not all the system transactions (the billing basis for this cost) from a department were directly related to that single department. The PSTD chief’s last major concern was the difficulty of developing integrated, department-wide applications. One of the primary needs of the department was to have access to information across division lines, yet funds had to be committed from different divisions, and from differing funding sources in those divisions, to produce the applications that would generate that information.
What is the Process for Budget Allocation?
All divisions in the Department of Public Safety submit formal budgets to the Nevada State Legislature, which meets biennially. Every year, all divisions review budgets and update them as necessary. The PSTD takes advantage of this yearly opportunity to review the objectives of each division and determine whether the use of information technology is strategically aligned with the objectives of the division. PSTD personnel spend about 15-30 hours with each division on this process, which encompasses these steps:

• Interview and evaluation: PSTD personnel interview a pre-determined IT coordinator from each division to identify the mission and objectives for the division and review all outstanding and potential IT projects. PSTD personnel ask specific predefined questions about the division’s use of, and satisfaction with, IT services over the past year.
• Create project requests: PSTD personnel create a project request form for each project identified in the interview. High-level specifications are defined in order to determine the amount of time each project will take.
• Prioritize projects: PSTD personnel work with the divisional IT coordinator and chief to prioritize the identified projects. Maintenance projects and applications that result from legislative mandate are always the first priority. Prioritization is an iterative process: as more information becomes available about the projected budget, the coordinator and PSTD personnel work together to cut projects and focus resources to work within the budgetary constraints.
• Summarize amounts: PSTD analyzes the projects and determines what level of system support and networking will be necessary to supply the required systems. System support and networking costs are then allocated to each division using past-year percentages, as sketched below.

Estimated programming hours for the prioritized projects are finalized, and each division chief is sent the breakdown of costs in the three chargeback categories discussed earlier. These amounts are summarized into a total, which is included as a line item in each division’s budget. Each division’s budget (including PSTD’s budget) is submitted to the legislative budget committee, which evaluates the requested amounts and frequently cuts a percentage point or two. As the cuts are made, the PSTD budget is modified to reflect the changes. All IT costs from all other divisions must be accounted for in the PSTD budget. Budget allocation is an iterative process requiring personnel from PSTD and user divisions to work together to refine a budget that will be acceptable to the Nevada legislature.

To determine the estimated costs for budgeting, the PSTD division chief uses varying units of measure depending on the required task. Table 2 presents examples of common units used during budget calculations. As an example of the accounting done within the budgeting process, Table 3 shows some of the budgeted amounts for the Parole and Probation Division for Fiscal Year 2004. Amounts are categorized by both project and type of use, based on the unit of measure specified for the given area. The estimated per-unit cost is based on historical information and anticipated yearly costs.
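To make the “summarize amounts” step concrete, the sketch below shows one way a fixed system-support pool could be spread across divisions in proportion to past-year usage. It is a minimal illustration only: the division names, transaction counts, and pool size are hypothetical, and the case does not specify the exact arithmetic the PSTD used.

```python
# Hypothetical sketch of allocating a fixed support pool by past-year usage
# percentages; all figures are illustrative, not taken from the case.

def allocate_support(pool: float, usage_by_division: dict) -> dict:
    """Split a fixed support cost in proportion to each division's past-year usage."""
    total = sum(usage_by_division.values())
    return {name: round(pool * used / total, 2) for name, used in usage_by_division.items()}

if __name__ == "__main__":
    last_year_transactions = {              # hypothetical transaction counts
        "Criminal History Repository": 400_000,
        "Highway Patrol": 250_000,
        "Parole and Probation": 120_000,
        "Investigations": 30_000,
    }
    for division, share in allocate_support(1_000_000.0, last_year_transactions).items():
        print(f"{division}: ${share:,.2f}")
```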
Table 2. Charged services, billing units, and service category

Service | Unit of Measure | Category | Estimated Unit Cost
Programmer | Per Hour | Development | $80/hour
DBA (database administrator) | Per Hour | Development | $125/hour
PC/LAN Technician | Per Hour | Support | $50/hour
Help Desk | Per Workstation | Support | $2/workstation
Mainframe Usage | Per CPU Minute/Month | Support | $.25/minute
Disk I/O | Per 1,000 | Support | $.05/operation
VPN (virtual private network) | Per Connection/Month | Network | $2.50/connection
Table 3. Partial budget for FY 2004 for Parole and Probation

Project Description | New Development: Budget UOM | New Development: Estimate $ | Mandated/Maintenance: Budget UOM | Mandated/Maintenance: Estimate $
Strategic Planning Interviews | | | 30 Hours | $2,400
Sex Offender Maintenance | | | 202 Hours | $16,160
Parole Board and Prison Interface | 404 Hours | $32,320 | |
Web Site Development | 270 Hours | $21,600 | |
Mainframe Services | | | 8,000 CPU Minutes | $24,000
VPN | | | 235 Connections | $7,050
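The Table 3 estimates appear to follow mechanically from the Table 2 rates, as in the sketch below. Hourly items are simply units times the hourly rate; the mainframe and VPN figures only match if the monthly rates are annualized, which is an inference from the published numbers rather than a rule stated in the case.

```python
# Reproducing the Table 3 line items from the Table 2 rates. Annualizing the
# monthly units (multiplying by 12) is an assumption inferred from the figures.

PROGRAMMER_RATE = 80.00       # $/hour (Table 2)
MAINFRAME_RATE = 0.25         # $/CPU minute per month (Table 2)
VPN_RATE = 2.50               # $/connection per month (Table 2)

def line_item(units: float, rate: float, months: int = 1) -> float:
    """Budget estimate for one line item: units * rate, annualized if monthly."""
    return units * rate * months

print(line_item(404, PROGRAMMER_RATE))        # 32320.0 -> $32,320 interface project
print(line_item(270, PROGRAMMER_RATE))        # 21600.0 -> $21,600 Web site development
print(line_item(30, PROGRAMMER_RATE))         # 2400.0  -> $2,400 planning interviews
print(line_item(202, PROGRAMMER_RATE))        # 16160.0 -> $16,160 maintenance hours
print(line_item(8_000, MAINFRAME_RATE, 12))   # 24000.0 -> $24,000 mainframe services
print(line_item(235, VPN_RATE, 12))           # 7050.0  -> $7,050 VPN connections
```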
Since the PSTD is not a profit center, the actual costs for the division must balance to zero at the end of the year, with all costs allocated to the user divisions. As an example of how this budgeting policy can produce fluctuating charges, consider the following representative illustration: 4,888 programmer hours were budgeted in total for Parole and Probation for FY 2001, at a charge estimated at the beginning of the year at $70 per hour. Assume all of the hours for Parole and Probation were expended; however, several other large IT user divisions cut back on projects during the year, reducing total PSTD chargeable hours by 10%. The total salary load for the PSTD during the fiscal year remained constant, so charges per hour had to increase by 10% to $77 per hour, because PSTD is not able to quickly fire or hire programmers to compensate for changes in other divisions’ use of programmer resources. Thus, at year end, Parole and Probation faced an unanticipated over-budget increase in programming charges of 4,888 hours * $7/hour = $34,216.

After the budget has been approved by the legislature, division chiefs use the finalized document to guide their decisions and financial expenditures throughout the year. Budgets and actual expenditures from PSTD reports are reviewed quarterly by both legislative and governor’s office budget analysts to guarantee that legislative decisions are upheld, funding source integrity is maintained, and the governor’s strategic objectives are supported. Any deviation from the original budget over $1,000 must be approved by both the legislative and the governor’s budgeting committees.
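Returning to the FY 2001 illustration above, the sketch below reproduces the arithmetic behind the surprise charge. Note that the case rounds the adjustment to a flat 10% ($77 per hour); recovering a truly fixed salary load over 10% fewer total hours would actually imply roughly $70 / 0.9, or about $77.78 per hour.

```python
# Year-end chargeback rate adjustment, following the rounding used in the case.

budgeted_hours = 4_888        # Parole and Probation programmer hours, FY 2001
budgeted_rate = 70.00         # $/hour estimated at the start of the year
rate_increase = 0.10          # other divisions cut total chargeable hours by 10%

year_end_rate = budgeted_rate * (1 + rate_increase)             # $77.00/hour
over_budget = budgeted_hours * (year_end_rate - budgeted_rate)  # 4,888 * $7 = $34,216

exact_rate = budgeted_rate / (1 - 0.10)   # ~$77.78/hour if the salary load is truly fixed
print(f"year-end rate: ${year_end_rate:.2f}/hour (exact recovery would be ${exact_rate:.2f})")
print(f"unanticipated over-budget charge: ${over_budget:,.0f}")
```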
What is the Process for Reviewing IT Development?
The development, implementation, and project review processes followed by PSTD and its consumer divisions follow a conventional structured systems development life cycle (Marakas, 2001). Every division chooses an “IT liaison”, who serves as both the requirements designator for custom development projects and the IT infrastructure coordinator. As projects progress, the division IT liaisons participate most heavily in reviews, testing, and implementation.

The PSTD chief found that some of the tension introduced by the chargeback system was evident in the review process, because the constant visibility of IT costs made divisions extremely sensitive to billings for IT analyst time, which show up in monthly statements from IT. Relations with the PSTD had reached the point where the division chiefs attempted to minimize meetings with IT. For those meetings that were agreed to be necessary, division chiefs frequently and explicitly asked that the number of IT personnel in attendance be minimized. “How many analysts does it take to figure this out?” one of them asked. In response, the PSTD chief tried to foster understanding of the specialized nature of IT expertise, indicating, for example, that a project may require a database specialist, a senior systems architect, and the primary programmer for the project to attend a meeting for the most effective understanding of requirements. For the most part this effort had been unsuccessful. The chargeback system became the scapegoat for this contention, since division chiefs did understand the mandate to account for all IT costs, and most were to some degree sympathetic to PSTD’s need to satisfy the mandate.

The Technical Affairs Committee (TAC) is a mechanism that evolved within the NDPS for the department-wide tracking of projects, infrastructure developments, and knowledge sharing between IT and its user divisions. TAC was created to serve as an overall steering committee for planning and coordinating information systems throughout the department. The Committee consisted of the chief of PSTD, selected IT personnel,
the IT liaisons from all divisions, and other divisional personnel as required. TAC meets once a month. The PSTD chief recalled that when the Director of the NDPS first mandated the TAC, the meetings were attended by the chiefs of the divisions. However, within a span of a few months, the chiefs delegated TAC matters to their seconds-in-command or their IT liaisons.
What are Stakeholder Reactions to PSTD and Chargeback?
Attitudes toward information technology in general, and toward the PSTD specifically, vary considerably among divisions. This section provides details on the approaches to IT and its management in the four largest IT user divisions within the NDPS. It concludes with a summary of NDPS division managers’ attitudes toward the chargeback system itself, notably one of the areas of widespread agreement on IT between divisions.
Parole and Probation
The Division of Parole and Probation was headed by a professional penologist who had a master’s degree in sociology.1 Parole and Probation is the largest IT user division not headed by a sworn officer. The chief had been head of the division for six years and felt the performance of the PSTD could be improved, primarily through better justification of costs. His division was being charged for a full-time programmer by PSTD, and the chief was unsure whether or not higher value could be obtained by the use of commercial off-the-shelf programs or by outsourcing development. Although he felt the performance of the systems developed for him was good, he wanted a third party (auditor) to investigate costs relative to alternatives. Through national conferences on parole and probation, he had become aware that equivalent divisions in other states were successfully using commercial software packages. His suspicion that in-house development was inefficient predated the establishment of the PSTD; he recalled “several bad experiences” with costly projects when software development was handled by a division of Administrative Services. He disliked the budgeting process, as he understood it, because it caused radical shifting of funds at year-end, as all accounts across the department and between divisions were forced to zero. The prior year, he had been surprised after the end of the fiscal year by a $31,000 over-budget amount in IT charges.
Criminal History Repository
The CHS is the largest user of IT within the NDPS and its chief voiced the most criticism of the PSTD. The CHS chief was a sworn officer who had been transferred to the position only nine months prior to the PSTD chief’s chargeback cost investigation; his prior assignments had been primarily field investigations and he expressed some discomfort over this desk assignment. However he had reviewed all prior in-progress IT projects in considerable depth and understood their intended functions quite well. Within the Records and Identification Bureau, there is a perpetual backlog of information to be entered into the systems, and automation was considered a key component in relieving the backlog. The CHS chief had attended several conferences, which had made him aware of the potential benefit of IT to his division. However, he felt that the amount
of time he spent performing oversight on IT projects, approximately 25%, was excessive. He expressed the opinion that as the largest user by dollars of IT services, his projects deserved a higher priority than they received.2
Highway Patrol
At the time the PSTD chief began the chargeback study, the HP chief was the third sworn officer to hold that post in a span of two years. Like most division chiefs, he was dissatisfied with the performance of the PSTD, but also like most, he attributed the perceived ineffectiveness to the effects of the chargeback system, which he understood made it extremely difficult for the PSTD to staff effectively. The large size of HP made it possible for the division to initiate in-division IT projects, implemented by sworn officers who had knowledge of computer systems and an interest in developing information systems. This created a significant jurisdictional problem and was an additional source of contention between the PSTD and the HP. Just as in the many documented industry cases of software projects developed outside the formal IT structure (Guimaraes, 1997), the HP internal systems raised the following issues:

• Responsibility: Who would assume the support and long-term maintenance of the system once developed?
• Quality: Would the system be developed and documented to PSTD standards?
• Interactions: Would the system integrate correctly with the existing network infrastructure? Would the system create security problems for the department?
• Resource allocation: If the resources given to the projects had been allocated to PSTD, could the systems have been developed more efficiently and within departmental IT specifications?
PSTD brought the independent systems and the potential for significant problems to the attention of NDPS senior management. HP managers contended that the projects were vital and could be completed more quickly in-house. Following a considerable number of memoranda and meetings, the HP chief decided to remove the sworn officers from software systems development and return them to field positions. Negotiations are now underway between the HP, the state budget office, and the PSTD to have one or more positions in the PSTD dedicated to and paid for by the HP, while supervised by the PSTD.
Investigations
The chief of the Investigations Division was a sworn officer who delegated the majority of the responsibility for IT to one of his subordinates, also a sworn officer. The chief was aware that IT was a key component for effectively managing his division, and mentioned specifically the increasing number of mandated reports that were very costly to compile manually, notably, arrests per month, and details on new cases per month and per year. The chief hoped to see more casework entered online, and wanted personnel accreditations and training tracked by computer. The Investigations lieutenant to whom budget and IT details had been assigned by the chief was, like personnel in several other divisions, apologetic about his “desk job”. “I used to work for a living”, he commented, alluding to prior field positions. When asked about a long-range IT plan for his division, the lieutenant indicated that the high visibility
inter-division IT project called IRIS served in lieu of a formal plan since, as he understood the project, virtually all the divisions’ information would be entered into IRIS and the system would in turn produce all necessary reporting, both internal management reports and mandated inter-departmental reports. Like many in the NDPS, the lieutenant felt the cumbersome IT budgeting process (the chargeback process) was partially responsible for shortcomings in PSTD performance. The burden it created was seen to lie within the PSTD, however, as Investigations spent very little of its own resources on IT budgeting. “The PSTD division chief tells us what to spend” was one comment. Later this was clarified as meaning that after PSTD personnel talked with the Investigations IT liaison concerning IT needs at the start of the budget process, PSTD would return with a cost figure for the division for the indicated requirements. The Investigations Division did not have the expertise to question the cost, and so accepted it without comment.
Common User Reactions to the Chargeback Process
Despite notable differences in their perceptions of the PSTD and in the utility of IT for their functional units, the user divisions were consistent in their reactions to the current IT budgeting and chargeback process:

• The process was cheap. Budgeting and chargeback allocations were very inexpensive to the user departments. Most of the onus for developing the budget and performing actual chargeback allocations was on the PSTD.
• Time spent budgeting yearly was wasted. The user divisions believed that most of the projects that the IT coordinators identified as important would be cut back because of limited IT resources, so it was a waste of time for them to perform extensive yearly planning. “I think they (PSTD) could use their time more wisely, with fewer meetings and fewer people in meetings” was a frequent comment from the user divisions.
• There was no user control. The user divisions felt they had control over only a small amount of the actual expenditures (development and programming time), so they didn’t oversee or manage the remaining IT costs closely. “PSTD always gets paid first, and they get paid what they tell us to pay, so I don’t see it as any kind of expense I can control” was a comment from one division chief.
• IT prices and services were abstract. None of the IT coordinators or division chiefs felt capable of evaluating the effectiveness of IT services. The user stakeholders consistently said that they were uncomfortable trying to oversee a process and product of which they had so little knowledge. “They tell me it will take five years and cost $1 million. How do I know if that is a fair price? It might be, but I have no way of checking it out without having it cost me even more money just to check it out” was an observation from a division chief.
The user divisions’ stakeholders uniformly believed it would be better to allocate a certain dollar amount to PSTD and then make the division accountable for its performance, rather than its budgeted dollars. A common reaction was that everyone wanted more technology capabilities and wanted more information available for decision making, but didn’t want to have to actually see the price tag for that technology and information. They certainly didn’t want those dollars transferred from their budgets to
the budget of PSTD. To each of them, receiving those services from PSTD seemed extraordinarily expensive, but they weren’t aware of market prices for similar services and didn’t have ideas about how to comparison shop for information technology.
Is Chargeback an Effective IT Management Tool for NDPS?
Using a chargeback cost allocation scheme has benefits and drawbacks for the Department of Public Safety specifically related to the objectives for initially installing the system. Table 4 describes the primary objectives for the system and then summarizes the benefits and drawbacks currently experienced within the department.
Table 4. Benefits and drawbacks of chargeback system for NDPS

Objective: Protect funding source integrity
Benefit: System relates costs back to appropriate accounts.
Drawback: Requires estimates of resources that may not be completely accurate.

Objective: Provide greater visibility of IT costs
Benefit: Division chiefs are aware of the funding required to support IT.
Drawbacks: System does not make individual costs, such as help desk and troubleshooting, more visible. Most costs are perceived as overhead, or fixed. Division chiefs did not know whether they were getting their money’s worth, so providing more information about the financial outlay was not helping their understanding.

Objective: Enhance level of communication between PSTD and other divisions
Benefit: Forces PSTD to communicate with other divisions about resources required to complete projects.

Objective: Create better accountability for IT projects
Benefits: A well-documented planning process is in place for all IT projects. Each project is accounted for completely down to the last penny.
Drawbacks: Other metrics, such as project duration and value to the organization, are less visible due to the high stress on chargeback. Actual success/failure of projects is less visible.

Objective: Enhance the ability to plan IT projects
Benefit: System forces PSTD to plan projects for an annual cycle.
Drawbacks: Projects are planned only within division; there is no cross-division planning encouraged by the system. System has created a short-term mentality for planning. Cost accountability replaced the use of a departmental IT steering committee.

Objective: Increase the efficiency of PSTD
Benefit: IT is held accountable for the funds it spends.
Drawback: PSTD must spend hours accounting for funds.

CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

At this time, both the PSTD chief and the PSTD user divisions agree that IT services need to be improved.
Table 5 lists the challenges to PSTD effectiveness and, for each, the impediments and constraints to overcoming those challenges. At a higher level, the major challenge facing the PSTD today is implementing the high-level policy changes that will enable these issues to be successfully addressed.

Table 5. Challenges to IT effectiveness within the NDPS

Challenge: Difficult to implement department-wide projects
Underlying issues: Most divisions are funded by distinct revenue sources related to their function, and commingling of funds is prohibited by law. It is difficult to estimate the percentage use of a system that isn’t even developed, much less installed and used.

Challenge: Integrated systems are necessary for management reporting
Underlying issues: In addition to the impediments associated with department-wide projects, this challenge faces problems in prioritizing projects for integrated systems. Each division chief has to buy into the idea of creating an integrated system and provide funding resources. It also requires long-range planning.

Challenge: Abrupt cancellation of projects
Underlying issues: Some divisions are dependent on fluctuating fees and grants, and cannot predict the availability of funds. PSTD cannot give a completely accurate dollar amount for users until the actual usage rates are determined. Since usage rates fluctuate but the dollars are relatively fixed, different divisions may become responsible for a larger percentage of the costs than originally budgeted. Thus, users may be surprised at the expense and cancel any projects they possibly can to save funds.

Challenge: Functional, event-driven managerial culture
Underlying issues: Resources for managerial training are not available. Sworn officers are usually considered better candidates for managing other sworn officers. Short-term planning is reinforced by the biennial budget cycle.

Challenge: Frequent turnover of management at all levels
Underlying issues: Management turnover is considered a way to keep sworn officers “fresh” in the culture of the department.

Challenge: Funding source integrity must be maintained
Underlying issues: The Nevada legislature is conservative and unwilling to lightly pursue change. Nevada legislature analysts require a costly study to modify the current chargeback system.

Some of the challenges from Table 5 merit a final discussion. Beyond general dissatisfaction with PSTD, the most critical issue faced by the department is the inability to integrate data and provide salient information for departmental administration. The deputy director of the NDPS expressed the view that, “Examining data across the division is the single most important thing we can’t do”. User divisions were cognizant of the scarce resources provided to IT, and believed this to be a key cause of service problems.

One technique that the PSTD chief considered to eliminate radical end-of-year rate adjustments was to keep billings at a constant rate for each year, correcting over- or underestimates by imposing higher or lower fixed rates the following year. Unfortunately, the state legislature currently forbids accounts that do not balance to zero at the end of a year, unless each such account is specifically authorized by the legislature.
This makes a practice that could be helpful, and that is extremely common in industry, very difficult to implement.

In summary, the perceived immediate causes of PSTD ineffectiveness appeared to the PSTD chief to be immutable constraints in the state government environment: funding for the NDPS divisions is unlikely to change, and the need to account for IT costs by fund is required by law. Further, the basic culture of the NDPS (or of any state public safety department) is deeply rooted in the primary functions of the department. Thus, the chief must find methods to improve IT service delivery while operating under these constraints, a significant challenge indeed!
REFERENCES
Allen, B. (1987, January-February). Make information services pay its way. Harvard Business Review, 57-63.
Bergeron, F. (1986). Factors influencing the use of DP chargeback information. MIS Quarterly, 10(3), 225-237.
Drury, D. H. (1997). Chargeback systems in client/server environments. Information and Management, 32(4), 177-186.
Guimaraes, T. (1997). The support and management of user computing in the 1990s. International Journal of Technology Management, 14(6), 766-788.
Hufnagel, E., & Birnberg, J. (1994). Perceived chargeback system fairness: A laboratory experiment. Accounting, Management and Information Technology, 4(1), 1-22.
Marakas, G. M. (2001). Systems analysis and design: An active approach. Englewood Cliffs, NJ: Prentice-Hall.
Ross, J. W., Vitale, M. R., & Beath, C. M. (1999). The untapped potential of IT chargeback. MIS Quarterly, 23(2), 215-237.
Sen, T., & Yardley, J. A. (1989). Are chargeback systems effective? An information processing study. Journal of Information Systems, 92-103.
Stevens, D. F. (1986, July). When is chargeback counterproductive? EDP Performance Review, 1-6.
FURTHER READING
Drury, D. H. (2000). Assessment of chargeback systems in IT management. Infor, 38(3), 293-313.
Holmstrom, B., & Milgrom, P. (1994, September). The firm as an incentive system. The American Economic Review, 972-991.
Nolan, R. (1977). Effects of chargeout on user/manager attitudes. Communications of the ACM, 20(3), 177-185.
Olson, M., & Ives, B. (1982, June). Chargeback systems and user involvement in information systems: An empirical investigation. MIS Quarterly, 47-60.
ENDNOTES
1. As of this writing the Parole and Probation chief had taken another position within the state.
2. As of this writing the head of CHS had taken another job, outside the state administration.
Dana Edberg is an associate professor of information systems and the chair of the Department of Accounting and Information Systems at the University of Nevada, Reno (USA). She has a PhD in management information systems from Claremont Graduate University. Prior to joining UNR, she performed software engineering project management in government and industry. The major focus of her research is on the long-term relationship between users and information systems developers, but she has also published articles discussing the virtual society and global information systems. She has published in the Journal of Management Information Systems, Information Society, Information Systems Management, Government Finance, and other international conferences and journals.

William L. Kuechler, Jr., is an assistant professor of information systems at the University of Nevada, Reno (USA). He holds a BS in electrical engineering from Drexel University and a PhD in computer information systems from Georgia State University. His 20-year career in business software systems development provides insights for his research projects, which include studies of inter-organizational workflow and coordination, Web-based supply chain integration, and the organizational effects of inter-organizational systems. He has published in IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Communications, Decision Support Systems, Information Systems Management, Journal of Electronic Commerce Research, and other international conferences and journals.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 522-539, © 2004.
Chapter III
The Politics of Information Management

Lisa Petrides, Teachers College, Columbia University, USA
Sharon Khanuja-Dhall, Teachers College, Columbia University, USA
Pablo Reguerin, Teachers College, Columbia University, USA
INTRODUCTION
Developing, sharing, and working with information in today’s environment is not an easy task. With today’s technological advancements, the management of information appears deceptively easy. However, building and maintaining an infrastructure for information management involves complex issues, such as group consensus, access and privileges, well-defined duties, and power redistribution. Furthermore, higher education institutions are continuously faced with the need to balance the politics of information sharing across departments, whether the administration operates in a centralized or decentralized manner. The need to develop, share, and manage information in a more effective and efficient manner has proven to require a challenging shift in the norms and behavior of higher education institutions as well. This shift does not have as much to do with the actual use of technology as it does with the cultural environment of the institution. Davenport (1997) notes:

Information cultures determine how much those involved value information, share it across organizational boundaries, disclose it internally and externally, and capitalize on it. (p. 35)

Depending on the history, people, and cultural environment, each organization faces its own dilemmas around the task of compiling and sharing information.
This case details one institution’s attempts, at a departmental level, to develop an information system for planning and decision making. It looks at the department’s effort to manage and track students and to design a management tool that would help departmental faculty to function more effectively. It examines the challenges faced in managing information and the behaviors that drive new information management processes with the increased use of technology.
CASE QUESTIONS

• Whose responsibility is it to lead information systems integration in higher education? Who will or will not benefit from this?
• How do certain behaviors and group norms help or hinder the effective design and implementation of information systems?
• How can decentralized organizations negotiate and balance the competing demands and goals of the institution?
CASE NARRATIVE

Background
Midwestern University (MU) has an enrollment of approximately 15,000 students. Since it was founded, the mission of MU has been to provide world-class leadership in teaching and research. Within MU there are 15 academic departments and several administrative units. University administration had historically taken a very centralized approach to program enrollment, recruitment, financial aid, and general administration of student-related matters. However, more recently, top-level administration has encouraged individual departments to take more local control of their planning, ranging from student administration to budget setting. The push for local or departmental control has not been accompanied by the requisite development of reliable information systems necessary for both short- and long-term planning. This decentralized approach has placed departments at a distinct disadvantage due to increasing levels of accountability at the department level. Historically, information such as student enrollments and financial aid allocation flowed downward from central administration offices to the departmental level. The upward flow of information consisted of a set of checks and balances associated with departmental graduation requirements. In addition, data that were specific to the department level did not flow upward (e.g., faculty advising lists and student progress reports). Administrative divisions were centrally managed with multiple databases tracking data in functional units. For example, enrollment data were maintained and controlled by admissions, but the graduate studies office controlled doctoral student data. Many of these systems were run with old and outdated software, and the university struggled with the lack of a coordinated information system that managed all data collected throughout the university. This resulted in issues of data integrity, redundancy, and accuracy, with a low level of trust concerning the interpretation of data.
Enrollment data were maintained at the university level. These data were available to assist the department in knowing how many students were enrolled during a particular semester. However, it could take three to four weeks to obtain data from the central student information system, and field definitions were seldom documented. Additionally, because students were not centrally tracked through the various stages of doctoral completion, it was difficult if not impossible to ascertain the types of classes, services, and faculty commitment that students required with any degree of certainty. Departments relied on anecdotal information to conduct planning, and this became a standard and acceptable practice by default. Additionally, many faculty suspected that there were dozens of students who had slipped through the cracks somewhere along the line and might have been perilously close to dropping out.

There was also a high level of dissatisfaction among MU students with regard to information management. Students were frustrated with the number of repetitive steps and processes involved in their educational experience. For example, students needed to register for classes at the registrar’s office. However, depending on the class they wanted to register for, they may have needed to obtain departmental signatures prior to registration and then go to an entirely different office to make tuition payments. Because of the amount of time spent completing these tasks, students’ frustration only increased when the data across these areas could not be shared.

The Arts and Humanities (A&H) Department has approximately 200 doctoral students, 200 master’s students, and 300 undergraduates enrolled. Unlike the master’s and undergraduate students, who have structured two- and four-year programs, doctoral students went through several different stages of enrollment: first as graduate students enrolled in classes, then as doctoral candidates once they passed comprehensive exams, followed by a period of time during which they took independent dissertation-related methods courses and dissertation writing seminars. This multi-stage process was very complicated to track, and the department had been unable to determine with much accuracy at what stage in the matriculation process their 200-plus doctoral students were at any given time. This had many implications for departmental planning.

The opportunities and challenges presented by a more decentralized structure of decision making needed to be supported by reliable information. In conjunction with this challenge, the department began to conduct long-term planning for doctoral course offerings and faculty dissertation loads. This affected planning for core courses, research seminars, and dissertation writing workshops. Additionally, there were implications for faculty workload, since work with doctoral students could be a very time-consuming process at various stages of their degree. In fact, the proposal and final writing stages for doctoral students working on their dissertations often required a large investment of faculty time, mainly consisting of reading draft chapters and supplying timely feedback.
The Politics of Information Sharing
With the university’s push to a decentralized model of operation, departmental accountability and ownership of doctoral student data were becoming a priority. The need for the department to track and assess doctoral student status was crucial to both the doctoral students’ and the department’s success. Members of the department decided
that they needed to do something about the situation. They agreed to tackle their first goal: how to improve access to student information. In an attempt to address this issue effectively, a needs assessment was conducted. This consisted of determining what type of information was required about doctoral students in order to do more short- and long-term planning. During the planning process, the department faculty realized they did not even know how many doctoral students had been continuously enrolled over the past two semesters, let alone how many students were projected to graduate that year. There were larger issues of completion and attrition that faculty wondered about but seemed afraid to find out. Simple questions could not be answered, such as: How long do doctoral students take to complete the program? How many students have completed their coursework but not yet taken their comprehensive exams? How many students need to take a dissertation writing seminar next semester? How much financial aid support do students need to graduate?

Not only were there student-related questions without answers, but there were also issues of faculty workload. There were 25 full-time faculty members in the A&H Department. Seven of them were untenured but on the tenure track. It had been brought to the dean’s attention in promotion and tenure reviews that the junior faculty might have a disproportionate amount of the doctoral student load. However, when asked, the department chair was able to answer the question only with general estimates and hearsay. There were no reliable data regarding faculty workload issues. This lack of information regarding doctoral students and faculty workload only strengthened the department chair’s request that the information management of the department be improved.

The departmental culture was one in which information was heavily protected. Traditionally, the sharing of information had been the source of political disputes. Faculty neither felt that they gained anything by sharing information about doctoral workload, nor did they see the need to. Senior faculty members typically had a lighter doctoral student workload than junior members and wanted to avoid workload reallocation, while junior faculty who had a heavy workload struggled to obtain and share doctoral student information with other faculty. These issues only added to the closed nature of information sharing in the department, since information sharing behavior was neither recognized nor rewarded.

Whether we like it or not, information politics involves competing interests, dissension, petty squabbles over scarce resources. (Davenport, 1997, p. 78)
A First Step
Two years earlier, the department chair had instructed his administrative assistant to begin to collect and maintain departmental doctoral student data using a Lotus spreadsheet. These data were kept independently of the university-wide information systems. Numerous challenges associated with creating, sharing, and updating the spreadsheet files were faced. The historical operation of the department was heavily reliant on another office’s data, and faculty’s self-management of their doctoral students led to information that was not readily available at the departmental level. Furthermore, it was very difficult for the administrative assistant to consolidate the information from the disparate systems and faculty members. Specifically, the data that were to be
compiled included information such as: the number of credits for students currently enrolled, their year in the program, their comprehensive exam completion status, their faculty adviser, and the amount of time students had left to finish their coursework. As indicated, this information was not centrally located, and each system varied in type and form. Within the department, some data were in hard copy only, filed in a file cabinet or in handwritten notebooks that faculty used for personal tracking of their students. Some of the information was not even documented or available in an accessible system. With so many varying types of systems, and with the data scattered throughout the department, the effort to consolidate the information into a spreadsheet was difficult. In order to create a workable tool, the scope of the data collection effort was limited to departmental doctoral student information only.

Once the information was collected and consolidated into the spreadsheet, the resulting reports were summarily disregarded by faculty. When looking closely at why the spreadsheet failed, several items were identified. First, there was the limitation that spreadsheets impose on data: data must be depicted in columns and rows, and the ability to crosscut data is limited. For example, a header row contained student year, faculty adviser, and the number of years that student had been enrolled. The spreadsheet had 50 columns across and more than 200 rows down. Because the spreadsheet could not be queried, the only way to find or organize the information was by sorting the entire spreadsheet. This became cumbersome because, if a multiple-column sort was conducted, Lotus would sort one column at a time, independent of the other columns, with the end result being a sorted list of all students, not just the category desired. The administrative assistant tried to counteract this by taking a portion of the complete spreadsheet and cutting and pasting it into another file. This resulted in multiple spreadsheets with information that needed to be updated in eight or nine different files. Even if the person responsible for doing this kept track of the updates, it would be extremely inefficient, redundant, and prone to error. The inability to develop special views of the data and custom reports was a limiting factor of the spreadsheet. This querying limitation only increased the lack of support for, and use of, reliable information. A second, and more obvious, challenge was the administrative assistant’s lack of sophistication and training around the software itself.
The Web-Based Relational Database Project
Despite the initial failure, the chair of the department asked two technologically minded faculty members, both untenured, to write a proposal for building a relational Web-based database that would consolidate and centralize data from several different areas of the university, including other administrative offices outside the immediate department. They submitted a proposal to build a Web-based, password-protected database that would be accessible to all faculty. The proposed system would be easy to use; they estimated that it would take approximately two hours to train a computer-knowledgeable individual to use the system. The data would reside in one file, and reports could be created automatically. They would provide two hours of training for the administrative assistant, a two-page list of instructions on how to import data and produce reports, and a one-page list of instructions for faculty members on how to access and use the database via the Web. They estimated that it would take them eight months to
complete the project. The department chair gave them $7,000 the next week to begin their work.

The design team was led by the two faculty members. An outside consultant who specialized in database design was hired to join the team. Because the Web-based technology was somewhat new to the department, a consultant specializing in Web development was also brought on to help create the proposal and pilot system. In creating a proposal that would define the scope of the project, the resources required, and the required information for the database, the two faculty members divided the project into three main phases: planning, design, and implementation. This provided them with a framework that gave measurable and clear checkpoints that were dependent on departmental faculty approval.

The planning stage first involved a requirement study that consisted of identifying a comprehensive list of the department’s information needs. This also required looking at external data requirements and the systems that data would come from. The additional data that would be gathered from across the university included data from the Student Information System (SIS) managed by the Registrar’s Office, the Doctoral Student Database (DSD) managed by the Graduate Studies Office, and the Student Payment System (SPS) managed by the Student Accounting Office. Student data from each of these systems were to be consolidated into the A&H relational Web-based database, along with additional data that were collected at the departmental level only (e.g., faculty advisers and dissertation chairs). The two faculty leaders conducted interviews with each of the faculty and prioritized requests from the departmental members and the chair. The need for new data that had not been collected previously by any office was also identified. The compilation of all the requested data came from approximately 20 different subsystems, both manual and electronic. As described earlier, these systems ranged from word processing files to handwritten notebooks.

The next phase required designing the relationships between the data elements and tables. The database consultant helped to develop a database design that was able to depict the relationships between each of the different data tables with relative ease. This provided an initial understanding of system complexity by focusing on the relationships between data, data types, and sources. This exercise was essential in proactively understanding how the new system would be queried and what information would be collected in the new system. Diagram 1 shows a relational schematic of how a few tables in the database would be linked by student social security number, a primary and unique key across each table. The diagram illustrates the relationship between the new tables to be created and the source of the data. The design team determined that approximately 32 tables with 500 data elements would be required in the new Web-based system. This included information such as: demographics, address, first enrolled, last attended, dissertation chair, whether students attended school full time or part time, and when their doctoral candidacy expired. The issue of data maintenance was raised as a main concern in the design phase, and the team recommended a system manager to keep the data integrity at an optimal level. The team selected software tools based on the data complexity and faculty interviews.

Diagram 1. Sample relational table

Student Table (source: SIS): Social Security Number, First Name, Last Name, Age, Sex, Home Street Address, Home City, Home State, Full Time/Part Time, Home Country
Department Faculty Table (source: Lotus spreadsheet): Social Security Number, Last Name of Faculty Member, First Name of Faculty Member, Dissertation Review Date
Doctoral Status Table (source: DSD): Social Security Number, Enrollment Date, Comprehensive Exam Passed, Candidacy Expiration Date
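A minimal sketch of the relational structure implied by Diagram 1 is shown below, using SQLite purely as a stand-in for whatever database engine the team actually chose. The table and column names follow the diagram; the sample rows and the faculty-workload query are hypothetical illustrations of the kind of cross-table question the old spreadsheet could not answer.

```python
# Sketch of the Diagram 1 design: tables joined on a shared student key.
# SQLite and the sample data are stand-ins; the case does not name the tools used.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (
    ssn TEXT PRIMARY KEY,                              -- source: SIS
    first_name TEXT, last_name TEXT, full_or_part_time TEXT
);
CREATE TABLE doctoral_status (
    ssn TEXT PRIMARY KEY REFERENCES student(ssn),      -- source: DSD
    enrollment_date TEXT, comprehensive_exam_passed INTEGER,
    candidacy_expiration_date TEXT
);
CREATE TABLE department_faculty (
    ssn TEXT REFERENCES student(ssn),                  -- source: departmental spreadsheet
    faculty_last_name TEXT, dissertation_review_date TEXT
);
""")

# Hypothetical rows, for illustration only.
conn.executemany("INSERT INTO student VALUES (?, ?, ?, ?)",
                 [("111", "A.", "Lee", "FT"), ("222", "B.", "Ruiz", "PT")])
conn.executemany("INSERT INTO department_faculty VALUES (?, ?, ?)",
                 [("111", "Smith", "2004-05-01"), ("222", "Smith", "2004-06-12")])

# The kind of question the spreadsheet could not answer: advisees per faculty member.
for row in conn.execute("""
        SELECT faculty_last_name, COUNT(*) AS advisees
        FROM department_faculty GROUP BY faculty_last_name ORDER BY advisees DESC"""):
    print(row)   # ('Smith', 2)
```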
Having a clear understanding of the faculty requirements concerning doctoral student information, explicitly outlining the data relationships, and assessing the current mix of
systems and interfaces, the team was able to confidently select effective software tools for implementation. The main goal surrounding the selection process was to identify a user-friendly and intuitive front end that would provide faculty with ease and functionality in sharing and accessing data.

The last phase, implementation, consisted of running a pilot with faculty, training the faculty, and receiving sign-off approval from the chair to operationalize the entire system. In piloting the system, the two faculty members demonstrated the capability of the new system at a faculty meeting and also provided one-on-one demos. Based on these demos, faculty members requested even more features and functionality from the system. Not only did the team implement the requested functionality, but they also incorporated an automated feedback form that would allow new feature requests to be delivered to the core development team on an ongoing basis. For example, if a faculty member identified a new feature she or he wanted, the faculty member could complete an online form that would forward the request to the right development team member. In addition, a response could then be provided back to the faculty member indicating when and if the proposed feature would be integrated.

Up to this point, the core team thought the support for the system was mostly positive and energetic. As faculty members started to use the pilot, problems began to surface. In order for the information system to become embedded as an integral part of the department’s planning and decision-making processes, faculty needed to verify data and recommend reports for use. However, faculty started to resist requests for updated information, such as confirmation of their status on all of their doctoral committees. Because these data were not centrally maintained, the current information was anecdotal and was sometimes passed on incorrectly by word of mouth. When faculty were pressed to provide a list of their doctoral advisees, they either did not have the time or could not figure out how to look at the existing list online. Some faculty went so far as to have their secretaries print out dozens of pages of student information so that they could check it manually. When there were finally enough data in the pilot to begin to produce reports with calculations from the relational database, such as faculty workload, enthusiasm for the
project started to fade, and issues of information sharing, politics, and resistance to change became visible. Additionally, the administrative assistant quit during this time, leaving no trained replacement. At this point, support and participation levels were quite low. When faculty members complained that they still did not understand how to use the Web-based system, additional one-on-one training was offered. Some faculty thought the system was too complicated and reverted to their old paper systems of tracking, while others simply did not participate, saying that the system was cumbersome. Unlike in the planning and design phases, faculty members began to show non-supportive and unresponsive behavior toward the pilot system. In fact, the few faculty members who did use the system were still collecting and managing their individual information and only checking the system as a secondary source, even though this system was easily accessible from their homes or offices and globally available on the Web. Information that was once individually owned and managed became visible to the entire department. Historically, faculty were not used to working together collectively to solve department-wide problems. Furthermore, as the two untenured junior faculty members were the main drivers behind the proposal, senior faculty were most vocal in their resistance to the system, which meant that a full-scale implementation looked doubtful. As Green (1999) indicates, this lack of support is critical in technology and higher education integration:

[…] failing to recognize and promote faculty who invest in technology in their scholarly and instructional activities sends a chilling message about the real departmental and institutional commitment to the integration of technology in instruction and scholarship. (p. 8)
ANALYSIS
What would a successful implementation of the Web-based system have looked like? Would it have changed the department’s attitudes, changed the behaviors around information sharing, or improved the overall experience for doctoral students? These questions have gone unanswered because of the complex interrelations of technology, people, and information-related change. Although the department chair and faculty members initially decided to move forward in improving the availability of doctoral student information, the two very different attempts both resulted in failure. The spreadsheet and the Web-based database were functionally different, yet both failed around similar issues: group buy-in, information ownership, data collection, and an inability to change the working norms and culture.

In both cases, garnering initial buy-in did not seem to be difficult. The faculty and the department chair wanted to increase access to student information. Everyone agreed that the use of technology could provide the department an advantage in planning and meeting goals and objectives. However, when faculty members were asked for information or asked to change their working patterns, few cooperated. There was a discrepancy between what agreement or buy-in meant, specifically between what was said and what was practiced. The faculty agreed, in theory, that the use of technology was needed to increase access to student information. However, it could be argued that the buy-in was
The Politics of Information Management 53
not present when the ideas required change in work and behavior patterns. Furthermore, the responsibility for design and collection was handed-off to individuals in the group with junior status. Even though they had more technical expertise, their junior status may have dissuaded senior faculty from embracing the project, and in fact, the two junior faculty members were neither recognized nor rewarded for their efforts. Morgan (1986) notes: When a high-status group interacts with a low-status group, or when groups with very different occupational attitudes are placed in a relation of dependence, organizations can become plagued by a kind of subculture conflict. (p. 137) In the design process, the faculty were challenged by setting standards and specifying criteria in order to define data fields. This process worried some faculty. For example, the ability to measure doctoral student workloads may have raised a discussion around redistributing work. The image that some faculty portrayed of being overloaded could have been proved or disproved. Obviously, some faculty might have benefited and others might have faced unease and additional work. The data collection and information ownership activities were difficult because of the underlying norms and behavior of the department. Different norms, beliefs, and attitudes to time, efficiency, or service can combine to create all kinds of contradictions and dysfunctions. These can be extremely difficult to tackle in a rational manner because they are intertwined with all kinds of deep-seated personal issues that in effect define the human beings involved. (Morgan, 1986, p. 137) The complete list of recommendations and requirements for implementing the system across the entire department was never fully realized. For example, a “system owner” who had skills in information management was recommended. However, the chair and fellow faculty members did not think that such a person was necessary or needed. An appreciation of the technological skills required to maintain the system was not present. In an attempt to leverage other technologically driven functions of the university, the faculty team tried to involve the director of information systems (IS) at the university to help drive the implementation as a pilot project to gain support. Despite the system being well received by the director, the IS Department was unable to support the effort because of costs and other in-house responsibilities. Furthermore, the chair and faculty members wanted to hand off the maintenance of the system to an administrative assistant even though the skill sets required and recommended by the core team did not match. Secondly, despite the $7,000 grant allocated to this initial effort during the proposal stage, the estimated cost to roll out the system across the department was closer to $150,000. The faculty team came up with these estimates based on the work done in the design and implementation phase. Specifically, the technical development work, the expected database size and volume, and the maintenance of the new Web-based technologies drove up the team’s estimate. As the continuation of the project became extremely expensive, the department chair did not approve the roll-out plan. One of the most challenging issues was the working norms and culture of the department concerning issues of data ownership and sharing. This department was Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. 
resistant to technology in practice and was not open to sharing information, let alone integrating their respective information processes. The culture at the department level was not one of openness to sharing information. This was especially apparent when faculty did not support the need to share workload profiles or discuss doctoral information with other faculty members. This attests to the importance of not moving forward until there is evidence of real commitment from stakeholders. Lastly, and perhaps most importantly, the data and information availability that the system provided were not culturally aligned with the individualistic, competitive, and non-sharing environment of the department. These are known to be major factors in the failure of information system design and implementation (Lyytinen & Hirschheim, 1987), and they ultimately contributed to the system not being rolled out across the entire department. As Senge (1990) illustrates: New insights fail to get put into practice because they conflict deeply with internal images of how the world works, images that limit us to familiar ways of thinking and acting. (p. 174)
CONCLUSION
Prior to the onset of information-related initiatives, whether it is a new system, a push for new behaviors in managing information, or training faculty members with new tools, higher education institutions must examine information-sharing behaviors. In order to begin this examination, it is critical to understand the people who will drive, implement, and sustain the change. Similar to the resistance towards the implementation of the Web-based system in this case, if change is to occur, new systems and structures that can drive information-related change (e.g., rewarding people for sharing information) must be examined. As higher education institutions strive to improve access to information and integrate new technologies, it is clear that the information environment (including the people and their behaviors) is a critical deciding factor while striving for and designing new information management processes for decision making. In summary, improving the use of information technology in higher education cannot be the task of a single department, professor, or person. There are critical success factors that must be addressed concerning ownership, politics, and information sharing, despite the traditional challenges of information technology costs and maintenance. A national campus computing survey indicated that 62% of all higher education institutions have a strategic plan for information technology, yet there are still many difficulties associated with the norms and behaviors of an organization’s culture during implementation (Green, 1999). Therefore, when embarking on the infusion of information technology into a higher education setting, the possible non-technical challenges must be considered. Notes Morgan (1986): When we choose a technical system (whether in the form of an organizational structure, job design, particular technology) it always has human consequences, and vice versa. (p. 38)
This is important to realize so that a department or organization is not faced with trying to design a technical solution for a non-technical problem. In this case, the problems encountered in the required data collection were not technical in nature, but rather a result of a pre-established set of norms among the faculty. Distinguishing these issues, where visible, is important for the design and implementation of information systems in higher education. As Davenport (1997) says: Information and knowledge are quintessentially human creations and we will never be good at managing them unless we give people a primary role. (p. 3) This primary role is not merely a leadership position on a committee that approves a technology or makes new information policies in higher education. Instead, it is the central role in which people, their behaviors, their information-sharing attitudes, and the environment of an institution are examined, understood, and incorporated into the information-related change.
DISCUSSION QUESTIONS

1. What are the similarities and differences between the first attempt to implement the simple spreadsheet and the second relational Web-based system? What people, system, and information aspects drove the outcomes?
2. What issues of information sharing for faculty appeared to drive their behaviors and reactions to the system?
3. Is faculty access to student information necessary in order to carry out department-wide planning? How does this impact university-wide goals and objectives?
4. How can information technology leaders address non-technical issues that may interfere with the design and implementation of information systems?
REFERENCES
Davenport, T. H. (1997). Information ecology: Mastering the information and knowledge environment. New York: Oxford University Press.
Green, K. (1999). The campus computing project: The 1999 national survey of information technology in higher education. Encino, CA. Retrieved from http://www.campuscomputing.net/summaries/1999
Lyytinen, K., & Hirschheim, R. (1987). Information system failures: A survey and classification of the empirical literature. Oxford Surveys in Information Technology, 257-309.
Morgan, G. (1986). Images of organization. Newbury Park, CA: Sage Publications.
Senge, P. (1990). The fifth discipline. New York: Currency Doubleday.
This case was previously published in L. A. Petrides (Ed.), Cases on Information Technology in Higher Education: Implications for Policy and Practice, pp. 118-127, © 2000.
Chapter IV
A DSS Model that Aligns Business Strategy and Business Structure with Advanced Information Technology: A Case Study
Petros Theodorou Technological Educational Institution of Kavala, Greece
EXECUTIVE SUMMARY
Advanced information technology must be aligned with business strategy and structure if premium earnings and competitive advantage are to be created. Strategy is mainly driven by the uncertainty of the environment in which the business operates. Information technology is a key element of business structure for bypassing environmental uncertainty. In this study, the case of a firm located in Northern Greece is examined; the firm has to make decisions regarding the modernisation of the technology applied in production. An integrated system needs to be applied in order to manage enterprise resources, from the warehouse and logistics to the front office and client service. The ultimate purpose of this system is to increase flexibility and cut the time of response to environmental changes, without increasing cost and inventory. In order to achieve this target, strategy is analysed first, in relation to environmental changes. Various types of flexibility are determined according to the firm's uncertainty and variability. Finally, the correlation between flexibility and variability determines the type of
information technology that needs to be adopted in order to increase competitive advantage. The model proposed is based on alignment theory and the strategy execution perspective. A "strategies map" model is constructed to support the decision regarding the strategy and information technology needed to overcome the variability problems and increase competitive advantage.
BACKGROUND
Information technology (IT) is generally accepted as a strategic tool that can create a competitive and distinctive advantage. IT can be used to catch up with the prime mover and decrease the pre-emption potential (Feeny & Ives, 1997). But these benefits can be attained only if a strategic perspective on IT selection and implementation is used. Strategic alignment theory provides the appropriate framework for the strategic utilisation of information technology applications (Figure 1; Theodorou, 2003). Henderson and Venkatraman (1996) argue that any given planning process must consider the interaction between functional integration and the dimension of strategic fit. They identify four alignment perspectives: strategy execution, technology potential, competitive potential and service level. The fit between the internal and external domains (strategic fit) is critical to economic performance. Strategic fit creates a competitive advantage when it is combined with functional integration at the strategic and operational levels. According to Luftman (1996), business strategy (in the strategy execution and technology potential perspectives) is the anchor domain that drives the planning process. In the strategy execution perspective, business structure is the pivot and IT structure is the impact domain. In the technology potential perspective, IT strategy is the pivot and IT structure is the area that will be affected by the change. In both the competitive potential and the service level perspectives, IT strategy is the anchor domain. In the case of competitive potential, business strategy is the pivot and business structure is the impact. In the service level perspective, IT structure is the pivot and business structure is the impact domain (Figure 2). At the beginning of this work, a literature review of strategic alignment is presented, followed by a theoretical discussion of the elements of alignment: business strategy, strategic priorities and business structure. Furthermore, the case of a company that wants to invest in an IT project in order to close the competition gap is analysed. The planning process will be approached in a strategy execution perspective based
Figure 1. Environmental uncertainty drives the alignment among strategy, structure and IT

Figure 2. The strategic alignment process among business strategy, IT/IS strategy, business structure and IT/IS structure
on alignment theory among strategy, structure and IT. Business strategy will be analysed first in order to determine the strategic priorities. The strategic priorities and environmental uncertainty will determine the structural characteristics that information technology should enhance in order to create competitive advantage. The strategic alignment model will be applied to the case of the company in order to aid the information technology selection and planning process.
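To make the four alignment perspectives easier to keep apart, the short sketch below (not part of the original case, added purely as an illustration) encodes the anchor, pivot and impact domains exactly as they are described above; all names in the code are hypothetical.

```python
# Illustrative only: the four alignment perspectives (Henderson & Venkatraman, 1996;
# Luftman, 1996) encoded as anchor -> pivot -> impact domains, as summarised in the text.
ALIGNMENT_PERSPECTIVES = {
    "strategy execution":    {"anchor": "business strategy", "pivot": "business structure", "impact": "IT structure"},
    "technology potential":  {"anchor": "business strategy", "pivot": "IT strategy",        "impact": "IT structure"},
    "competitive potential": {"anchor": "IT strategy",       "pivot": "business strategy",  "impact": "business structure"},
    "service level":         {"anchor": "IT strategy",       "pivot": "IT structure",       "impact": "business structure"},
}

def describe(perspective: str) -> str:
    """One-line summary of which domain drives which in a given perspective."""
    p = ALIGNMENT_PERSPECTIVES[perspective]
    return f"{perspective}: {p['anchor']} (anchor) -> {p['pivot']} (pivot) -> {p['impact']} (impact)"

for name in ALIGNMENT_PERSPECTIVES:
    print(describe(name))
```

In the case that follows, the strategy execution entry is the relevant one.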
LITERATURE REVIEW
As previously mentioned, in strategic alignment theory the main constituents of information technology alignment are business strategy and structure. Thus, we will first review the theory behind the main factors of the model and then present the specific case study.
Manufacturing Strategy
According to organisation theory and business policy, manufacturing strategy is distinguished at the level of content and the level of the application procedure (Hayes & Wheelwright, 1984; Skinner, 1985; Storper & Scott, 1988). The content of manufacturing strategy consists of four strategic dimensions (De Meyer et al., 1989; Panzar & Willig, 1981):
1. Cost
2. Quality
3. Flexibility
4. Dependability (dependable deliveries)
To the above set, Giffi, Roth, and Seal (1990) add two more dimensions, innovation and customer service. The order of the above strategic targets in manufacturing is the one that the leading Japanese manufacturing companies have adopted (De Meyer et al., 1989; Nakane & Hall, 1991). Hill (1993, p. 45) adds to the previous list the following
Table 1. Manufacturing strategy priorities according to different authors

Priority         Skinner, Buffa,   Giffi, Roth,   Chase   Swink & Way;         Hall &
                 Wheelwright       Seal                   Ferdows & De Meyer   Nakane
Cost             1                 1              1       4                    4
Quality          2                 2              2       1                    2
Flexibility      3                 3              3       3                    5
Dependability    4                 4              4       2                    3
Innovation       -                 5              -       -                    6
Service          -                 6              5       -                    -
priorities: colour, product's range, design, brand name and customer support. Ferdows and De Meyer (1990) adopt a different prioritisation of the manufacturing dimensions, placing quality before dependability, flexibility as a third priority and cost at the end. The same order is adopted by Swink and Way (1995). According to Hall (1987), quality becomes the first goal, second comes dependability, followed by cost and flexibility. A slight difference is observed between Ferdows, De Meyer, and Hall in the order of flexibility and cost. Hall and Nakane (1990) add to the previous list another two dimensions: culture (which precedes quality) and innovation as the sixth priority. Noble (1997) changed Hall's progression, distinguishing delivery from dependability (with delivery coming after it), and set innovation at the end of the list. Regarding Skinner's prioritising, Chase et al. (1992) consider service to be the fifth dimension, unrelated to dependability. Some studies have focused only on quality (Adam, 1994; Flynn et al., 1994), while others have focused on productivity as a criterion for prioritising (Hayes & Clark, 1985; Schmenner, 1988, 1990, 1991; Noble, 1997). We can partly summarise the previous discussion in Table 1. To reach business and corporate goals, supportive cost, time, quality and flexibility targets must be developed (Skinner, 1969). These manufacturing goals are achieved and sustained by a "pattern of decisions" (Hayes & Wheelwright, 1984) and have to be aligned with business and IT strategic targets. According to Skinner (1985), these priorities integrate manufacturing and business strategy. Skinner (1996) contends that one of the major problems in implementing strategic objectives is a proven inability of management to step back and assess the coherence of their strategies. Skinner (1996) also argues that insufficient research into the process of strategy making has held back the adoption of manufacturing strategy priorities and ideas.
Business Structure
IT has proved capable of creating competitive advantage, as it alters the rules of competition among the industry participants. Unfortunately, research indicates that competitive advantage cannot be sustained indefinitely due to the external environment's volatility. Strategy should adapt according to circumstances, and structure should support the strategic direction. Strategic alignment and strategic fit models estab-
lished in strategic management literature try to explain that change (Earl, 1990; Morton, 1991). Among the variables in strategic alignment models, structure obtains a central role. Structure should be organised in a flexible way in order to interface with environmental uncertainty and take advantage of IT potential. Structure is approached from the internal and the external perspective. From the internal perspective, the estimation of the structural variables can be used in order to specify the organisational form. Organisation theory and design define structure using structural dimensions and structural variables. Pugh et al.’s (1968) theory was based on the following structural dimensions: degree of standardisation, degree of routines, formalisation of procedures, specialisation of roles, stipulation of behaviour, concentration of authority and line control of workflow. Pugh used these dimensions for the classification of a sample of “52” English organisations and derived seven structural forms. Generally, it can be observed that the exact form and number of the structural dimensions is not universally accepted (Sanchez, 1993; Rich, 1992). Thus, some researchers adopted a more generic definition of structural dimensions and progress in a more detailed level by the definition of the structural variables. Blackburn and Cummings’ (1982) theories are based on this generic concept and define the following three core structural dimensions: complexity, formalisation and centralisation (Robbins, 1990). Earlier, Hage (1965) made that distinction and referred that those structural characteristics can vary in their presence from high to low, thus proposing a qualitative construction for the measurement of core dimensions. Later, in 1991, Miller underlined the new capabilities which the role of information technology brings over the previously mentioned dimensions. Furthermore, Burton and Obel (1995) add to the previous list the following three dimensions: configuration, coordination and control. A different viewing angle is proposed from the organisational information processing theory; communication, co-ordination and co-operation are the essential structural dimensions (Stock & Tatikonda, 2000). Edmondson (1990) and Moingeon et al. (1998) referred to organisational learning as another important determinant of organisation structure. Organisational learning determines how a firm develops its capabilities and competencies over time (Eden & Spender, 1998). Furthermore, Miller (1987), as well as Raymond et al. (1995), underlined the importance of structural sophistication, whose role emerged due to information technology. The previous discussion partly can be summarised in Table 2. In that case, a set of structural variables was applied that determines the general parameters of structural design. The general set of structural variables adopted was based on the work of Blackburn and Cummings (1982), Reimann (1974), Mintzberg and Quinn (1996), and Parthasarthy and Sethi (1992). The effect of those variables on performance determines the structural effectiveness. Those structural parameters are the following: formalisation, complexity, centralisation, co-ordination-co-operation, control and reward system. 
The above mentioned parameters are determined from the following structural variables: (x1) level of programmed work co-ordination, (x2) level of decentralisation in decision-making, (x3) level of defined procedures for control, (x4) level of interdepartmental co-operation, (x5) use of liaison personnel for co-ordination, (x6) low levels of hierarchy, (x7) interdepartmental communication without rules, (x8) reward system based on skills, (x9) infrequent management decision control, (x10) level of work specialisation, (x11) level of task non-standardisation (versus written tasks), (x12) range of training variety (versus specialisation), (x13) level of responsibilities non-standardisation, (x14) flexibility in production schedule, (x15) concurrency in design, (x16)
Table 2. Structural dimensions

Pugh et al.                  Standardisation, routines, formalisation, specialisation, stipulation, authority, control
Blackburn & Cummings         Complexity, formalisation, centralisation
Burton & Obel                Complexity, formalisation, centralisation, configuration, co-ordination, control
Stock & Tatikonda            Communication, co-ordination, co-operation
Edmondson, Moingeon et al.   Communication, co-ordination, co-operation, organisational learning
Miller, Raymond et al.       Communication, co-ordination, co-operation, structural sophistication
production structure, (x17) production in quality (variety in design), (x18) frequency of change in production level, (x19) number of suppliers, (x20) frequency of subcontracting, and (x21) departmentalisation. Production flexibility is nowadays more imperative than ever for every business, because flexibility enables a fast response to continuous variations in demand.
THE CASE STUDY
The company insisted upon anonymity and decided to withhold financial or other data that would indirectly identify it. The firm belongs to the food industry and is also a supply chain that employs approximately 200 permanent blue- and white-collar staff. The company has four different parts, each of which is a different department: production, the central warehouse, headquarters and two retail outlets (Figure 3). Production is in the same location as the central warehouse and the headquarters. The place where the production department operates was constructed later as an extension of the central building where the headquarters and warehouse are located. The main function of production is the labelling and packaging of selected products, partly manufactured by three different subcontractors, each in a different location. Labels carry the brand name of the firm and the products are promoted at the firm's stores. The central warehouse is responsible for the dispatch of goods to the stores. Moreover, inventory of materials used in production is also kept at the central warehouse. The stores are supplied by the warehouse and are located in different strategic market places (S. Market). Each store has a small storage capacity and provisioning is made by trucks from location 1 (Figure 3). A pull system of orders exists between the stores and the central warehouse. The organisational structure of the firm is characterised by high formality, decision centralisation in the directors, high work specialisation and responsibilities strictly determined by job description. Each store has a separate director, a member of the board, who is completely responsible for the management of the store: ordering, shelf management, hiring, and so forth. The warehouse's director is also a member of the board, and is completely responsible for the appropriate operation of the warehouse and the dispatch of goods to the retail outlets.
Figure 3. The layout of the company: suppliers and subcontractors at their own locations; headquarters, production and the central warehouse at location 1; the retail outlets S. Market 1 and S. Market 2 at locations 2 and 3
Figure 4. The board of directors: the CEO, the CFO, the CIO, and the directors of S. Market 1, S. Market 2, the warehouse and production
The previously mentioned directors hold part of the common shares of the company; they take part in decision making on the board of directors and are jointly responsible for the company's investment programme. The remaining part of the common equity belongs to the CFO and the CEO, who are positioned at the headquarters and are also members of the board of directors (Figure 4). Information technology in this firm is in its infancy: only a LAN with a Novell operating system and six workstations exists, located at the headquarters under the management of the CIO. The CIO works in close co-operation with the CFO. The system administrator is responsible for the proper functioning of the system and co-operates with the
finance division for the payroll, the accountancy system, the balance sheet and income statements, invoice entry, budgeting, materials and product monitoring, placing orders, inventory control, and so forth. A central database for the products and materials is installed on an IBM AS/400 system and is updated from six workstations, mainly used for data entry. Moreover, four stand-alone PCs exist for MIS in Access and Excel. Each of the shops also has a PC for local handling of orders and merchandise management. The pricing policy is proposed by the CFO and is discussed within the board of directors. The warehouse does not have any computerised system, and the control and movement of goods is handled with cards, something like a kanban system.
The Problems
The first contact with the company was made in late 1999. The CIO suggested an investment programme in information technology to the board of directors. The project basically proposed a barcode system to be installed at the front office (at the shops) in order to decrease the entry time of the cashier and thus the waiting time in the customers' queue. That system had been installed at most of their competitors with very good results and, for this reason, it was believed to be a necessity in order to close the competition gap and increase competitive advantage. The company wanted a consulting service regarding the investment in a front office barcode system. The barcode system does not create switching costs for the client and cannot sustain a competitive advantage unless the system is engaged with the unique structural characteristics of the firm. But after an examination and a discussion with management, it was found that the structure lacked the characteristics with which the system needed to be aligned. For example, the software that monitors the warehouse and the movement of merchandise and materials was not updated in time, so one indication of material levels was obtained from the warehouse and a different one from the system. That happened because the warehouse was not linked to the system (AS/400), and it was not linked due to certain perceptions. Decision centralisation and the notification and control mechanism were an obstacle, since direct control prohibited the free flow of information. Moreover, unfamiliarity with advancements in networking and PCs led employees to work with cards. Decision making was centralised in the director, and co-operation with the other departments was carried out by cards under high formality. This time lag in updating the system resulted in inaccuracy: by the time information was entered it was too late, as major changes had already happened in the warehouse in the meantime. A just-in-time update of the system was a necessity. Inconsistency existed between the inventory held in the warehouse and the inventory reported by the system. Thus, extra personnel were used for continuous stocktaking.
The Model Proposed
The CIO's approach resembled Luftman's technology potential perspective, but it underestimated the role of business strategy and structure. What was proposed instead was a strategy execution perspective based on alignment theory. The model that was formed first prioritised the strategic targets according to their importance for competition over the next three to five years, and then determined the environmental uncertainty and the structural variables that have to fit the proposed information technology. The general structure of the proposed model is given in Figure 5.
Figure 5. The proposed model under the strategy execution perspective: business strategy (strategic priorities such as cost, quality, dependability, flexibility and service) drives business structure (co-ordination, formalisation, centralisation, complexity, control, reward system), which in turn determines the IT/IS to be adopted
Thus, business strategy was the anchor domain, the organisation's structure the pivot and, finally, the IT infrastructure the impact domain.
The Applications and The Solution
Under such circumstances it seemed that a front office solution would not work properly if appropriate support from the back office was not given. It was proposed that computerisation should start at the back office, and mainly at the warehouse, which is the heart of the firm; otherwise the front office would fail to achieve its ultimate purpose. In our analysis, we started from the ends in order to determine the means. Thus, the business strategy was analysed first, based on the strategic priorities previously mentioned. We analysed the competition in close co-operation with the management, using a qualitative Likert scale to determine the importance of each strategic priority. A rating of 1 to 6 was used to measure the importance of each priority, with the scale increasing with the importance of the priority. At this stage it was important to assess the importance of each priority for the next three to five years. Furthermore, the management's perception was compared with the achievement of the other competing firms in the sector. In the end, the following strategic priorities map was formed (Figure 6). Figure 6 shows that flexibility will be of greatest importance for competition in the next three to five years, with dependability and service next. It should be mentioned that this prioritisation does not imply that cost, quality and innovation were not important. On the contrary, it means that the firm, in relation to the competition, has already achieved an important competitive status on those targets, so in the future they will not be of top priority. It can be observed that this prioritising is quite similar to those previously proposed by De Meyer et al. and Giffi, Roth, and Seal in that cost is not in
Figure 6. The strategic priorities map: cost, quality, flexibility, dependability, innovation and service rated for their importance to competition over the next three to five years
the highest priority list. The difference is that flexibility is of higher importance than dependability and service (Theriou, 2002). Generally, flexibility, internal or external, refers to the ability of a production system to react and adjust economically and quickly to the variations (environmental uncertainty) inside and outside the company (Frazelle, 1986). The environmental uncertainty factors that were examined are the following:
• Demand variability: Caused by qualitative and quantitative variations. The qualitative variations mainly refer to changes in consumer preferences regarding product characteristics, such as colour, smell, design, and so forth. Efforts to satisfy the qualitative variations result in an increased range of products produced or supplied. The quantitative demand variation for a product usually depends on seasonality.
• Supply variability: Derived from three basic reasons: (a) an increase in the variety of materials and products; when the range of end products increases (more existing models, modular products to cover various qualitative preferences of consumers), the range and variety of raw and intermediate materials also increases, the overall number of orders grows but small quantities are ordered from many different suppliers, the number of suppliers increases drastically and offsets the decrease in the quantity ordered from each supplier, and the complexity of logistics increases; (b) variations in the quality of the ordered materials and the time of delivery; (c) the introduction of new products.
• Process variability: Caused by the introduction of modern technologies and the application of novel administrative techniques.
• Product variability: Increased because of the variety in goods produced and the introduction of new products within the range of materials currently supplied.
• Workforce variability: Caused by training, absences, strikes, and so forth.
• Equipment variability: Derived from planned and unplanned maintenance, repair, set-up, and so forth.
The effect of these disturbances and variations on performance can be minor in a firm which succeeds in the strategy of flexibility.
Table 3. Types of flexibility and the variabilities they address

External flexibility: demand variability, supply variability
Internal flexibility: process variability, product variability, workforce variability, equipment variability
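The following sketch, added here only as an illustration, shows how the two screening steps described above can be carried out: ranking the strategic priorities from the 1-to-6 Likert assessment and mapping the dominant variabilities onto the flexibility types of Table 3. All ratings and the threshold are hypothetical, not the firm's actual figures.

```python
# Hypothetical ratings on the 1-6 Likert scale described in the text
# (1 = least, 6 = most important for competition over the next three to five years).
priority_ratings = {"cost": 2, "quality": 3, "flexibility": 6, "dependability": 5,
                    "innovation": 1, "service": 4}

# Hypothetical variability assessment, plus the Table 3 mapping of each variability
# to the type of flexibility that addresses it.
variability_ratings = {"demand": 6, "supply": 5, "process": 2, "product": 3,
                       "workforce": 2, "equipment": 2}
FLEXIBILITY_TYPE = {"demand": "external", "supply": "external", "process": "internal",
                    "product": "internal", "workforce": "internal", "equipment": "internal"}

def strategic_priorities_map(ratings):
    """Order the priorities from most to least important, as in Figure 6."""
    return sorted(ratings, key=ratings.get, reverse=True)

def dominant_flexibility(ratings, threshold=5):
    """Flexibility types needed for the variabilities rated at or above the threshold."""
    return {FLEXIBILITY_TYPE[v] for v, score in ratings.items() if score >= threshold}

print(strategic_priorities_map(priority_ratings))   # ['flexibility', 'dependability', 'service', ...]
print(dominant_flexibility(variability_ratings))    # {'external'}
```

With ratings of this shape, flexibility ranks first and the demand and supply variabilities point to external flexibility, which is the conclusion reached in the case.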
In order to decide which type of flexibility the enterprise needs, the type of variability the firm faces had to be defined first. In our case, the environmental analysis showed that uncertainty derived mainly from demand and supply variability (Table 3). Based on the discussion with upper management and the environmental analysis, it was found that the firm, due to the volatile environment, needed to invest in the strategic target of external flexibility (Table 3) in order to overcome demand and supply variability. Labelling and packaging are very important functions because they determine the look of the product on the shelves. The look of the product is one of the most important marketing factors and has a high impact on customers' choices. Other important factors are the product's position on the shelf, of course the price and, finally, quality, which will lock in the consumer's decision in relation to availability (dependability). All of the products manufactured by the company are competitive in terms of cost and pricing and have a good position on the stores' shelves. The problem is mainly with dependability. The products which contribute significantly to profit exhibit high variation in demand. Customers' choices regarding these products are not constant and the company suffers from high instability due to substitute competition. The instability is transferred to the ordering system and increases the uncertainty of supplies. The delivery time fluctuates, but within low limits; the main source of variability is the quantity ordered. This problem could be reduced by using a system for ordering and forecasting. According to Browne et al. (1984), manufacturing flexibility is distinguished into eight types:
• Machine flexibility: The ability to replace, change and assemble the parts of a machine in minimal time.
• Process flexibility: The ability to modify the necessary steps to complete a task.
• Product flexibility: The ability to produce new products from the existing spectrum of parts from which the end items are composed.
• Routing flexibility: The ability to vary the frequency of a machine serving without interrupting the sequence of the production line.
• Volume flexibility: The ability to operate efficiently at different quantities of production.
• Expansion flexibility: The ability to expand production easily and periodically, according to occasional needs.
• Process sequence flexibility: The ability to alter the sequence in which the different production techniques are performed, or to interchange the ordering of several operations for each part type.
• Production flexibility: The ability to alter quickly and economically the range and combination of parts of which a final product is composed and which can be produced in an FMS (the FMS cannot be considered flexible, as far as production is concerned, unless all the above-mentioned flexibilities are achieved).

Figure 7. Environmental variability (demand and supply) determines the business strategy of external flexibility (product, volume, expansion and process flexibility), which in turn shapes the business structure (formalisation, complexity, centralisation, co-ordination, and so forth)
In our case the focus should be on product, volume, expansion and process flexibility, as external flexibility is the key to bypass the environmental uncertainty (Figure 7).
IT Applications as Flexibility Tactics
Flexibility in co-ordination with IT offers the enterprise the ability to achieve competitive advantage (Kenney & Florida, 1993; Sayer, 1986). Competitive advantage can be increased if fit is achieved among IT, business structure and business strategy. Information technologies and techniques can be classified into three groups:
• Flexible engineering design automation (FEDA): FEDA refers mainly to automation concerning the product's design. This kind of automation includes technologies and techniques such as computer-aided design (CAD), computer-aided engineering (CAE), solid modelling (SM), and finite element analysis (FEA).
• Flexible manufacturing automation (FMA): The next step in the production procedure after designing the product is manufacturing. The basic target of this kind of IT application is machine programming. Techniques and technologies of this kind include: numerical control (NC), direct numerical control (DNC), computer numerical control (CNC), robots, flexible manufacturing systems (FMS), automated guided vehicles (AGVS), and automated storage/retrieval systems (AS/RS).
• Flexible administrative planning and control automation (FAPC): In this category we find computer applications in accounting, logistics, warehouse management, management of stocks, quality control, and so forth. Management information systems (MIS) and decision support systems (DSS) are also included. Other systems in this category are: electronic data interchange (EDI), electronic point of sales (EPOS), optimised production technology (OPT), group technology (GT), material requirements planning (MRP I), manufacturing resource planning (MRP II) (Manthou et al., 1996), just-in-time (JIT), computer-aided production planning (CAPP), shop floor control (SFC), factory data collection systems (FCS), data acquisition systems (DAS), computer-aided quality control (CAQC), concurrent engineering (CE), and so forth (Theodorou, 2003; Nakane & Hall, 1991).
Table 4. Information technology applications mapped to types of variability

External flexibility
  Demand variability:    CAD, CAE, FEA, EDI, EPOS, CE, SFC
  Supply variability:    MRP I/II, JIT, EDI, AS/RS, CAQC, CE
Internal flexibility
  Process variability:   FMS, NC/CNC, AGVS, OPT, GT, CAPP
  Product variability:   CAD, GT, EDI, CAE, CE, CAQC
  Workforce variability: ROBOTS, FMS, AGVS, AS/RS, NC/CNC/DNC
  Equipment variability: OPT, FMS, SFC, NC/CNC, CAPP
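As a small illustration of how a correlation table such as Table 4 can support the selection step, the sketch below restricts the mapping to the demand- and supply-related technologies that the case itself goes on to propose (CAD, EPOS, EDI, MRP I, JIT, CAQC); the dictionary and function names are hypothetical, and the sketch is not part of the original case.

```python
# Hypothetical subset of the variability -> candidate IT mapping, limited to the
# technologies proposed in the case for demand and supply variability.
CANDIDATE_IT = {
    "demand": ["CAD", "EPOS"],                  # label/packaging design variety, point-of-sale demand capture
    "supply": ["EDI", "MRP I", "JIT", "CAQC"],  # supplier links, ordering, delivery and quality checks
}

def shortlist(dominant_variabilities):
    """Collect, without duplicates, the candidate technologies for the given variabilities."""
    seen, result = set(), []
    for variability in dominant_variabilities:
        for tech in CANDIDATE_IT.get(variability, []):
            if tech not in seen:
                seen.add(tech)
                result.append(tech)
    return result

print(shortlist(["demand", "supply"]))  # ['CAD', 'EPOS', 'EDI', 'MRP I', 'JIT', 'CAQC']
```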
Based on this taxonomy, a correlation can be made among flexibility, information technology and variability, as shown in Table 4. In our case, the technologies proposed are CAD for labelling and packaging, EDI for the suppliers, EPOS for the front office (part of the barcode system), MRP I for the orders, and JIT and CAQC for checking the suppliers. Following Luftman's strategy execution perspective, we determined flexibility (Figure 6) to be the most important priority, according to the environmental analysis, among the set of strategic targets derived from the literature review. Flexibility was thus the anchor domain and business structure the pivot. The structural variables should therefore be redefined appropriately so that information technology can exploit the full potential of the structural characteristics and increase competitive advantage (Theodorou, 1996). Generally, formalisation, complexity, centralisation, co-ordination and control are the structural parameters that have to be redesigned (Figure 5). A discussion with management, based on a critical success factor analysis and the empirical evidence, determined that the following structural variables should be redefined (Theodorou & Dranidis, 2001): (x2) level of decentralisation in decision making, (x4) level of interdepartmental co-operation,
Figure 8. The structural design profile before and after the redesign, plotted over the structural variables x2, x4, x6, x10, x11, x12, x13, x14, x15, x18, x19 and x20 on a 0-5 scale
(x6) low levels of hierarchy, (x10) level of work specialisation, (x11) level of task non-standardisation (versus written tasks), (x12) range of training variety (versus specialisation), (x13) level of responsibilities non-standardisation, (x14) flexibility in production schedule, (x15) concurrency in design, (x18) frequency of change in production level, (x19) number of suppliers, and (x20) frequency of subcontracting (Figure 8). Regarding decentralisation, it was decided that the employees at the stores, as well as the employees in the production department, do not need any permission to order materials and products from the warehouse; they are only obliged to follow the suggestions of the ordering system (DSS). Furthermore, in order to increase flexibility, it was decided that they can freely communicate for the commissioning of goods if inventory falls below the predetermined security level, at which point the ordering system as well as the MRP will raise an alarm. The level of interdepartmental co-operation increased, and decision making regarding the orders fell to the level of the employees. A decision support system (MRP I) will provide the supporting base. The system takes as input the lead time, the amount of security inventory, the quantity demanded, and so forth. Decision making will be centralised only for the orders to the subcontractors and the suppliers outside the company. A WAN based on an ISDN line will link location 1 with store 1 and store 2 (Figure 3), with no connection between the stores. A materials requirements planning system (MRP I) will control orders from the warehouse and the production department to the stores according to inventory. A neural network system was placed in the stores at the front office and will be used for sales forecasting. The barcode at the front office (stores) will adjust the forecasts and update the merchandise shortages. Further, the MRP, taking into account the lead time and the lag time of the system's notification, will place the orders (Figure 8). The warehouse and the production department will co-operate so that order deliveries are on time at the stores and fulfil demand. The forecasting system should use long- and short-term estimates and will determine inventory levels. Furthermore, the capacity management mechanism will inform the production planning system for the preparation of a master production schedule and materials requirements, along with purchasing from suppliers.
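A minimal sketch of the ordering logic that the MRP I/DSS is described as providing follows: a demand forecast, a reorder point that accounts for the lead time, the lag time of the system's notification and the security (safety) inventory, and an order suggestion when the inventory position falls below that point. The case uses a neural network for sales forecasting; a simple exponential smoothing forecast stands in for it here, and every parameter value is illustrative rather than taken from the case.

```python
def smoothed_forecast(sales_history, alpha=0.3):
    """Stand-in short-term forecast of daily demand (the case used a neural network)."""
    forecast = sales_history[0]
    for observed in sales_history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

def reorder_point(daily_demand, lead_time_days, notification_lag_days, security_stock):
    """Demand expected over the lead time plus the system's notification lag, plus safety stock."""
    return daily_demand * (lead_time_days + notification_lag_days) + security_stock

def order_suggestion(inventory_position, daily_demand, lead_time_days,
                     notification_lag_days, security_stock, target_cover_days=14):
    """Raise an order suggestion when inventory falls below the reorder point."""
    rop = reorder_point(daily_demand, lead_time_days, notification_lag_days, security_stock)
    if inventory_position > rop:
        return None  # no alarm, no order
    target_level = daily_demand * target_cover_days + security_stock
    return max(0, round(target_level - inventory_position))

demand = smoothed_forecast([120, 135, 110, 150, 140])   # units per day, illustrative history
print(order_suggestion(inventory_position=600, daily_demand=demand,
                       lead_time_days=3, notification_lag_days=1, security_stock=200))
```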
Figure 9. The proposed planning flow: sales forecasts from the outlets feed production planning and the MRP I system at the warehouse and headquarters, which drive inventory management and ordering
In such a system, hierarchy levels give ground to interdepartmental co-operation regardless of the employees' positions. Responsibilities are defined in a wider context and everyone is responsible for the good operation of the system. The employees need wider training in the whole system (MRP) and job rotation is a necessity. Employees are trained in each task of their department, and distinctions are made only with regard to the control mechanism (Figure 9). The design of the labels and the packaging in the production department requires close co-operation with the stores (where the sales promotion takes place). Interdepartmental co-operation should be kept at a high level in order to achieve concurrency in design and a fast and flexible response to market needs and competition demands. The ultimate target for the system is flexibility and a decrease in the time of response to demand fluctuations. Control will be exercised against this strategic target.
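As a small illustration of how the before-and-after profile of Figure 8 can be read, the sketch below ranks the redesigned structural variables by the size of the change required; the 0-to-5 ratings are hypothetical and do not come from the case.

```python
# Hypothetical 0-5 ratings of the structural variables singled out for redesign (Figure 8).
before = {"x2": 1, "x4": 2, "x6": 1, "x10": 4, "x11": 1, "x12": 1,
          "x13": 1, "x14": 2, "x15": 1, "x18": 2, "x19": 2, "x20": 2}
after  = {"x2": 4, "x4": 5, "x6": 4, "x10": 2, "x11": 4, "x12": 4,
          "x13": 4, "x14": 4, "x15": 4, "x18": 4, "x19": 4, "x20": 3}

def redesign_gaps(before_profile, after_profile):
    """Variables ordered by how far the target profile departs from the current one."""
    gaps = {v: after_profile[v] - before_profile[v] for v in before_profile}
    return sorted(gaps.items(), key=lambda item: abs(item[1]), reverse=True)

for variable, gap in redesign_gaps(before, after):
    print(f"{variable}: change of {gap:+d}")
```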
CHALLENGES/PROBLEMS FACING THE ORGANISATION
Information technology does not belong to conventional fixed automation of mere nuts and bolts; it is a strategic weapon. Because of its processor heritage, that is, the reprogrammable character of IT, the strategy of flexibility seems to fit it well. Because of that characteristic, the selection and planning of information technology should be carried out under the strategic alignment perspective. In our case, the company started from the front office and decided to replicate, as a "third mover", the application that the other companies had adopted, ignoring the fact that IT should take advantage of certain structural characteristics based upon certain business strategic targets (alignment). The barcode system that was proposed could not cut waiting time unless appropriate support from the back office was given. At the strategic level, the overall system should be aligned with business strategy and structure.
There was a missing link: competitive advantage is not created by technology per se, but by the fit between technology and structure based on certain strategic objectives. Thus, a more holistic approach was proposed using Luftman's strategy execution perspective. In our framework of analysis (the alignment model), we included the strategic priorities of the firm, and a link was created between the strategic priorities and the structural variables that determine the structural design model. Further, the alignment perspective was investigated taking into account the technology proposed. This case opened the ground for further work on the subject. The analysis presented was at a strategic level; further analysis would interpret the structural variables previously mentioned and certain business processes in a BPR programme, with the target that those processes be linked, at a more detailed level, to the strategic priorities over a time horizon of four to five years. Strategic alignment theory needs to be taken into account in every IT project in order to avoid failure and to highlight the distinctive competencies of the firm. Management had noticed that competing firms gained an advantage (increased sales) due to shorter service time at the cashiers, and that the shorter time was due to the technology they had adopted. It was recognised that, in order to catch up, investment was needed in a system similar to the one adopted by most competitors: a front-office barcode system. The firm asked for advice in order to choose a provider, as most of the systems were off the shelf. In the past, the firm had not made any important investments in flexible automation, and it was time to reconsider the IT advancements. The firm wanted to act like Feeny and Ives' second mover, which means that after noticing the positive results of automation at others, it would consider replication. But the decision to be the prime or the second mover is a matter of strategy and, besides money, it needs time. Moreover, there was a missing link: information technology per se is not enough to gain competitive advantage. The simplistic approach is: "What system did the competitors adopt? We will implement the same, as it is only a matter of money." But this is only half true, and it is an easy and obvious conclusion. It is not a matter of money only, but also a matter of time and a matter of strategic alignment. The concept of strategic alignment, presented and analysed by Theodorou in IT-Based Management Challenges and Solutions (edited by A. Joia), in the MIT 90s framework (Venkatraman in S. Morton, Ed., The Corporation of the 1990s) and in the IBM study on Enterprise Wide Information Systems (EwIS), was reviewed. According to this concept, three forces have to be examined and should be aligned: business strategy, structure and IT. Theodorou developed a comprehensive and practical system for the examination of strategy and structure in order for IT to be aligned. What should be discussed is whether the firm should follow the strategy execution, the technology potential, the competitive potential or the service level mechanism in order to be aligned. These mechanisms are well explained in Luftman's book Competing in the Information Age.
Moreover, another challenge (besides the strategic appraisal) is to develop a financial approach utilising the criteria of net present value and internal rate of return or, more sophisticatedly, the Black and Scholes real options formula, and to further evaluate the option to delay the project or any other available option. At the completion of the project, a more detailed business process reengineering approach needed to be taken in order to guide the change process regarding the structural characteristics, in the form determined by the alignment model. That challenge, to support the shift between the old and the new organisational form, was overcome with the application of the matrix of change (presented by Brynjolfsson, Renshaw, & Alstyne, //ccs.mit.edu/papers/CCSWP189/CCSWP189.htm).
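A minimal sketch of the financial appraisal mentioned above follows: a conventional net present value, and a Black-Scholes valuation of the option to defer the project, treating the present value of the expected benefits as the underlying asset and the investment outlay as the exercise price. Since the firm withheld its financial data, every figure below is hypothetical.

```python
from math import exp, log, sqrt, erf

def npv(cash_flows, rate):
    """Net present value of cash flows indexed by year (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def deferral_option_value(benefits_pv, investment_cost, risk_free_rate, volatility, years):
    """Black-Scholes value of the option to defer the project (European call analogy)."""
    d1 = (log(benefits_pv / investment_cost)
          + (risk_free_rate + 0.5 * volatility ** 2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    return benefits_pv * norm_cdf(d1) - investment_cost * exp(-risk_free_rate * years) * norm_cdf(d2)

# Illustrative figures only: an up-front outlay followed by five years of net benefits.
project = [-100_000, 25_000, 30_000, 30_000, 30_000, 30_000]
print(round(npv(project, rate=0.10)))                      # conventional NPV at a 10% discount rate
print(round(deferral_option_value(benefits_pv=105_000,     # present value of expected benefits
                                  investment_cost=100_000, # cost of rolling the system out
                                  risk_free_rate=0.05, volatility=0.35, years=2)))
```

If the value of waiting exceeds the conventional NPV of investing now, delaying the roll-out may be the better decision, which is the kind of comparison the text calls for.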
Today, a year after the system was implemented (three years of implementation and adaptation), important operational and strategic benefits have been observed. Specifically, the average queuing time decreased by about 40%, and sales increased by 15%. Customers prefer the firm over the competition due to the shorter service time and the availability of goods on the shelves. Increased sales also increased the need for personnel (cashiers) at the front office, but the speed of the barcode system offset that need and saved the firm the additional cost. Moreover, a 15% reduction was achieved in warehouse inventory, as well as in warehouse personnel. Inventory was almost eliminated at the stores; thus, a total decrease of around 30% was achieved in operational cost. Conversely, the maintenance and operational cost of the system increased by about 10%, but only for the first three years of implementation and adaptation. That cost was counterbalanced by long-term technical support contracts with the providers, and it has now started to decrease due to learning economies. The most important challenge for the firm now is to follow a strategic growth pattern by opening stores in other locations, thus taking advantage of the increased operational flexibility. That scheme is under consideration in the strategic plan, and various financing models (such as franchising) are being examined, as well as location economics for the store locations.
REFERENCES
Adam, E. E. (1994). Alternate quality improvement practices and organization performance. Journal of Operations Management, 12, 27-44. Blackburn, R., & Cummings, L. (1982, December). Cognitions of work unit structure. Academy of Management Journal, 836. Browne, J., Dubois, D., Rathmill, K., Sethi, S. P., & Stecke, K. E. (1984). Classification of flexible manufacturing systems. FMS Magazine, 2(2), 114-117. Burton, R., & Obel, B. (1995). Strategic organizational diagnosis and design. Kluwer Academic Publisher. Chase, R. B., Kumar, K. R., & Youngdahl, W. E. (1992). Service based manufacturing: The service factory. Production and Operations Management, 1(2), 175-84. De Meyer, A., Nakane, J., Miller, J. G., & Ferdows, K. (1989). Flexibility: The next competitive battle (The manufacturing future survey). Strategic Management Journal, 10, 135-144. Earl, M. (1990). Information management: The strategic dimension. Oxford University Press. Eden, C., & Spender, C. (1998). Managerial and organizational cognition. Sage Publications. Edmondson, A., & Moingeon, B. (1990). When to learn how and when to learn why: Appropriate organizational learning processes as a source of competitive advantage. SagePublications.
Feeny, D. F., & Ives, B. (1997). IT as a basis for sustainable competitive advantage. In L. Wilcocks, D. Feeny, & G. Islei (Eds.), Managing IT as a strategic resource. McGraw Hill. Ferdows, K., & De Meyer, A. (1990). Lasting improvements in manufacturing performance: In search of a new theory. Journal of Operations Management, 9(2), 16884. Flynn, B. B., Schroeder, R. G., & Sakakibar, S. (1994). A framework for quality management research and associated measurement instrument. Journal of Operations Management, 11, 339-66. Frazelle, E. H. (1986, March). Flexibility: A strategic response in changing times. I.E., 1720. Giffi, C., Roth, A. V., & Seal, G. (1990). Competing in world class manufacturing: America’s 21st century challenge. Homewood, IL: Business One Irwin. Hage, J. (1965, December). An axiomatic theory of organizations. Administrative Science Quarterly. Hall, R. W. (1987). Attaining manufacturing excellence. Homewood, IL: Dow JonesIrwin. Hall, R. W., & Nakane, J. (1990). Flexibility: Manufacturing battlefield of the ’90s: Attaining manufacturing flexibility in Japan and the United States. Wheeling, IL: Association for Manufacturing Excellence. Hayes, D. R., & Wheelwright, S. C. (1984). Restoring our competitive edge. New York: John Wiley & Sons. Hayes, R. H., & Clark, K. B. (1985). Exploring the sources of productivity differences at the factory level. In K. B. Clark, R. H. Hayes, & C. Lorenz (Eds.), The uneasy alliance: Managing the productivity-technology dilemma. Boston: Harvard Business School Press. Henderson, J. C., & Venkatraman, N. (1996). Aligning business and IT strategies. In J. Luftman (Ed.), Competing in the information age. New York: Oxford. Hill, T. (1993). Manufacturing strategy: The strategic management of the manufacturing function (2nd ed.). The Macmillan Press Ltd. London Business School. Kenney, M., & Florida, R. (1993). Beyond mass production: The Japanese System and its transfer to the U.S. New York: Oxford University Press. Luftman, J. (1996). Competing in the information age. New York: Oxford. Manthou, V., Vlahopoulou, M., & Theodorou, P. (1996, August). The implementation and use of material requirements planning system in Northern Greece: A case study. International Journal of Production Economics, 45(1-3), 187. Miller, D. (1987). Strategy making and structure: Analysis and implications of performance. Academy of Management Journal, 30(1), 7. Mintzberg, H., & Quinn, J. B. (1996). The strategy process. Englewood Cliffs: PrenticeHall. Moingeon, B., Ramanantsoa, B., Metails, E., & Orton, D. (1998). Another look at strategystructure relationships. European Management Journal, 16(3), 297. Morton, M. S. (1991). The corporation of the 1990’s. Oxford University Press. Nakane, J., & Hall, R. W. (1991). Holonic manufacturing: Flexibility — The competitive battle in the 1990s. Production Planning and Control, 2, 2-13.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
74 Theodorou
Noble, M. (1997). Manufacturing competitive priorities and productivity: An empirical study. International Journal of Operations and Production Management, 17(1), 85-99. Panzar, C., & Willing, R. (1981). Economies of Scope. American Economic Review, 71(2), 268-272. Parthasarthy, R., & Sethi, P. (1992). The impact of flexible automation on business strategy and organizational structure. Academy of Management Review, 17(1), 86111. Pugh, D., Hickson, D., Hinings, C., & Turner, C. (1968, June). Dimensions of organizations structure. Administrative Science Quarterly, 7. Raymond, L., Pare, G., & Bergeron, F. (1995). Matching information technology and organizational structure: An empirical study with implications for performance. European Journal of Information Systems, 4(3). Reimann, B. C. (1974, December). Dimensions of structure in effective organizations: Some empirical evidence. Academy of Management Journal, 693-708. Rich, P. (1992). The organizational taxonomy: Definition and design. Academy of Management Review, 17, 758. Robbins, S. (1990). Organization theory: Structure designs and applications. Prentice Hall. Sanchez, J. C. (1993). The long and thorny way to an organizational taxonomy. Organizational Studies, 14, 73. Sayer, A. (1986, Spring). New developments in manufacturing: The just-in-time system. Capital and Class, 28. Schmenner, R. W. (1988). Behind labor productivity gains in the factory. Journal of Manufacturing & Operations Management, 1(4), 323-38. Schmenner, R. W. (1991). International factory productivity gains. Journal of Operations Management, 10(2), 229-54. Schmenner, R. W., & Rho, B. H. (1990). An International comparison of factory productivity. International Journal of Operations & Production Management, 10(4), 1631. Skinner, W. (1969, May-June). Manufacturing: Missing link in corporate strategy. HBR, 136-145. Skinner, W. (1985). Manufacturing: The formidable competitive weapon. New York: Wiley. Skinner, W. (1996). Manufacturing strategy on the ‘S’ curve. Production and Operations Management Journal, 5(1), 3-13. Stock, G., & Tatikonda, M. (2000). A typology of project-level technology transfer processes. Journal of Operations Management, 18, 723. Storper, M., & Scott, A. (1988, June 16-18). The geographical foundations and social regulation of flexible production complexes. Paper presented in the International Conference on Regulation Theory, Barcelona. Swink, M., & Way, M. H. (1995). Manufacturing strategy: Propositions, current research, renewed directions. International Journal of Operations & Production Management, 15(7), 4-26.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
A DSS Model 75
Theodorou, P. (1996). The restructuring of SME production: The strategy of flexibility. In I. Siskos, C. Zopoynidis, & C. Pappis (Eds.), The management of SME in front of 2000. University Publications of Creta in Greek. Theodorou, P. (2003). Strategic information systems: The concept of alignment. In L. Antonio (Ed.), IT-based management challenges and solutions. Hershey, PA: Idea Group Publishers. Theodorou, P., & Dranidis, D. (2001, July). Structural design fit: A neural network approach. In 2nd European Conference on Intelligent Management Systems in Operations, Operational Research Society, University of Salford. Theriou, N. G. (2002). The long-range goals or objectives and targets and the relationship between them: A survey of the top Greek companies. Review of Economic Sciences, TEI of Epirus, 2(3).
Petros Theodorou holds a PhD in strategy and management of information systems. He also holds a postdoctoral qualification in business strategy and finance and an HBSch in economics. Theodorou is currently working as a senior researcher in the Department of Strategy and Planning of the Public Power Corporation SA (Athens), a vertically integrated monopoly in the Greek electricity sector. In addition, he is working at the Technological Educational Institution of Piraeus. His previous working experience was in Computer Logic, Astron/PEP, and other firms. Theodorou has also worked as an adjunct professor at the Technological Educational Institution in Thessaloniki and as a senior researcher at the Aristotle University of Thessaloniki. The author is a member of NYA, the Economic Chamber of Greece, the Management of Technology Organization, Who's Who Marquis, and other organisations, and holds positions on the boards of directors of various firms. He has published as author and co-author with various publishers, such as Idea Group Publishing, Elsevier Science, the University Publishers of Crete, and others.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 157-176, © 2004.
Chapter V
The Columbia Disaster: Culture, Communication & Change

Ruth Guthrie, California Polytechnic University, Pomona, USA
Conrad Shayo, California State University, San Bernardino, USA
EXECUTIVE SUMMARY
The National Aeronautics and Space Administration (NASA) is a government organization founded to explore space in order to better understand our own planet and the universe around us. Over NASA's history there have been unprecedented successes: the Apollo missions that put people into space and on the moon, the remarkable findings of the Hubble space telescope, and the Space Shuttle Program, which allows astronauts to perform scientific experiments in orbit from a reusable space vehicle. NASA continues to be a source of national wonder and pride for the United States and the world. However, NASA has had failures too. In February of 2003, the Space Shuttle Columbia disintegrated as it returned to earth, seventeen years after the Space Shuttle Challenger exploded during take-off. As information was collected, investigators found that many of the problems uncovered during the Challenger investigation were also factors for Columbia. Underlying both disasters was the problem of relaying complex engineering information to management in an environment driven by schedule and budget pressure. Once again, NASA is looking at ways to better manage space programs with limited resources.
ORGANIZATIONAL BACKGROUND
NASA was founded in 1958 to explore space. A year earlier, the Soviet Union beat the United States into space by launching Sputnik, the first satellite. In the United States,
this was seen as an embarrassment, and the need for a space program was pressing. Only a few short months after the formation of NASA, the first American space missions were launched. In 1969, NASA's Apollo 11 mission put the first humans on the moon's surface. NASA's space program has changed the way humankind views the earth and has helped bring about many important scientific findings that have resulted in numerous "spin-offs" in science, technology, and commerce. After many other successful manned space flights, the Space Shuttle Program was initiated. The goal was to develop a reusable vehicle for frequent access to and from space. After nine years of development, the first shuttle, Columbia, was launched from Kennedy Space Center in 1981. Columbia was a remarkable success, though the promise of frequent access to space has never been realized. (Columbia ultimately flew 28 missions.) Today, NASA is renowned for its discoveries and explorations in space, both manned and unmanned. NASA is truly a unique governmental agency, with the lofty mission shown in Figure 1.
In 1986, the world was shocked and saddened as the Challenger exploded during take-off. Seven crew members died, among them Christa McAuliffe, a schoolteacher and the first civilian selected to ride the shuttle. The Rogers Commission, formed by an executive order from President Reagan, found that design flaws contributed to the Challenger's explosion. During the investigation, it was revealed that NASA engineers and management knew about the problems with the O-rings and failed to act on the information that was available. The report was also critical of safety procedures and Space Shuttle Program management.
Seventeen years later, on February 1, 2003, the space shuttle Columbia was lost. In the control room, contact with Columbia was lost at 9:00 a.m. Minutes later, the Houston mission control room was locked down as the ground support team realized a disaster was occurring. By 2:05 p.m., President Bush had addressed the public: "Columbia's lost; there are no survivors". The Columbia had disintegrated when it reentered the earth's atmosphere. Seven astronauts died, including the first Israeli astronaut and an Indian-born astronaut who had immigrated to the United States. The sadness of the national disaster deepened as pieces of the Columbia began to turn up in Texas and Louisiana. A NASA internal investigation was conducted. In the wake of 9/11, theories of a terrorist attack surfaced and were quickly dispelled. The theory that a piece of foam might have damaged the wing was proposed; it, too, was quickly dismissed, on the grounds that foam could not cause such a catastrophic failure. As the pieces of Columbia were collected and the shuttle was reassembled, it was determined that a large piece of insulation foam had broken off during launch and hit Columbia's left wing at a relative velocity of roughly 500 mph. On reentry, the heat was too great for the damaged shuttle, causing it to disintegrate.
Figure 1. NASA’s vision and mission NASA Vision To improve life here, To extend life to there, To find life beyond.
NASA Mission “To understand and protect our home planet, To explore the universe and search for life, To inspire the next generation of explorers …as only NASA can.”
Disturbingly, the foam was discussed at NASA internal mission meetings, but only in passing. An external review board, led by retired Admiral Hal Gehman, was appointed to investigate what had happened at NASA and what could have prevented the Columbia tragedy. In August of 2003, after the board had conducted over 230 interviews with NASA personnel, from mechanics to astronauts, a 243-page report summarized the findings. Excerpts from the Columbia Accident Investigation Board (CAIB) report are highly critical of NASA management and the culture of the agency:

Based on NASA's history of ignoring external recommendations, or making improvements that atrophy with time, the Board has no confidence that the space shuttle can be safely operated for more than a few years based solely on renewed post-accident vigilance. Unless NASA takes strong action to change its management culture to enhance safety margins in shuttle operations, we have no confidence that other "corrective actions" will improve the safety of shuttle operations. The changes we recommend will be difficult to accomplish — and they will be internally resisted. (CAIB Report, Vol. 1, p. 13)

Changes that NASA has made include the reassignment of more than a dozen upper-level managers to different positions. In the aftermath of the disaster, several critics of NASA wanted to hold someone accountable for the management mistakes that led up to it, including Sean O'Keefe, NASA's administrator. NASA now has the task of reviewing and deciding what to do with the external investigation report.
SETTING THE STAGE
Engineering development for the Space Shuttle Program began in the 1970s. The purpose was to develop a cheaper way to access space. Previously, launch hardware could not be reused after a mission. American astronauts would parachute into the ocean in a heat-protected capsule, and the military would locate the capsule in the ocean and recover the astronauts. (Russian cosmonauts parachuted onto land, a harder landing.) Rockets, life support, communication, and other subsystems were lost when the mission concluded. A launch vehicle that flew back from space to earth had the potential to save money because these subsystems were reusable. The space shuttle has three main components: the orbiter, the external fuel tank, and the solid rocket boosters. The orbiter is the main reusable component, carrying the astronauts, any payload, and all the necessary support subsystems. The external tank supplies propellant to the orbiter's main engines. The rocket boosters provide most of the thrust the shuttle needs to climb toward orbit. Early in the ascent, the boosters are jettisoned and fall back to earth, where they are recovered for use in future shuttle missions. The external tank is also detached from the orbiter, but it is not recovered. Over its history, the shuttle program has had several milestones (see Appendix A-1), including the first American woman in space (Sally Ride, who also participated in
both Challenger and Columbia investigations), docking with the Russian Mir space station, and the return to space of astronaut and then-senator John Glenn.
In 1986, after the Challenger exploded, an investigation of the disaster by the Rogers Commission identified the technical deficiency that led to the explosion: the O-rings between segments of the right solid rocket motor failed. Surprisingly, engineers were aware of the potential O-ring problem, yet the decision to launch was still made. Systems existed to track anomalies. However, as stated in the Rogers report:

NASA's system for tracking anomalies for Flight Readiness Reviews failed in that, despite a history of persistent O-ring erosion and blow-by, flight was still permitted. It failed again in the strange sequence of six consecutive launch constraint waivers prior to 51-L, permitting it to fly without any record of a waiver, or even of an explicit constraint. Tracking and continuing only anomalies that are "outside the data base" of prior flight allowed major problems to be removed from and lost by the reporting system. (Rogers Commission, p. 148)

The Rogers Commission also found several organizational problems that led to the decision to launch Challenger under dangerous conditions. A tremendous amount of planning takes place prior to a shuttle launch. In addition to planning the payload and the scientific mission, several other activities take place, and a delay in launching a shuttle disrupts the schedule for all the missions that follow. There are many stakeholders, including astronauts, NASA engineers and administrators, as well as engineers and managers from subcontracting organizations. During the Rogers Commission investigation, the structure of the organization was reported to be complex and non-inclusive of engineering and subcontractor viewpoints. For example, Rockwell Inc. and Thiokol expressed reservations at the flight readiness meeting before the decision to launch. The report indicates in several instances that Marshall Space Flight Center management pressured and influenced subcontractors to approve the launch decision, even though their support for launch was ambiguous at best. The major findings of the Rogers Commission, with excerpts from its final report, are presented in Table 1.
Recommendations made by the Rogers Commission were both technical and managerial. The technical recommendations included: (a) redesigning the rocket boosters, (b) upgrading the shuttle tires and brakes, and (c) retrofitting the shuttles to include escape systems. Some of the managerial recommendations were:
• Create a strict risk-reduction program.
• Reorganize (decentralize) so that information is made available to all levels of management.
• Place astronauts in NASA management positions so that their viewpoints are represented.
• Revoke and forbid all waivers to flight safety; all contractors need to agree to launch.
• Have technical issues reviewed by independent government agencies, which report their analysis to NASA.
• Have the NASA Associate Administrator for Space Flight and the NASA Associate Administrator for Safety sponsor open reviews, encouraging NASA and contractor management, engineering, and safety personnel to discuss concerns.
• Allow anonymous reporting of shuttle safety concerns.
Table 1. Major findings of the Rogers Commission

Schedule Pressure / Budget Pressure
"The pressure on NASA to achieve planned flight rates was so pervasive that it undoubtedly adversely affected attitudes regarding safety." "Operating pressures were causing an increase in unsafe practices." "Schedule pressures played an active role in the decision to launch the Challenger. Most emphasis in the program was placed on cost-cutting and meeting schedule requirements, rather than flight safety."

Communication and Management Problems
The Commission stated that a "serious flaw" existed in the decision-making processes at NASA. The launch decision did not take into account or understand the concerns raised by the Thiokol engineers and some Marshall engineers. Further, waiving of launch constraints seemed to occur with no checks at necessary levels of management. "The Commission is troubled by what appears to be a propensity of management at Marshall to contain potentially serious problems and to attempt to resolve them internally rather than communicate them forward. This tendency is altogether at odds with the need for Marshall to function as part of a system working toward successful flight missions, interfacing and communicating with the other parts of the system that work to the same end."

Silent Safety Program
• During lengthy testimony, NASA's safety staff was never mentioned.
• No safety personnel were present at the Mission Management Team meetings or were part of the command structure for launch decisions.
• Additional problems of safety staff and requirements being reduced were reported.
The organizational structure of the Kennedy and Marshall space centers had safety and quality offices under the supervision of the organizations that they were supposed to check. This was a conflict of interest. Usually, organizations have safety and quality control departments report directly to senior management, independent from the pressures of individual programs.
In response, NASA implemented the recommendations of the Rogers Commission and adopted a more realistic launch schedule. It is difficult, however, to assess which of these changes remained fully in place over time.
Culture in Government Agencies
Throughout the discussions of NASA management, the topic of organizational culture is prevalent. "Every organization has a culture, that is, a persistent, patterned way of thinking about the central tasks of human relationships within an organization. Culture is to an organization what personality is to an individual. Like human culture generally, it is passed on from one generation to the next. It changes slowly if at all" (Wilson, 1989). Adoption of information systems, or changes to processes created by the introduction of information systems, is frequently doomed because the organization's culture is unwilling to accept change.
NASA was formed to re-establish U.S. dominance in space and science. Drawing from existing organizations and top research scientists, NASA became a high-performance organization (McCurdy, 1993). In an amazingly short period of time, NASA was able to achieve the Apollo missions and become a source of national pride and prestige. The culture at the time, described by Vaughan (1996), was that of engineering. The very word "engineer" implies that something is being created and tested so that it can be engineered again and improved. NASA's early culture of scientists and engineers relied on testing, research, and rigorous methodology to find out what worked in achieving manned space flight. The culture held strong beliefs that in-house technical ability was necessary, that the people hired by NASA were the best and brightest in the world, and that risk and failure were part of doing business. Failure was tolerated because the view was that you cannot achieve success without innovation and experimentation.
McCurdy points out that high-performance cultures tend to be unstable and short-lived. As the organization grew and the amount of work increased, the culture underwent a significant change, becoming more bureaucratic. Instead of keeping all work "in-house", NASA began to use outside contractors to do research and development. The culture became more averse to failure and innovation, and employees began to feel that their failures would not be tolerated. This was reflected in a NASA culture survey in 1988 (McCurdy, 1993), which showed that employees were dissatisfied with work going to outside contractors and were finding it difficult to get things done in a large bureaucracy. They felt that a loss of technical knowledge was occurring and that failure was no longer tolerated.
Information Systems Development for Government Agencies
In 1986, information systems used in organizational decision making were very different from the Web-based, graphical systems we have today. Systems ran on mainframe computers and were accessible to users through command-driven or primitive, menu-driven interfaces. Information systems were slow, not user-friendly, and required training to operate. Commercial, mass-produced software (commercial off-the-shelf [COTS] software) was scarce. If an agency like NASA wanted programs developed for it, the result was customized software that was expensive and time-consuming to create.
Today, software development methodologies offer prototyping and iterative design so that users can identify problems in a system before it is fully built. In 1986, government agencies followed a strict software methodology (DoD-STD-2167A, 1995; see Appendix A-2) that defined all the software requirements at the beginning of the life cycle and then delivered a system, often years later, based upon written requirements that were difficult for the user to visualize. Governmental systems development contracts were notorious for cost overruns, late delivery, and failure to meet the needs of users. Further, systems developed for agencies like NASA might be high risk and require high reliability, and achieving high reliability and fault tolerance is very expensive. Today's methodologies, using rapid application development, prototyping, and iterative design, make it cheaper to produce these systems. However, they are still expensive because of the effort involved in ensuring reliability.
CASE DESCRIPTION

Columbia's Final Mission
Columbia’s final mission, STS-107, was to perform experiments related to physical, life and space sciences. The seven astronauts conducted more than 80 experiments while in orbit. This mission was an extended orbit, lasting 16 days. The crew was noted in the media because it had the first Israeli astronaut, Payload Specialist Ilan Ramon, quickly hailed as a national hero of Israel. See Appendix A-3 for a listing of Columbia’s crew and the STS-107 Mission. The mission goals are available in Appendix A-4. When Columbia launched on January 16, 2003, cameras caught images of insulation foam breaking loose from the fuel tank. Columbia entered its orbit over earth without incident and conducted its 16-day mission. On February 1, 2003, while returning to earth, Columbia lost communication with Johnson Space Center. After realizing that a disaster has occurred, Flight Director Leroy Cain order the communication center locked down and the contingency plan order is implemented. As the news reports begin, people from Texas to California report seeing the shuttle break up as it entered the atmosphere. Minimal talk of rescue fades as President Bush addresses the nation, “The Columbia’s lost. There are no survivors”. People begin finding debris from the shuttle in Texas and Louisiana. NASA issues a warning that the debris might be hazardous and that people should report any finding to NASA, and not touch the debris. NASA’s top administrator, Sean O’Keefe, vows to investigate what went wrong. The Columbia Accident Review Board (CAIB) is formed, chaired by retired Navy Admiral Harold Gehman, Jr. Members of the review board are listed in Appendix A-5. The possibility of a terrorist strike is offered and quickly rejected by NASA. As more is discovered about the Columbia disaster, the foam debris that broke loose on take-off is mentioned. However, Shuttle Program Manager Ron Dittemore says this is highly unlikely. A few days later, he retracts these statements. As the CAIB investigates, some surprising information about the NASA and the shuttle program comes to light.
• NASA had known about the foam debris-shedding problem for some time. However, since it had never caused a problem before, it came to be routinely ignored.
• NASA had the opportunity to obtain images of Columbia in orbit on the day of the disaster, but declined, feeling that this was unnecessary.
• Budget restrictions led to demoralizing attitudes in the shuttle safety program. For example, NASA inspectors were required by management to supply their own tools and were restricted from making spot checks.
• Lower-level shuttle personnel felt that they could not raise issues of quality and safety without risk of being fired.
With this in mind, the CAIB conducted more than 230 interviews of shuttle personnel at every level. In their final report, they begin their analysis of organizational culture by revisiting the history of NASA and the Challenger disaster 17 years earlier. The shuttle program organizational chart is shown in Appendix A-6. The CAIB final report identifies three problem areas: physical flaws in design that led to the disaster, weaknesses in NASA's organization, and other significant observations. The report indicates that technical explanations were not enough to explain the Columbia disaster and is critical of NASA management:

In our view, the NASA organizational culture had as much to do with this accident as the foam. Organizational culture refers to the basic values, norms, beliefs, and practices that characterize the functioning of an institution. At the most basic level, organizational culture defines the assumptions that employees make as they carry out their work. It is a powerful force that can persist through reorganizations and the change of key personnel. It can be a positive or a negative force. (CAIB Final Report, Part II, p. 1)
A Matter of National Pride
When Sputnik, the first satellite in space, was launched by the Soviet Union in 1957, Americans were stunned. Their Soviet rivals had beaten them into space. NASA was formed a year later, and in 1961 it rallied the American people to be the first to the moon. This was an almost unimaginable engineering feat, and in 1969 NASA succeeded when Neil Armstrong became the first person to walk on the moon. NASA's culture was defined through the spirit of exploration and achievement of the impossible. Those who witnessed the 1969 moon walk clearly remember where they were and the awe inspired by the greatness of the scientific and engineering achievement. Today, the shuttle program has made space flight seem common; children do not remember a time when this was only a dream. Over time, the emphasis for space exploration has gone from achievement of the impossible to cost savings and streamlined operations. Figure 2 shows the NASA budget as a percentage of federal funding; a dramatic decline is evident in the early '70s. Figure 3 shows the relatively flat spending in recent years on the shuttle and International Space Station programs. The CAIB contends that the NASA culture clashes with the shuttle philosophy of cheaper, frequent access to space. The Board goes further, stating that the Rogers Commission recommendations made after the Challenger disaster were never fully realized.
Figure 2. NASA budget as a percentage of the federal budget from the CAIB Final Report
(The chart plots NASA's annual budget as a percentage of the total federal budget, on a scale of 0.0 to 4.0 percent, for fiscal years 1959 through 2001; the share falls sharply in the early 1970s.)
Figure 3. Space Shuttle Program spending in recent years (in millions of dollars)

                 FY 1998   FY 1999   FY 2000   FY 2001   FY 2002   FY 2003
Space Station     2501.3    2304.7    2323.1    2087.4    1721.7    1492.1
Space Shuttle     2922.8    2998.3    2979.5    3118.8    3272.8    3208.0
DELICATE BALANCE OF RISK, BUDGET AND SCHEDULE IN SPACE EXPLORATION
Only three years after the Challenger disaster, NASA was viewed as being overly cautious and expensive in adhering to the Rogers Commission recommendations. NASA cut the shuttle program budget by 21% over a three-year period (1991-1994). Interestingly, the White House had formed a blue-ribbon committee in 1990 to investigate NASA spending; the Augustine Committee concluded that NASA was substantially under-funded and needed a budget increase of 10% per year. During this time, the White House appointed Daniel S. Goldin, a former aerospace industry executive, as administrator of NASA. Viewed as a change agent, Goldin set a goal of "faster, better, cheaper", in the hope that streamlining efforts at NASA would result in an eventual manned mission to Mars. Goldin made dramatic changes to NASA, following a Deming-like management philosophy and seeking a less bureaucratic approach with more empowered workers. Goldin's greatest contribution to NASA was possibly the creation of the International Space Station. Sean O'Keefe replaced Goldin in 2001.
At this time, the debate centered on privatization of NASA. The rationale was that privatizing portions of the shuttle program could save money and might facilitate more commercial uses of the shuttle. Parallel to this push was the effort to complete Node 2 of the International Space Station. The scheduled date for deploying Node 2, February 2004, was widely viewed as unrealistic, yet it was programmed into a screen-saver countdown and sent to all shuttle program managers.
The CAIB made several recommendations, including a redesign of the external tank's thermal protection system to eliminate debris shedding and several other technical enhancements. The Board also expressed the need to obtain better images of the shuttle, on-board and from external sources. The need to perform emergency repairs of the shuttle from space was identified, along with the need to return to the industry standard for foreign object debris, which had been waived for the shuttle. The managerial recommendations were:
• Ease scheduling pressure to be consistent with resources.
• Expand Mission Management Team crew and vehicle safety contingencies.
• Establish an independent Technical Engineering Authority that oversees technical standards, risk, and waivers, and verifies launch readiness.
• Reorganize the Space Shuttle Integration Office so that integration of all elements of the program is possible.
Chapter 7 of the CAIB report is dedicated to the organizational causes of the Columbia disaster. Normal Accident Theory was used to describe the culture of NASA: in a complex, noisy organization, management actions can increase noise levels to a point where communication is ineffective. High Reliability Theory contends that organizations are closed systems where management is characterized by an emphasis on safety, redundant systems are seen as necessary rather than costly, and the organizational culture is reliability driven. The problems highlighted in the report were:
• A lack of commitment to a culture of safety: "... reactive, complacent and dominated by unjustified optimism".
• Lack of adequate communication: managers in charge were resistant to new information indicating what they did not want to hear. Additionally, the databases in place to support decision making and data dissemination were difficult to use.
• Oversimplification: foam strikes had occurred over 22 years and had come to be viewed as a maintenance issue, not a safety problem.
The similarities between the organizational cultures surrounding the Challenger and Columbia disasters are striking.
PROBLEMS WITH INFORMATION TECHNOLOGY
As with the Challenger disaster, Columbia also had communication problems related to information technology. During the Challenger investigation, it became apparent that several memos had been written indicating that the O-ring problem was seen by Thiokol and NASA engineers as dangerous. The memos were considered difficult to understand and focused on technical detail rather than risk. Safety systems in place during the Challenger era were lax, allowing waivers and failing to track waivers and anomalies across flights. Several analyses of the Columbia disaster point to communication through PowerPoint and difficult-to-use databases as contributors to the problems with the shuttle safety program.
PowerPoint
Given the technical nature of the Columbia disaster and all the complexity, technical and cultural, surrounding the launch decisions, the CAIB's findings on PowerPoint presentations were quite remarkable:

As information gets passed up an organizational hierarchy, from people who do analysis to mid-level managers to high-level leadership, key explanations and supporting information is filtered out. In this context, it is easy to understand how a senior manager might read this PowerPoint slide and not realize that it addresses a life-threatening situation. At many points during its investigation, the board was surprised to receive similar presentation slides from NASA officials in place of technical reports. The Board views the endemic use of PowerPoint briefing slides instead of technical papers as an illustration of the problematic methods of technical communication at NASA. (CAIB report, Vol. 1, p. 191)

It seems that the overuse of PowerPoint briefings, in place of detailed analysis, made it difficult for meeting attendees to identify the launch risks for Columbia. Edward Tufte, a prominent Yale professor and expert in the visual display of data, analyzed a sample slide in The New York Times (Schwartz, 2003), showing how misleading and vague it was in conveying the risk of the foam strike. In his short book The Cognitive Style of PowerPoint (Tufte, 2003), criticizing PowerPoint and its use in organizations, Tufte gives examples of the communication failures in NASA's presentations. For example, a slide title states, "Review of Test Data Indicates Conservatism for the Tile Penetration". This could be construed to indicate that the foam posed no risk to the tiles; however, the title actually refers to conservatism in the choice of models used for prediction. Only at the bottom of the slide, in a lower-level bullet, is the important information conveyed to the audience, namely, "Flight condition is significantly outside the test database". Tufte goes further, saying that the low resolution of the slide (presumably to condense information) and the use of condensed, non-specific phrasing add to the ambiguity of the communication. Note that in Visual Explanations (1997), Tufte also analyzed the Challenger disaster, examining 13 view graphs prepared for management and faxed to NASA; his critique found the charts unconvincing and non-explicit in stating the impact of temperature on the O-rings.
Information Systems Used to Support Safety
The CAIB findings indicated critical problems with the information systems used to support shuttle safety:

The information systems supporting the shuttle — intended to be tools for decision making — are extremely cumbersome and difficult to use at any level.

While tools were in place to support safety decision making, their design made them difficult to use, causing them to fall into disuse. In 1981, the IBM PC was introduced. Its operating system, the Disk Operating System (DOS), was command-line driven and difficult to use. Windows 1.0 was released in 1985 but was viewed as slow and still difficult to use. In 1990, Windows 3.0 was released and graphical user interfaces (GUIs) gained massive popularity. GUIs were easy to use because the user did not have to remember command names or sequences to operate the computer; the desktop metaphor made the interface intuitive, drawing on experiences users already had. Home use of PCs was rising rapidly. Consequently, the customized, expensive systems developed for governmental agencies seemed archaic. A GUI-based safety system that was easier to use and interpret, and that did not allow users to bypass safety features, might have led to a more informed decision.
• Another system existed that was easier to use, but its use was not required: "The Lessons Learned Information System database is a much simpler system to use, and it can assist with hazard identification and risk assessment. However, personnel familiar with the Lessons Learned Information System indicate that design engineers and mission assurance personnel use it only on an ad hoc basis, thereby limiting its utility". Given that a simpler system was available, it is surprising that its use was not organizational policy. User training may be an issue here as well.
• The simulation tool called Crater (mentioned in the slide discussed above) was inadequate for analyzing foam impact data.
• The CAIB also indicated that the decentralized manner in which the shuttle program operated could hide unsafe conditions, whereas a centralized way of handling safety issues would foster better communication and insight.
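The cross-flight anomaly tracking that both the Rogers and CAIB reports call for can be made concrete with a small sketch. The fragment below is purely illustrative and is not drawn from any actual NASA system; the class names, fields, and threshold are assumptions invented for this example. It captures two ideas emphasized above: recurring anomalies are surfaced automatically rather than quietly accepted as routine, and an open anomaly can only be closed by recording an explicit, attributable waiver rather than by being bypassed.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Anomaly:
        """A single observed deviation (e.g., foam shedding, O-ring erosion)."""
        flight: str
        category: str              # e.g., "foam debris", "O-ring erosion"
        observed_on: date
        waiver: str | None = None  # filled in only by an explicit, signed waiver

    class AnomalyLog:
        """Tracks anomalies across flights; nothing can be silently bypassed."""

        def __init__(self, recurrence_threshold: int = 2):
            self.records: list[Anomaly] = []
            self.recurrence_threshold = recurrence_threshold

        def report(self, anomaly: Anomaly) -> None:
            self.records.append(anomaly)

        def recurring(self) -> dict[str, int]:
            """Anomaly categories seen repeatedly -- candidates for escalation,
            not for quiet acceptance as 'in-family' events."""
            counts: dict[str, int] = {}
            for a in self.records:
                counts[a.category] = counts.get(a.category, 0) + 1
            return {c: n for c, n in counts.items() if n >= self.recurrence_threshold}

        def open_items(self, flight: str) -> list[Anomaly]:
            """Anomalies for a flight with no recorded waiver; a readiness review
            tool could refuse to sign off while this list is non-empty."""
            return [a for a in self.records if a.flight == flight and a.waiver is None]

        def waive(self, anomaly: Anomaly, signed_by: str, rationale: str) -> None:
            """The only way to close an item: an attributable waiver with a rationale."""
            anomaly.waiver = f"{signed_by}: {rationale}"

    # Example use
    log = AnomalyLog()
    log.report(Anomaly("STS-112", "foam debris", date(2002, 10, 7)))
    log.report(Anomaly("STS-107", "foam debris", date(2003, 1, 16)))
    print(log.recurring())            # {'foam debris': 2} -> escalate, do not normalize
    print(log.open_items("STS-107"))  # unwaived items would block readiness sign-off

The point of the sketch is not the code but the design choice it embodies: the reporting system, rather than the briefing slide or institutional memory, is what remembers that a problem keeps recurring and that no one has formally accepted the risk.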
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

Can NASA Change?
Soon after the CAIB report was submitted, Brigadier General (and CAIB member) Duane Deal submitted a supplement to it. Deal felt strongly that not enough had been said about preventing "The Next Accident": in his view, the language of the CAIB report was not strong enough in specific areas of its criticism of NASA management for letting schedules and budgets take precedence over safety. Deal expressed strong reservations about NASA's ability to change:

History shows that NASA often ignores strong recommendations; without a culture change, it is overly optimistic to believe NASA will tackle something relegated to an "observation" when it has a record of ignoring recommendations.
Can NASA’s Culture be Transformed?
In the wake of the Columbia disaster, 12 top-level shuttle administrators and program managers were reassigned to different positions within NASA. In May of 2003, former Marine William Parsons replaced Dittemore as shuttle program manager. Parsons previously served as director of the Stennis Space Center and is viewed as a NASA insider with strong leadership skills. Over a transition period, Parsons continues to learn about the shuttle program from the retiring Ron Dittemore. Now he has the daunting task of changing NASA's culture. His first major assignment is to ensure that personnel at all levels feel comfortable voicing concerns about shuttle safety. In a recent quote to the Associated Press, Parsons claimed, "None of this is too touchy-feely for me". He even went so far as to hire a colleague whose assignment, among others, is to critique his interactions with subordinates to ensure he is not intimidating. Even so, NASA veterans are reluctant to adopt a more humanistic style of management. Will changes at the top by Parsons be enough to change NASA's culture?
Information Technology
In this case, what is missing is quite informative. Information technology can be distributed or centralized, accessible, and user friendly, and it can track anomalies, extending human ability to see trends that might otherwise be missed. Failing to use a tool that can help you is akin to burying one's head in the sand. Yet we see it every day when users press the NEXT or OK button without reading the alert message on their personal computers. Designers have set up clever ways for technology to help us, but it seems to be human nature to ignore that help. Ironically, many military systems are designed specifically with a "man in the loop" to avoid catastrophic errors. After the Challenger disaster, the structure of the shuttle program was changed so that astronauts were managers and present at decision-making meetings. A proper safety system that did not allow the people in the loop to bypass anomalies would have served Columbia better.
PowerPoint is part of business culture. At NASA, as in many organizations, PowerPoint presentations are interwoven with the structure and communication processes that support decision making. It is often said that "the devil is in the details". PowerPoint is not a tool made to support a great deal of detail: detailed information on a slide becomes an eye chart, impossible to read. Lengthy, detailed reports can also unintentionally hide information, by failing to display it in a way that catches the reader's attention and by overloading the reader with massive amounts of detail. Is reliance on PowerPoint to communicate ideas making NASA incapable of distinguishing the key factors for decision making?
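To make the earlier point about clicking past alerts concrete, the hypothetical fragment below sketches one common design response: a safety-critical alert that cannot be dismissed with a bare OK, but instead requires the person in the loop to state a decision and record a justification. It is illustrative only; the function name, prompts, and log format are assumptions, not part of any NASA system.

    import datetime

    def acknowledge_alert(message: str, audit_log: list[dict]) -> bool:
        """Present a safety alert that cannot simply be clicked past.

        The operator must type an explicit decision and a justification;
        both are written to an audit trail so the rationale outlives the meeting.
        """
        print(f"SAFETY ALERT: {message}")
        decision = input("Type ACCEPT-RISK or HOLD: ").strip().upper()
        if decision not in {"ACCEPT-RISK", "HOLD"}:
            print("Unrecognized response; treating it as HOLD.")
            decision = "HOLD"
        justification = input("Justification (required): ").strip()
        while not justification:
            justification = input("A justification is required: ").strip()
        audit_log.append({
            "time": datetime.datetime.now().isoformat(),
            "alert": message,
            "decision": decision,
            "justification": justification,
        })
        return decision == "ACCEPT-RISK"

    # Example use
    log: list[dict] = []
    proceed = acknowledge_alert("Debris strike outside the tested impact database", log)

The design choice here mirrors the organizational recommendation: the rationale for accepting a risk is captured at the moment of decision, where it can later be reviewed, rather than surviving only in a briefing slide.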
NASA’s Changing Goals
In January 2004, U.S. President George Bush addressed the public, outlining the new plan for space exploration. In his speech, he gave three goals:

1. Complete the International Space Station. This will be the primary mission of the Space Shuttle, which will be retired in 2010.
2. Develop and test a new spacecraft by 2008. This replacement for the shuttle will transfer astronauts to and from the space station and will also carry astronauts beyond earth orbit to "other worlds".
3. Return to the moon by 2020, with robotic exploration of the moon no later than 2008.
President Bush also spoke about NASA's current $86 billion budget, vowing to reallocate $11 billion within that budget and to increase NASA's budget by $1 billion over the next five years. Bush also expressed his support for, and confidence in, Sean O'Keefe to usher NASA into a new age of space exploration. In light of the small increases in funding, one NASA watcher, Douglas Osheroff, a CAIB member and Stanford University physics professor, was skeptical in an interview with The Seattle Times about Bush's support for the future of the space program:

If you give them a goal and you don't give them resources, I think the situation will get worse.

The question remains: How will O'Keefe find the resources to meet Bush's goals and assure that NASA is an organization with a culture of safety?
Mars Rovers
In early 2004, NASA’s Jet Propulsion Lab (JPL) in Pasadena, California, successfully landed two ruggedized rovers on Mars. The rovers, Spirit and Opportunity, are sending remarkable images of the Martian surface to earth. A poignant memorial, a plaque with the names of the seven, lost Columbia astronauts has been placed at Spirit’s landing site. The site has been named the Columbia Memorial Station. Opportunity’s landing site will be named for Challenger’s final crew.
REFERENCES
Associated Press. (2003). Columbia investigator wants more changes. Retrieved January 29, 2004, from http://www.foxnews.com/story/0,2933,95892,00.html
Associated Press. (2003, February 1). Remains thought to be from Columbia crew, NASA vows to find cause of shuttle disaster. Retrieved January 29, 2004, from http://edition.cnn.com/2003/TECH/space/02/01/shuttle.columbia/
Associated Press. (2003, October 12). Touchy feely NASA-effort. Retrieved January 29, 2004, from http://www.wired.com/news/culture/0,1284,60798,00.html
Augustine Committee. (1990, December). Report of the advisory committee on the future of the US Space Program. Retrieved January 29, 2004, from http://www.hq.nasa.gov/office/pao/History/augustine/racfup1.htm
Borenstein, S. (2004, January 31). A year after Columbia, NASA's 'culture' reassessed. Retrieved January 29, 2004, from http://seattletimes.nwsource.com/html/nationworld/2001847903_nasa31.html
CNN.com milestones in space shuttle history. (n.d.). Retrieved January 29, 2004, from http://edition.cnn.com/interactive/space/0010/timeline.pop.up/frameset.exclude.html
Columbia Accident Investigation Board. (2003, August 26). Report of Columbia Accident Investigation Board. Retrieved January 29, 2004, from http://www.nasa.gov/columbia/home/CAIB_Vol1.html
Deal, D. (2003, October). Supplement to the Report of the CAIB. Retrieved January 29, 2004, from http://www.caib.us/news/report/pdf/vol2/part00a.pdf
Harwood, W. (2003, May 9). Parsons named Space Shuttle Program Manager. Retrieved January 29, 2004, from http://spaceflightnow.com/shuttle/sts107/030509parsons/
Harwood, W. (2003, July 13). Shuttle safety team was hamstrung. Retrieved January 29, 2004, from http://www.cbsnews.com/stories/2003/07/22/tech/printable564420.shtml
McCurdy, H. E. (1993). Inside NASA: High technology and organizational change in the US Space Program. Baltimore, MD: Johns Hopkins University Press.
Mission Control Transcript of Columbia's final minutes. (n.d.). Retrieved January 29, 2004, from http://datamanos2.com/columbia/transcript.html
NASA Budget Reports. (1998-2003). Summary of the president's FY budget request for NASA. Retrieved January 29, 2004, from http://www.nasa.gov/audience/formedia/features/MP_Budget_Previous.html
NASA official Columbia page. (n.d.). Retrieved January 29, 2004, from http://www.nasa.gov/columbia/home/index.html
Rogers Commission. (1986, February 3). Report of the Presidential Commission on the Space Shuttle Challenger accident. Retrieved January 29, 2004, from http://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/table-of-contents.html
Schwartz, J. (2003, September). The level of discourse continues to slide. The New York Times.
Tufte, E. R. (1997). Visual explanations: Images and quantities, evidence and narrative. Graphics Press, 38-53.
Tufte, E. R. (2003). The cognitive style of PowerPoint. Graphics Press, 7-11.
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture and deviance at NASA. Chicago: University of Chicago Press.
Wilson, J. Q. (1989). Bureaucracy: What government agencies do and why they do it. New York: Basic Books.
Appendix A-1. Shuttle Program: Milestones

Date        Description                                                        Shuttle      Mission No.
4/12/81     Columbia is launched, marking the first mission into space of a    Columbia     STS-1
            reusable launch vehicle.
11/12/81    First Scientific Payload                                           Columbia     STS-2
11/11/82    Deployment of Commercial Communication Satellites                  Columbia     STS-5
6/18/83     First American Woman in Space, Sally Ride                          Challenger   STS-7
8/30/83     First African American in Space                                    Challenger   STS-8
2/3/84      First Free Space Walk                                              Challenger   41-B
4/6/84      First Satellite Repair                                             Challenger   41-C
1/28/86     Challenger Explodes                                                Challenger   51-L
4/24/90     Hubble Space Telescope Launched                                    Discovery    STS-31
12/2/93     Hubble Repaired                                                    Endeavour    STS-61
6/27/95     Space Shuttle Docks with Mir                                       Atlantis     STS-71
10/29/98    John Glenn Flies on Shuttle                                        Discovery    STS-95
10/10/00    100th Shuttle Mission                                              Discovery    STS-92
2/1/03      Columbia's Last Mission                                            Columbia     STS-107

(Adapted from Milestones in Space Shuttle History, available at http://edition.cnn.com/interactive/space/0010/timeline.pop.up/frameset.exclude.html)
Appendix A-2. Software Development Life Cycle from DOD-STD-2167A

(The original figure, "Example of System Development Reviews and Audits from DOD-STD-2167A," traces the life cycle from System Requirements Analysis/Design (SRR, SDR) through parallel hardware configuration item (HWCI) and software configuration item (CSCI) development: Software Requirements Analysis (SSR), Preliminary Design (PDR), Detailed Design (CDR), Coding and CSU Testing, CSC Integration and Testing, and CSCI/HWCI Testing (TRR, FCA), followed by System Integration and Testing (FCA) and Production and Deployment (PCA, FQR). The Functional, Allocated, Development Configuration, and Product baselines, along with the associated reviews and testing and evaluation activities, are marked along the way.)

SRR - System Requirements Review; SDR - System Design Review; SSR - Software Specification Review; PDR - Preliminary Design Review; CDR - Critical Design Review; TRR - Test Readiness Review; FCA - Functional Configuration Audit; PCA - Physical Configuration Audit; FQR - Formal Qualification Review
Appendix A-3. Columbia Mission STS-107 Crew Profiles from the NASA Columbia Web Site

Rick D. Husband, Commander
William C. McCool, Pilot
Michael P. Anderson, Payload Commander
David M. Brown, Mission Specialist 1
Kalpana Chawla, Mission Specialist 2
Laurel Blair Salton Clark, Mission Specialist 4
Ilan Ramon, Payload Specialist 1

Rick Husband, 45, a colonel in the U.S. Air Force, was a test pilot and veteran of one spaceflight. Selected by NASA in December 1994, Husband logged more than 235 hours in space.
William C. McCool, 41, a commander in the U.S. Navy, was a former test pilot. Selected by NASA in April 1996, McCool was making his first spaceflight.
Michael P. Anderson, 43, a lieutenant colonel in the U.S. Air Force, was a former instructor pilot and tactical officer. Anderson logged over 211 hours in space.
David M. Brown, 46, a captain in the U.S. Navy, was a naval aviator and flight surgeon. Selected by NASA in April 1996, Brown was making his first spaceflight.
Kalpana Chawla, 41, was an aerospace engineer and an FAA Certified Flight Instructor. Selected by NASA in December 1994, Chawla logged more than 376 hours in space.
Laurel Clark, 41, was a commander (captain-select) in the U.S. Navy and a naval flight surgeon. Selected by NASA in April 1996, Clark was making her first spaceflight.
Ilan Ramon, 48, a colonel in the Israeli Air Force, was a fighter pilot and the only payload specialist on STS-107. Approved by NASA in 1998, he was making his first spaceflight.
(http://www.nasa.gov/columbia/crew/index.html)
Appendix A-4. STS-107 Mission Overview from Columbia

STS-107 Mission Summary
Flight: January 16 - February 1, 2003
Crew: Commander Rick D. Husband (second flight), Pilot William C. McCool (first flight), Payload Specialist Michael P. Anderson (second flight), Mission Specialist Kalpana Chawla (second flight), Mission Specialist David M. Brown (first flight), Mission Specialist Laurel B. Clark (first flight), Payload Specialist Ilan Ramon, Israel (first flight)
Payload: First flight of SPACEHAB Research Double Module; Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR); first Extended Duration Orbiter (EDO) mission since STS-90.
This 16-day mission was dedicated to research in physical, life, and space sciences, conducted in approximately 80 separate experiments comprising hundreds of samples and test points. The seven astronauts worked 24 hours a day, in two alternating shifts.

First flight of Columbia: April 12-14, 1981 (crew: John W. Young and Robert Crippen); 28 flights, 1981-2003.
Most recent flight: STS-109, March 1-12, 2002, Hubble Space Telescope servicing mission.
Other notable missions: STS-1 through STS-5, 1981-1982; first flight of the European Space Agency-built Spacelab; STS-50, June 25 - July 9, 1992, first extended-duration Space Shuttle mission; STS-93, July 1999, placement in orbit of the Chandra X-Ray Observatory.
Past mission anomaly: STS-83, April 4-8, 1997. The mission was cut short by shuttle managers due to a problem with fuel cell No. 2, which displayed evidence of internal voltage degradation after the launch.
(http://www.nasa.gov/columbia/mission/index.html)
Appendix A-5. Columbia Accident Investigation Board Members from the CAIB Web Site

Chairman of the Board
Admiral Hal Gehman, USN

Board Members
Rear Admiral Stephen Turcotte, Commander, Naval Safety Center
Maj. General John Barry, Director, Plans and Programs, Headquarters Air Force Materiel Command
Maj. General Kenneth W. Hess, Commander, Air Force Safety Center
Dr. James N. Hallock, Chief, Aviation Safety Division, Department of Transportation, Volpe Center
Mr. Steven B. Wallace, Director of Accident Investigation, Federal Aviation Administration
Brig. General Duane Deal, Commander, 21st Space Wing, USAF
Mr. Scott Hubbard, Director, NASA Ames Research Center
Mr. Roger E. Tetrault, Retired Chairman, McDermott International, Inc.
Dr. Sheila Widnall, Professor of Aeronautics and Astronautics and Engineering Systems, MIT
Dr. Douglas D. Osheroff, Professor of Physics and Applied Physics, Stanford University
Dr. Sally Ride, Professor of Space Science, University of California at San Diego
Dr. John Logsdon, Director of the Space Policy Institute, George Washington University

Board Support
Standing support personnel reporting to the Board
Ex-Officio Member: Lt. Col. Michael J. Bloomfield, NASA Chief Astronaut Instructor
Executive Secretary: Mr. Theron Bradley, Jr., NASA Chief Engineer
(http://www.caib.us/board_members/default.html)
Notes: (1) Sally Ride also served on the Rogers Commission. (2) The review board's makeup was changed twice to ensure proper representation from different constituencies.
Appendix A-6. Space Shuttle Program Organizational Structure, from the CAIB Final Report

(The original chart shows the NASA Administrator at the top, over the Associate Administrator for Human Exploration and Development of Space and the Deputy Associate Administrator for the International Space Station and Space Shuttle Programs. Below them, the Space Shuttle Program Office is headed by the Manager, Space Shuttle Program (SSP), supported by the Manager, Launch Integration (KSC); Manager, Program Integration; Manager, SSP Safety and Mission Assurance; Manager, SSP Development; and Manager, SSP Logistics (KSC). Reporting offices include the Space Shuttle S&MA Office; Administrative Office; Management Integration Office; Business Office (SFOC COTR); KSC Integration Office; Space Shuttle Processing (KSC); Systems Integration Office; Customer and Flight Integration Office; Projects Office (MSFC); Vehicle Engineering Office; Mission Operations Directorate; Flight Crew Operations Directorate; Extravehicular Activity; Solid Rocket Booster (SRB) Office; Reusable Solid Rocket Motor (RSRM) Office; Space Shuttle Main Engine (SSME) Office; and External Tank (ET) Office.)
Ruth Guthrie is a professor of computer information systems at California Polytechnic University, Pomona. She has experience in systems engineering, software test and program management of space-based IR sensor programs. She has a PhD from Claremont Graduate University, an MS in statistics from the University of Southern California, and a BA in mathematics from Claremont McKenna College. Her research interests are user interface design and computer ethics. She has authored several papers in a variety of areas including two books on Web development. Currently, she is associate director for AACSB for the College of Business at Cal Poly and is involved in several Web development efforts using video embedded flash. Conrad Shayo is a professor of information science at California State University, San Bernardino. Over the last 23 years he has worked in various capacities as a university professor, consultant, and manager. He holds a doctor of philosophy and a master’s of science in information science from the Claremont Graduate University, formerly Claremont Graduate School. He also holds an MBA in management science from the University of Nairobi, Kenya, and a bachelor of commerce in finance from the University of Dar-Es-Salaam, Tanzania. His research interests are in the areas of IT assimilation, performance measurement, distributed learning, end-user computing, organizational memory, instructional design, organizational learning assessment, reusable learning objects, IT strategy and “virtual societies”.
This case was previously published in the Journal of Cases on Information Technology, 7(3), pp. 57-76, © 2005.
Chapter VI
AMERIREAL Corporation:
Information Technology and Organizational Performance
Mo Adam Mahmood, University of Texas at El Paso, USA
Gary J. Mann, University of Texas at El Paso, USA
Mark Dubrow, University of Texas at El Paso, USA
EXECUTIVE SUMMARY
This instructional case, based on an actual firm’s experience (name changed), is intended to challenge student thinking with regard to the extent to which information technology (IT) can demonstrably contribute to organizational performance and productivity and to which users of IT can relate their investment decisions to measurable outcomes. Relationships between an organization’s investment in IT and the effect of such investments on the organization’s performance and productivity have long been the subject of discussion and research. Managers, interested in knowing the “payoff” of such investments, are continually seeking answers to this question. A failure to understand the benefits of IT investment, or an over- or under-estimation of the benefits of a planned investment in IT relative to the costs, will likely result in less than optimal investment decisions.
BACKGROUND
Real estate is a natural and therefore limited resource. The total value of U.S. real estate has grown from the $24 paid for the island of Manhattan to roughly $3 trillion today. Whether used for commercial, residential, or federally protected purposes, land is a commodity that has experienced phenomenal growth over the past three decades, albeit with temporary setbacks. The majority of the world's land remains free of human habitation; it lies in the same natural state as it has since the creation of the earth. However, following a decline in the 1980s, the portions that have been developed for human habitation are experiencing a renaissance in value and, for property owners, increased earnings.
As an industry, real estate has been viewed as a fragmented sector of commerce. Ownership was, and in many ways still is, an unsecured risk. As stated in Forbes magazine (December 29, 1997), "most of the commercial real estate in the U.S. is owned by private groups or individuals ... Somewhere between $2 trillion and $3 trillion worth, and very little of it publicly owned".
The real estate bandwagon has not always been so robust. During the early 1980s realty magnates consumed everything they could get their hands on. The cost of capital was low, and banks and financial lenders were more than willing to loan cash or provide credit for such investments. Savings and loan institutions were multiplying and growing as fast as deals were brought to their loan officers. The U.S. economy was on a roll and real estate lending was unstoppable. However, all that glitters is not gold. By the mid- to late 1980s the economy had begun to deteriorate. Over-leveraged financial institutions were faced with declining profits. The once high-flying financial markets were showing signs of correction. Savings and loans were filing for bankruptcy, and the federal government was called upon to bail out the millions of dollars in worthless bonds that had flooded the American economy a few years earlier.
Perhaps no other industry better characterizes the decline of the U.S. economy during the 1980s than real estate. Potential investors in real estate were wary of the irresponsibility of banks in lending capital to shallow development deals and transparent structures. Real estate, ranging from land development to shopping centers, hotels, and apartment complexes, collapsed under the weight of its own rapid growth and oversaturation. No one was interested in backing new deals, and it would take years of increased demand before the real estate market would begin to recover.
Real Estate Investment Trusts
President Dwight D. Eisenhower signed the Real Estate Investment Trust Tax Provision into law in 1960. The purpose of the provision was to motivate investment in the U.S. real estate market by granting preferential tax treatment to Real Estate Investment Trusts (REITs), thus allowing greater returns on investments. In order to qualify as a REIT, a firm was required to meet stringent taxable income rules. The optimal benefit of the REIT was that no portion of an organization's net income was taxable, either at the local or federal level. However, to maintain REIT status, companies were required to distribute all earnings as dividends. Since REITs were not subject to a corporate tax, the shareholders' distributions were larger than those of a standard publicly held corporation.
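To see why this pass-through treatment matters, a minimal sketch comparing the cash available for distribution with and without an entity-level tax is shown below; the 35% corporate rate and the $100 million of net income are illustrative assumptions, not figures from the case.

```python
def distributable_income(net_income, corporate_tax_rate, is_reit):
    """Cash available to distribute to shareholders.

    A qualifying REIT pays out its earnings as dividends and pays no
    entity-level tax, so the full net income flows through; a standard
    corporation can distribute only what is left after corporate tax.
    """
    if is_reit:
        return net_income
    return net_income * (1 - corporate_tax_rate)

net_income = 100_000_000      # illustrative annual net income (assumption)
corporate_rate = 0.35         # illustrative corporate tax rate (assumption)

as_reit = distributable_income(net_income, corporate_rate, is_reit=True)
as_corp = distributable_income(net_income, corporate_rate, is_reit=False)
print(f"REIT distribution:        ${as_reit:,.0f}")
print(f"Corporation distribution: ${as_corp:,.0f}")
```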
Creation of the AMERIREAL Corporation
The formation of AMERIREAL Corporation launched an unprecedented approach to corporate real estate ownership. Prior to that time, the majority of real estate companies and REIT’s were privately held and lacked long-term vision. Owners were in the real estate market for a quick return and consequently paid little or no attention to long-term stability and shareholder value. AMERIREAL’s founder, Bob Stillman, believed in the value and discipline of securitizing the real estate enterprise. His approach was to establish publicly held companies that were accountable for their actions and guided by strong management and impartial boards of directors. AMERIREAL Corporation began operations in 1988. Its mission was to become the preeminent provider of real estate research, investment, and management of operating companies. By the end of 1988, AMERIREAL owned over two million shares of True Value Trust (TVT). TVT was a REIT dedicated to luxury residential housing throughout the Midwestern U.S. AMERIREAL had purchased the shares of TVT at a cost of $7.79 per share. According to AMERIREAL’s 1988 Annual Report, on December 31, 1988, the closing price on the New York Stock Exchange for TVT was $10.50 per share, resulting in an increase in AMERIREAL’s net worth of $24 million. Its strategy had begun to take shape and was further supported by another development: increased equity ownership in real estate. Equity ownership in real estate companies had begun to rise in 1990. Given this rise, AMERIREAL continued to venture into long-term affiliations with a variety of firms. The most noteworthy of the affiliates were True Value Trust, Standard Commercial, and Windsor, Inc. All of the affiliated companies were publicly traded organizations, with AMERIREAL a significant shareholder in each. The holdings of AMERIREAL and its affiliates included various types of properties, such as apartments, assisted living facilities, extended stay lodgings, office and retail properties, and others. As part of the overall group structure, AMERIREAL contributed resources and administrative support to help grow the companies. By 1996 AMERIREAL had amassed a combined equity market capitalization in excess of $11.14 billion. Management had assembled a superior team of operating and investment professionals to carry out the firm’s strategy.
SETTING THE STAGE
AMERIREAL's Information Technology Posture
The real estate sector was historically an industry that had not generally utilized technical equipment or systems. Further, the industry had been slow to invest in technology in large part because of the passive nature of revenue growth after the crash of the 1980s. Thus, through 1996, AMERIREAL had not viewed information technology as an important aspect of its operating performance. However, the company began to recognize that it could distinguish itself from other firms in the industry by addressing its client needs through investment in high-speed telecommunications, Internet home pages, and e-applications for improved business processing. Thus, beginning in 1997, AMERIREAL began to substantially increase its information technology (IT) capability,
hiring a large number of MIS professionals and engaging systems consultants. The MIS group began to form a long-term vision that would help launch AMERIREAL into the next millennium. AMERIREAL, seeing itself buried in paper-based systems for the various accounting and administrative functions, began to install proprietary technical systems, such as accounts payable and timekeeping systems. These gave the firm the capability to access, process, and communicate accounts payable and payroll information to and from any company location in the country. In another adoption of technology, the True Value Trust affiliate decided to commit over $2 million to upgrade its telephone lines to cable lines to satisfy the demand for Internet access. This allowed TVT to increase client rental fees and revenues by a substantial amount. The company also began to consider the feasibility of combining some administrative functions to take advantage of economies of scale. Up to this time, each of the affiliates had operated on an independent basis. That is, each provided its own services such as Human Resources, Accounts Payable, Tax Department, and MIS on the premise that the needs of each would be best served by decentralized control of these resources. AMERIREAL in time came to conclude that this was not necessarily an efficient approach, and with approval from each company’s board of directors, a Shared Service Center (SSC) was introduced. The purpose within the MIS group was to leverage the knowledge of all the groups into a singularly focused organization. The effect was to gain knowledge from each company and to deploy resources that would be mutually beneficial to all companies. In order for each company to have an equal voice in the deployment of IT resources, the affiliates each created an Information Sharing Council (ISC). The ISCs, comprised of various department heads, were responsible for prioritizing the business systems requirements, performing a cost/benefit analysis for each potential project, and submitting the requirements to the centralized IT Department. Each organization’s ISC team was comprised of representatives from Executive Management, Finance/Accounting, Operations, Development, and Sales/Marketing. Each functional area of the company brought unique needs to the group. Systems projects varied from transactional process enhancements to strategic competitive improvement projects.
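The case does not describe exactly how the ISCs weighed competing requests, but a minimal sketch of the kind of cost/benefit ranking described above might look as follows; the project names and figures are invented purely for illustration.

```python
# Hypothetical ISC project requests: (name, estimated annual benefit, one-off cost)
requests = [
    ("Accounts payable automation", 350_000, 200_000),
    ("Property Web site",            90_000, 120_000),
    ("Timekeeping rollout",         180_000,  60_000),
]

def ranked_by_benefit_cost(projects):
    """Rank candidate projects by benefit/cost ratio, highest first,
    before the requirements are submitted to the centralized IT Department."""
    return sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

for name, benefit, cost in ranked_by_benefit_cost(requests):
    print(f"{name}: benefit/cost ratio {benefit / cost:.2f}")
```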
Information Technology Initiatives and Company Expectations
As the various IT initiatives emerged from the ISC teams, the prevailing requirement was to streamline mundane and routine processes such as point-of-sale data entry into the company ledgers and operational and development information for financial statement preparation. These changes were designed to reduce or eliminate manual processing and decrease the cycle time for reporting end-user performance results. Each affiliate gauged the success of these changes by the deliverable timelines and whether the IT Department could deliver on time and on budget. These two elements had historically been lacking in IT project management. Expectations for these various IT projects ranged from increased net profit to sustaining current growth trends. As noted by the CEO of Windsor, Inc., "the efficiencies gained by implementing a new Property Management System at each hotel is expected to result in a reduction of administrative costs. The effect of this reduction should flow directly to the bottom-line".
Determining IT Deployment
One of the most difficult tasks assigned to the chief information officer (CIO) was the deployment of IT resources to various projects. The CIO and his team developed a Critical Path Priority Ranking System to help deliver the necessary resources at the appropriate time for each project. In addition to utilizing internal resources, the IT Department used either external consultants to supplement its efforts, or completely outsourced portions of individual projects. The final resource solution was to segregate the IT Department into Customer Service groupings. As part of the Shared Service Center vision, each company effectively purchased the IT services. In order to assure an arm's-length deal for this purchase, the IT Department had to determine an appropriate pricing structure to administer its services. This was a critical element since each of the AMERIREAL companies was publicly traded and shareholders' interests could not be compromised. With the help of a business partner, Real Time-Real Costs, the IT Department decided to use actual costs as captured by TimeSys to adequately price the time and material used in supporting each affiliate. TimeSys is an automated time clock system that tracks the actual time, travel, and material costs used by each IT professional assigned to a specific project. With all the elements in place, AMERIREAL was poised to enter the future fully embracing IT. The company, however, was also very interested in determining, if possible, how IT investment was contributing to company performance and productivity. To investigate the possibility of a relationship between IT investment and company performance and productivity, data was gathered for the years 1996, 1997, and 1998.
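The case does not spell out the pricing mechanics, but a minimal sketch of the kind of actual-cost chargeback that TimeSys-style records could feed might look as follows; the rates, hours, and project records are invented for illustration.

```python
from collections import defaultdict

# Hypothetical TimeSys-style records: (affiliate, hours, hourly_rate, travel, materials)
time_records = [
    ("True Value Trust",    120.0, 85.0, 1_500.0, 400.0),
    ("Windsor, Inc.",        80.0, 85.0,   900.0, 250.0),
    ("Standard Commercial",  45.5, 95.0,     0.0, 120.0),
]

def chargeback_by_affiliate(records):
    """Aggregate actual time, travel, and material costs per affiliate,
    so each publicly traded company is billed at cost (arm's length)."""
    totals = defaultdict(float)
    for affiliate, hours, rate, travel, materials in records:
        totals[affiliate] += hours * rate + travel + materials
    return dict(totals)

for affiliate, amount in chargeback_by_affiliate(time_records).items():
    print(f"{affiliate}: ${amount:,.2f}")
```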
CASE DESCRIPTION
Evaluation of AMERIREAL's IT Investment
AMERIREAL's CIO Larry Price had read existing literature on research into the relationships between investment in IT and its effect on organizational performance and productivity (e.g., Loveman, 1994; Mahmood & Mann, 1993). Based on these readings, he decided to classify company data into two groups: IT investment data and company performance and productivity data. IT investment was represented by four variables: IT budget as a percentage of total revenue, percentage of the IT budget allocated for IT staff, percentage of the IT budget dedicated to training, and market value of the company's IT as a percentage of annual revenue. Performance was represented by two variables, growth in revenue and return on investment. Finally, productivity was measured as sales per employee and sales by total assets. Figure 1 presents the results of the analysis of relationships between IT investment in a given year and performance and productivity that same year, as well as changes between years. IT investment data for 1996 was indicative of the fact that AMERIREAL had not yet focused on the use of IT as an integral aspect of management. The IT budget as a percentage of revenues was only 0.5%. The percentage of this rather limited IT budget spent on IT staff was 44%, and the percent of IT budget spent on training was 2.8%. The market value of IT as a percentage of revenue was 1%.
Figure 1. IT investment and performance and productivity data

Select investment variables (1996 / 1997 / 1998):
IT Department budget: $1,200,000 / $3,000,000 / $6,800,000
Percent of IT budget for staff: 44% / 63% / 66%
Number of total employees: 1,555 / 2,099 / 2,519
IT budget to revenue: 0.5% / 0.9% / 1.3%
Percent of IT budget for training: 2.8% / 14.1% / 14.8%
Market value of IT to revenue: 1.0% / 1.3% / 1.0%

Select performance variables (1996 / 1997 / 1998):
Annual revenue: $240,500,000 / $350,000,000 / $505,000,000
Assets: $2,900,000,000 / $4,000,000,000 / $5,350,000,000
Market value: $344,022,000 / $788,420,000 / $1,456,451,000
Return on investment: 33.5% / 30.2% / 34.7%

Select productivity variables (1996 / 1997 / 1998):
Sales per FTE: $154,662 / $166,746 / $200,476
Sales by total assets: 8.3% / 8.8% / 9.4%

Growth in IT budget and revenue:
Growth year over year in IT budget: $1,800,000 (1996 to 1997) / $3,800,000 (1997 to 1998)
Growth year over year in revenue: $109,500,000 (1996 to 1997) / $155,000,000 (1997 to 1998) / $264,500,000 (1996 to 1998)
Revenue growth per dollar spent on IT: $91.25 (1996 vs. 1997) / $51.67 (1997 vs. 1998) / $220.42 (1996 vs. 1998)
Performance and productivity data for 1996 revealed a return on investment of over 33%. The 8.3% sales by total assets suggested the firm utilized its assets well in producing income. This would be consistent with a more or less typical real estate organization because the revenue-generating assets were multifamily apartments, industrial warehouses, hotels, and other facilities that raise funds through operations. The $154,000 in sales per employee was comparable to firms that manage revenue-generating facilities with limited staff and a proportionately sized corporate overhead team. By 1997 AMERIREAL had begun to strengthen its IT capability, as evidenced by an increase in IT budget to $3 million. This enabled an increase in the IT staff budget of 19 percentage points, to a total of 63%, and an 11-percentage-point increase, to 14.1%, in expenditures for training of IT Department staff. Performance and productivity measures reflected a growth in revenue of $109.5 million. In addition, sales per employee and sales by total assets each increased. AMERIREAL continued to grow at a rapid pace in 1998. The IT investment data demonstrated that the company was continuing its commitment to technology as a means of remaining competitive. The IT budget as a percentage of revenue increased to 1.3%, representing a doubling of investment from $3 million to over $6 million, and included such MIS projects as Property Management Systems, accounts payable automation, and a new company-wide core financial system. The percentage of IT budget spent on staff
and the percentage of IT budget spent for training increased slightly, while of course increasing even more in absolute amount as components of the larger total IT budget. There was also an important relationship between IT investment and growth in revenue. For each dollar spent on IT in 1996, revenue increased $91 in 1997; for each dollar spent on IT in 1997, revenue increased about $52 in 1998. Except for return on investment, all performance and productivity variables indicated improvements across the several years involved. Return on investment dropped between 1996 and 1997, but increased again in 1998. The CIO felt that these results strongly suggested the probability of a relationship between increased outlays for IT in one year and improved performance and productivity the following year(s).
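A minimal sketch of the arithmetic behind these ratios, using the IT budget and revenue figures reported in Figure 1 (the mapping of revenue growth in one year to the prior year's IT budget follows the case's own comparison):

```python
# IT department budget and annual revenue from Figure 1 (US$)
it_budget = {1996: 1_200_000, 1997: 3_000_000, 1998: 6_800_000}
revenue   = {1996: 240_500_000, 1997: 350_000_000, 1998: 505_000_000}

def revenue_growth_per_it_dollar(spend_year, growth_year):
    """Revenue growth through growth_year per dollar of IT spend in spend_year."""
    growth = revenue[growth_year] - revenue[spend_year]
    return growth / it_budget[spend_year]

print(round(revenue_growth_per_it_dollar(1996, 1997), 2))  # 91.25
print(round(revenue_growth_per_it_dollar(1997, 1998), 2))  # 51.67
print(round(revenue_growth_per_it_dollar(1996, 1998), 2))  # 220.42
```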
Was IT Investment a Factor in AMERIREAL’S Performance and Productivity?
James Fulton, AMERIREAL Chief Operating Officer, and Larry Price, CIO, were discussing the analysis that Price had prepared. Price was taking the position that the analysis did indeed suggest a strong relationship between IT investment and company performance and productivity. While admitting that the numbers did seem to relate, Fulton had reservations as to cause and effect. He suggested that other factors be considered before coming to any definite conclusions. One specific item he had in mind was the fact that between 1996 and 1998 AMERIREAL's assets increased by $2.45 billion. Perhaps this, rather than IT improvements, was the actual cause of the increased performance and productivity. Since real estate was AMERIREAL's revenue generator, increases in real estate assets should yield increases in revenues: the more apartments, hotels, and industrial centers the company owns, the more rent that should be collected from tenants. In reviewing the high return on investment in 1998, Fulton recalled that the industry average annual return for REIT stocks was over 24%. Considering that AMERIREAL's return on investment was 34%, he couldn't entirely rule out the possibility that IT might have been a contributor. However, he did tend to feel that the difference between the national average and AMERIREAL's return on investment might be due more to improved operations of the facilities. By improving the apartments, hotels, and industrial centers, AMERIREAL was able to increase rates and maintain profit margins. As an industry leader, AMERIREAL's brand recognition also allowed it to charge a premium for its facilities. While he understood Fulton's arguments, Price still believed strongly in the relationship between IT investment and company performance and productivity, and decided to approach the discussion from another viewpoint. He believed that the primary technology systems used in a highly distributed real estate company, such as AMERIREAL, were threefold. First, the operating companies relied on individual property management systems to collect, record, analyze, and distribute data on each of their tenants and guests. Second, data transmission was used to transmit the property management system information to a centralized database. AMERIREAL companies relied on 56k modem lines to transfer data from over 500 property locations into their financial center. The final technology system used by each operating company was the core financial system. In the case of AMERIREAL, each operating company used a separate platform that contained a general ledger, accounts payable, accounts receivable, billing, and job
costing database. A manual process was used to consolidate each company's results into a final financial statement. Although the three primary technology systems were used to support operations and distribute data, Price believed that management's usage of the data for decision making was contributing to improved operational performance. For example, one of AMERIREAL'S companies, Windsor Inc., an extended-stay hotel company, utilized its property management system to control room inventory and adjust room rates during historically slow periods. This process was most noticeable during the month of December. By utilizing data stored in the property management system, Windsor was able to manage its room inventory and increase revenue per available room by 11% from 1997 to 1998. Additionally, Windsor's financial management team utilized the data derived from the general ledger system to determine whether commissions paid for credit card processing exceeded the company forecast. By extracting information from the property management systems it was determined that the percentage of guests paying by credit card increased by nearly 20% from 1996 to 1998. Based on this information, Windsor was able to renegotiate the credit card processing fees and reduce annual operations costs by half a million dollars. Price also believed that technology could play a critical role in generating future revenue by linking travel agents to Windsor's reservation system. Through 1997, Windsor had relied exclusively on local direct sales and marketing efforts by the property managers. Although Windsor's average occupancy was high at 74.7% for 1997, Price had lobbied for direct access for travel agents to make reservations and help subsidize the local efforts. This global approach would increase revenue and improve operating efficiencies. The front-end costs associated with linking the travel agents to Windsor's reservation system were in excess of $200,000. Additionally, Windsor would pay 10% of the first seven nights' room revenue to the travel agencies as commissions. Fulton, however, was reluctant to embark on the open travel agency system, known as NetRez. He believed that the local sales efforts were the most appropriate for the company and did not want to tamper with a successful approach. However, after numerous reviews and evaluations, he did agree to implement the NetRez project. At the end of 1998 Fulton asked Price to perform a financial evaluation of the success of NetRez. By analyzing Windsor's annual operating costs, Fulton concluded that these increased by $1.7 million as a result of commissions paid to travel agencies. He was unsure of the return on the investment in NetRez and requested that Price provide the data to support what value the system brought to Windsor. Price analyzed the data contained within the property management systems and determined that $11,000,000 of incremental revenue was generated by NetRez. Although the operating income did indeed increase in excess of $80,000,000 from 1997 to 1998, Fulton believed that the increase was a result of additional operating properties, and not a result of NetRez. Conversely, Price argued that comparing the total amount of revenue year over year was too limiting and believed that the most appropriate financial indicator was revenue per available room. Statistically, Windsor's revenue per available room had grown by 12% from 1997 to 1998. Fulton agreed that this increase did occur but was not convinced that it was a result of NetRez.
He believed the increase was attributable to improved market share and increased room rates.
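The dispute turns on fairly simple arithmetic. A minimal sketch of the two views follows; the room counts and the 1997 baseline revenue are invented purely for illustration, and only the 12% growth in revenue per available room, the $11 million of NetRez-attributed revenue, and the $1.7 million of commissions come from the case.

```python
def revpar(room_revenue, rooms_available, nights):
    """Revenue per available room over a period."""
    return room_revenue / (rooms_available * nights)

# Price's view: NetRez revenue net of the commissions it triggered.
netrez_revenue = 11_000_000      # incremental revenue attributed to NetRez (from the case)
netrez_commissions = 1_700_000   # travel-agency commissions (from the case)
print(f"NetRez net contribution: ${netrez_revenue - netrez_commissions:,.0f}")

# Fulton's view: growth could simply reflect more properties or higher rates,
# so compare a per-room measure. Room counts and 1997 revenue are invented.
rooms_1997, rooms_1998 = 9_000, 10_500
revenue_1997 = 180_000_000
revenue_1998 = revenue_1997 * (rooms_1998 / rooms_1997) * 1.12   # 12% RevPAR growth
print(f"RevPAR 1997: ${revpar(revenue_1997, rooms_1997, 365):,.2f}")
print(f"RevPAR 1998: ${revpar(revenue_1998, rooms_1998, 365):,.2f}")
```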
It became clear to Price that Fulton, as an executive officer, was focused on financial concerns, development opportunities, and business partnerships. Price decided to approach Fulton from another angle. Price felt that limiting the analysis of the benefits of IT to primarily financial measures was too restrictive and actually masked IT's contributions. He suggested that, instead of focusing on only financial measures such as return on investment, growth in revenues, and so on, AMERIREAL should begin to use the Balanced Scorecard methodology (Kaplan & Norton, 1992) in evaluating the impact of IT. Specifically, Price suggested that the contributions of IT be evaluated in four areas: (1) the measurable benefits of the reduction in operational costs; (2) the improvements in staff productivity, such as marginal improvements in cycle times; (3) the costs avoided in such functions as recruiting, training, external consulting support, and reduced turnover; and (4) "soft" benefits such as increased staff knowledge, project collaboration, and sharing of ideas.
To make his point, Price used as an illustration AMERIREAL's recent investment in an Enterprise Resource Planning (ERP) System. Price reminded Fulton that AMERIREAL had invested nearly $10 million in an ERP. The addition of this fully integrated ERP system had helped AMERIREAL achieve a competitive advantage in the real estate market. After working with the vice president, accounting, Price constructed a productivity chart (Figure 2) for the Accounting Department. One of AMERIREAL's objectives had been to reduce the number of accountants required to service the properties. Since AMERIREAL utilized a Shared Service Center, Price believed that the ERP system would allow AMERIREAL to be more productive with fewer resources. He prepared Figure 2 in an attempt to prove to Fulton that technology was in fact contributing to the bottom line. In Figure 2, productivity was defined as the number of properties for which each accountant was responsible. As Price pointed out, before the ERP system was implemented in January 1997, each accountant had handled five properties. After 18 months of ERP system operation, each accountant was capable of handling nearly 10 properties. The reduction in costs was in excess of $700,000. Fulton had not previously realized that the ERP system had resulted in an actual reduction in costs. He had considered the ERP to be only a benefit to the IT staff, a qualitative improvement rather than something tangible that impacted his overall financial performance. Although he was not convinced that IT was a primary factor in AMERIREAL's success, he was nevertheless impressed with the information presented by Price. Fulton departed the meeting clearly believing that, in at least this one instance, IT had been a contributing factor to AMERIREAL's success.

Figure 2. Productivity contribution of ERP system (Amerireal property accounting productivity/cost comparison, January 1997 to June 1998: productivity in percent plotted against payroll costs in dollars at intervals from 1/1/97 to 5/1/98)
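A minimal sketch of the productivity argument is shown below. The five and roughly ten properties per accountant come from the case; the portfolio size and the fully loaded cost per accountant are invented so the saving lands in the general vicinity of the case's figure of over $700,000, and should not be read as AMERIREAL's actual numbers.

```python
import math

def accountants_needed(properties, properties_per_accountant):
    """Headcount required to service a property portfolio."""
    return math.ceil(properties / properties_per_accountant)

properties = 150                  # invented portfolio size for illustration
cost_per_accountant = 45_000      # invented fully loaded annual cost

before = accountants_needed(properties, 5)    # pre-ERP: 5 properties each (case)
after = accountants_needed(properties, 10)    # post-ERP: ~10 properties each (case)
saving = (before - after) * cost_per_accountant

print(f"Accountants before ERP: {before}")
print(f"Accountants after ERP:  {after}")
print(f"Annual payroll saving:  ${saving:,.0f}")
```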
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Price had been successful, in this one instance of an ERP application, in devising a method for demonstrating the contribution of IT to company performance and productivity. However, almost immediately following his meeting with Fulton, he began to sense the challenge that he faced in providing other such examples in order to further convince management of the value of IT investment. After all, the more proof he could provide of returns on such investment, the more likely management would be to make
additional investments in IT, something that Price considered essential to continued company success. On the other hand, if he could provide only limited evidence of the payoff of IT investment, further IT budget increases would probably not be so easily obtained. His potential dilemma was this: He had to find meaningful methods for measuring the benefits of most, if not all, of the various IT applications. Can the contribution of all IT applications be measured somehow? For instance, can cost reductions expected to result from replacing telephone systems and answering agents with a Web site actually be quantified? Or, can the anticipated revenue and cost benefits of moving to an electronic commerce Web site be measured? Alternatively, should some evaluations be qualitative rather than quantitative? If so, how is the contribution of IT assigned when dollars or other numbers cannot be logically generated? As he left his office for the day, Price realized there were no easy solutions to the problem.
REFERENCES
Kaplan, R. S., & Norton, D. P. (1992, January-February). The balanced scorecard — Measures that drive performance. Harvard Business Review, 71-79.
Loveman, G. W. (1994). An assessment of the productivity impact of information technologies. In T. J. Allen & M. S. Morton (Eds.), Information technology and the corporation of the 1990s (pp. 84-110). New York: Oxford University Press.
Mahmood, M. A., & Mann, G. J. (1993). Measuring the organizational impact of information technology investment: An exploratory study. Journal of Management Information Systems, 10(1), 97-122.
FURTHER READING
Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 67-77.
Kivijarvi, H., & Saarinen, T. (1995). Investment in information systems and the financial performance of the firm. Information and Management, 28, 143-163.
Hitt, L. M., & Brynjolfsson, E. (1997). Information technology and internal firm organization: An exploratory analysis. Journal of Management Information Systems, 14(2), 81-101.
Loveman, G. W. (1994). An assessment of the productivity impact of information technologies. In T. J. Allen & M. S. Morton (Eds.), Information technology and the corporation of the 1990s (pp. 84-110). New York: Oxford University Press.
Mahmood, M. A., & Mann, G. J. (1993). Measuring the organizational impact of information technology investment: An exploratory study. Journal of Management Information Systems, 10(1), 97-122.
Mahmood, M. A., & Mann, G. J. (Guest Eds.). (2000). Impacts of information technology investment on organizational performance [Special issue]. Journal of Management Information Systems, 16(4).
Mitra, S. A., & Chaya, A. K. (1996). Analyzing cost-effectiveness of organizations: The impact of information technology spending. Journal of Management Information Systems, 13(2), 29-57.
Raymond, L. (1990). Organizational context and IS success. Journal of Management Information Systems, 6(5), 5-20.
Mo Adam Mahmood is professor of computer information systems in the Department of Information and Decision Sciences at the University of Texas at El Paso. He also holds the Ellis and Susan Mayfield Professorship in the College of Business Administration. Dr. Mahmood is a past president of IRMA. He is presently serving as the editor of the Journal of End User Computing. He has also recently served as a guest editor of the Journal of Management Information Systems. Dr. Mahmood has recently been named as one of the 2000 Outstanding Scientists of the 20th Century by the International Biographical Centre of Cambridge, England. Dr. Mahmood's research and consulting interests center on the utilization of information technology, including electronic commerce for managerial decision making and organizational strategic and competitive advantage. On this topic and others, he has published over 65 technical research papers in leading journals and conference proceedings. Gary J. Mann is professor of accounting and chair, Department of Accounting at the University of Texas at El Paso. He received a PhD in business administration from Texas Tech University, and holds the El Paso Community Professorship in Accounting. His primary research interests include the effects of information technology investment on organizational performance and the impacts of
accounting and control systems on human behavior. He has published articles in such publications as the Journal of Management Information Systems, Behavior and Information Technology, Advances in Accounting, and others. Mark S. Dubrow is the vice president of Accounting and Strategic Financial Systems of AMERIREAL Corporation. Mr. Dubrow is responsible for the financial accounting for AMERIREAL's operations and development activity. In addition, Mr. Dubrow is responsible for the design, interface and overall performance of financial systems. Mr. Dubrow has spent eight years in various accounting and financial positions with the Marriott Corporation.
This case was previously published in the Annals of Cases on Information Technology Applications and Management in Organizations, Volume 3/2001, pp. 21-31, © 2001.
Chapter VII
The Australasian Food Products Co-Op:
A Global Information Systems Endeavour
Hans Lehmann, University of Auckland, New Zealand
EXECUTIVE SUMMARY
This case tells the story of a Food Products Co-op from “Australasia”1 and their attempt to create a global information system. The Co-op is among the 20 largest food enterprises in the world, and international information systems (IIS) have taken on increasing importance as the organization expanded rapidly during the 1980s and even more so as the enterprise refined their global operations in the last decade. Set in the six years since 1995, the story demonstrates the many pitfalls in the process of evolving an IIS as it follows the Co-op’s global business development. Two key findings stood out among the many lessons that can be drawn from the case: first, the notion of an “information system migration” following the development of the Global Business Strategy of the multi-national enterprise through various stages; second, the failure of the IIS to adapt to the organization’s strategy changes set up a field of antagonistic forces, in which business resistance summarily killed all attempts by the information technology department to install a standard global information system.
BACKGROUND
Marketing authorities for land-based industries (such as fruit growers, meat producers, dairy farmers, forestry, etc.) are often large companies with a strong international presence. The Australasian Food Products Co-op 2 (the "Co-op") with some $5bn revenue is one of the largest. Like most of the others, the Co-op is a "statutory
monopoly”, as there is legislation that prohibits any other organization from trading their produce in international markets. With about a quarter of its revenue from raw materials and manufacturing outside Australasia, the Co-op is a mature transnational operator. Structured into nine regional holding companies, it has a presence in 135 offices in 40 countries. The 15,000 primary producers are organized into 35 cooperative “Production Companies” (ProdCos), where they hold shares in proportion to their production. The ProdCos, in turn, own the Co-op. This tight vertical integration is seen as a big advantage. It allows the Co-op to act as one cohesive enterprise and to develop a critical mass needed in most of its major markets. Figure 1 shows this structure and the product flow. The Co-op distributes its profit through the price it pays farmers (by way of the ProdCos) for their produce. Because of this, there is no ‘profit’ in the normal sense shown in the Co-op’s accounts, which makes the traditional (financial) assessment of the Co-op’s performance somewhat difficult. Similarly, because the Co-op lays down a demand forecast for each product by volume and time, which determines production schedule of the ProdCos, their performance is difficult to measure. In the absence of hard and fast facts, then, the interaction between the Co-op and its owners is a highly political one. Particularly the larger ProdCos have been pushing for some time to relax the Co-op’s monopoly, so that they may export on their own account, arguing that they could achieve a higher return for their farmers. The Co-op counters this by pointing out that it has maximized the average return to all farmers: they could not allow a second party to “pick the eyes” of its most lucrative markets and leave the Co-op to deal with the difficult and marginal ones to the detriment of the majority of dairy farmers.
Figure 1. Business structure of the Co-op
(The figure shows the ownership chain running from the primary producers through the production companies (ProdCos) to the Co-op, and the product flow: raw product and processed goods from the ProdCos and the Co-op's manufacturing sites pass as ingredients, processed goods, and industrial and consumer products through the regions and their large and small offices to food manufacturers, FMCG retailers and wholesalers, and food services customers.)
Figure 2. Mix of the Co-op's markets by market/industry segment (sales by business segments/markets): Consumer 23%, Food Service 2%, Ingredients 34%, Other 41%
NATURE OF THE CO-OP’S BUSINESS
With the exception of 2% of its revenue, the Co-op deals exclusively with basic food products and their derivatives. It divides its operations into three main business segments, defined as follows:
1. Consumer Market — Product is sold to the consumer in a consumer pack under a brand owned by the Co-op.
2. Food Service Market — Product is supplied directly to third parties who prepare and generally serve food, meals, and snacks away from home. This is a new and very fast-growing segment.
3. Ingredients Market — Product is sold to third parties who are usually industrial food manufacturing companies, but can also be any other industrial manufacturing companies (e.g., clear plastics in the North American market).
In addition, there is a large group of “other” markets, which is a mixture of non-brand sales, non-own-product sales and covers a very large variety of minority and speciality operations and markets. Figure 2 shows the proportions of the four business areas. The Co-op’s major markets are in Asia, the Americas and Europe,3 which make up 85% of its revenue. Australasia,4 the CIS,5 and the Middle East6 make up the rest. Figure 3 shows the distribution of the Co-op’s business over the major regions.
The Co-op’s Operations
There are, in principle,7 two types of operations that the Co-op is involved in:
1. Distribution — This includes the sales effort, the warehousing of product, and the logistics of obtaining and delivering it.
2. Manufacturing of product — Mainly branded goods.
Figure 3. Geographical distribution of the Co-op's business (approximate shares: Asia 33%, the Americas 31%, Europe 21%, Australasia 9%, with the CIS and the Middle East making up the remaining few percent)
The manufacturing operation includes the manufacture from raw material as well as the re-packaging of product from mass transport units into end-consumer products. It does not, however, include activities that are carried out by the ProdCos, albeit to the Co-op's specifications:
(a) the primary manufacturing processes that result in the required variety of raw materials for further use; and
(b) the manufacturing of end-product in the primary processing plants: although this occurs to the Co-op's specifications and orders, the actual process is outside the Co-op's management area.
Figure 4 illustrates the operations flow within the Co-op. Collecting the fresh product, all primary processing, and some further processing (to produce end-product) take place under the control of the ProdCos, on whom the Co-op places its orders. These are either for end-product (branded consumer products and ingredients) or for raw material for further manufacturing in the regions. There, additional local raw materials may be used for the production of mainly branded consumer product. Customers in the three main markets (consumer, food services, and ingredients) place orders and take deliveries locally.

Figure 4. Schemata of the Co-op's operations (fresh product flows from the primary producers through primary processing and end-product manufacturing at the ProdCos to the Co-op; the international subsidiaries add local raw material and local manufacturing and packaging, and serve ingredient, food services, and consumer sales orders and deliveries to customers and markets)

The main markets also have an influence on the Co-op's operation: the nature of the order-to-delivery cycle is different for each of them:
1. Consumer products are mostly sold through large retailers, and special arrangements are often made for each individual retailer. An example is the arrangement with Sainsbury's, a large food retailer in the UK: the Co-op simply stocks its shelving space within the Sainsbury's shops on a straight replenishment basis, and Sainsbury's pays for product once it has passed through their EFTPOS system. Instead of a traditional cycle of order, delivery note, invoice, and payment (and all the reconciliations in between), the Co-op's in-store stocks are reconciled against the (electronic) EFTPOS records. Payments are made electronically into the Co-op's account.
2. Food services are either (a) individual enterprises that order by telephone or by collection, where cash purchases are frequent, or (b) large franchises (fast foods, such as McDonalds, and theme restaurants like Sizzlers), where delivery is often to central distribution points and orders are issued electronically or in the form of standing orders.
3. Ingredients are sold to industrial goods manufacturers; they are often a critical raw material and as such are required in a just-in-time fashion. Orders are often received electronically, and payment can be on delivery, with corrective credit (if any) passed back at any time.

Two additional factors shaping the nature of the Co-op's operations are:
1. Size of the Local Office — Large offices, dealing with large numbers of transactions, tend to have different, more regimented systems than small ones, where flexibility and human interaction are often critical to business success but are frequently inimical to formal systems.
2. Environmental sophistication — This has two, albeit interrelated, aspects:
(a) Market sophistication: Co-op revenue comes in equal parts from sophisticated markets like U.S. manufacturing and UK and Japanese food retailing on the one hand, and developing areas like Latin America and Asia on the other, where business processes are often somewhat simpler but business practices are habitually very different and technologies can be a hybrid between ancient and leading edge. Figure 5 depicts the split in revenue between developing and developed countries and regions.

Figure 5. Co-op revenue between OECD and developing countries (OECD countries: $2,569m; developing countries: $2,450m; revenue is shown by region, including North Asia, Australia, North America, Europe, New Zealand, East and South Asia, Latin America, the Middle East, the Pacific, and the CIS)
The Co-op’s business operations show a great variability with respect to a number of key variables. Revenue per employee varies widely from places that deal mainly in large volume, commodity-type products (as in the CIS) to places with high competition from a large domestic industry such as the UK and Europe. Not as wide is the variation of price per ton of product across the regions. However, it still oscillates from minus 30% through to plus 50% of the average price. Figure 6 accumulates all the deviation comparisons in one graph to convey the overall variability in the Co-op’s business — as an indication of the large differences of the underlying regional operations.
External Environment
Over the last 20 years, the external environment for the Co-op’s products and markets has become increasingly more demanding. Prior to that, the Co-op exported the Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
116
Lehmann
Figure 6. Variation of key indicators across the Co-op’s regional operations (deviation from mean in % of the mean)
Value per t ($) Staff/PC
10
Rev/Staff ($000) 9
8
7
6
5
4
3
2
1
-100%
-50%
0%
50%
100%
150%
200%
250%
300%
350%
vast majority of its produce to a small number of European countries who used to accept it all and in some instances even created customs barriers to protect this trade. Once the Common Agricultural Policy (CAP) was made binding for all EC members, however, all such barriers had to go and other European countries flooded the market with heavily subsidized product. Virtually overnight the Co-op had to develop new markets. The food products industry in Australasia shrank dramatically in the initial years, but by 1990 had recovered equally dramatically. The remaining farmers are now very efficient high volume producers and the Co-op had turned itself from a producers club into a very effective international operator. Although still only a tenth the size of the largest international food corporations,8 it has now moved into the same league as most major international food merchants. The establishment of the World Trade Organization has brought an era of high opportunity for the Co-op. The new organization will for the first time introduce new trade rules for agricultural products, which will benefit the Co-op in two ways:
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
The Australasian Food Products Co-Op
•
•
117
They would dictate a gradual, but significant widening of the access to markets for all participants in the world trade; in the past there have been direct barriers (such as in the EC) and often indirect, non-tariff hurdles to market entry (such as the USA) which have very effectively hindered the Co-op’s expansion. Limits have been introduced to the amount of export subsidies producer countries can grant their dairy industries.
This new set of rules works very much in favor of the Co-op with its high degree of vertical integration, its efficient producers and processing operations, and its wellfunctioning and widespread marketing organization.
BUSINESS BACKGROUND: THE CO-OP IN THE 1980 S AND 1990 S
Until the onset of the 1980s, the Co-op had maintained a small number of ordertaking and warehousing operations mainly in the UK and Europe. With the dwindling away of those easy markets, the imperative was to find alternative places to sell dairy produce to. In rapid succession a number of subsidiary offices was set up and agencies nominated to deal with such markets as the U.S. The main emphasis in this period was to shift product, to establish toeholds in new markets and to start building the foundations for a more permanent and long-termoriented market presence later. The most effective way of achieving this was to staff these sales offices with capable and aggressive individuals and “let them get on with it”. As long as they managed to “move” their quota, they had a high degree of freedom as to how they did it and with whom. This policy of local autonomy worked very well and achieved very much what was expected: within a decade the Co-op had built a presence in more than 30 countries and had managed to bring home, year after year, a regular, if not sensational, income for the all the ProdCos’ primary producers and farmers. At the onset of the 1990s, however, competition in the Co-op’s main markets had become strong and increasingly global. With the emergence of global brands (such as Nestlé, Coca Cola, McDonalds), the Co-op needed to develop global brands themselves. For this, they had to have sufficient command and control to mount synchronized international marketing and logistics operations. The main focus was on determining where to strike the balance between preserving the proven “freedom” formula with the need to control and “dictate” operations with a fast global reach. The high degree of local autonomy continues to maintain a great enthusiasm within local management, and the corresponding momentum still pushes the business at very good rates of expansion. On the other hand, global branding and parallel, multichannel distribution have proven their usefulness in launching new products into diverse markets with widely divergent levels of customer and/or retail sophistication such as in Europe and South East Asia. The arrival of a new chief executive officer in late 1994 began to instill purpose and urgency into this reorganization process. The Co-op began a concerted campaign to shift authority and control over branding and global marketing policy back to head-office. The
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
118
Lehmann
Figure 7. The Co-op’s migration of global business strategy High
Trans Trans National National
Global Global
U.K. Exports
Global Control
Inter Position in Inter National Case Period National
HQ driven Consolidation
Low
r St
at
ic eg
n sio Vi
Global Expansion Multi Multi National National
Low
High
Local Autonomy
CEO’s vision was one of balanced central control and local flexibility. Figure 7 shows this development, using the Bartlett and Ghoshal9 framework. Part of this new outlook was a critical look at the Co-op’s operations. In early 1995, partly for political reasons and partly to review the results of the past efforts to shift the strategic focus of the organization, the Co-op commissioned the Boston Consulting Group (BCG) to carry out an evaluation exercise of how the Co-op fares when compared with a benchmark of international best practice. Table 1 summarizes the results of their assessment report. While overall very complementary of the Co-op’s achievements, BCG were very critical of the amount of counterproductive politicking going on in the industry. They specifically focused on the efforts of the larger ProdCos to undermine the role of the Coop and to move towards independence. Criticizing this strongly, the consultants reiterated the importance of size and critical mass for an international food marketer. Regarding its internal business functions, BCG assessed the Co-op itself a fit and effective enterprise. The Co-op was strongly commended on its strategic roles, but did
Table 1. Summary of the BCG evaluation of the Co-op’s management practices Objective setting Statutory Reporting Organizational Structures Management Reporting Systems In line with best practice Contrary to best practice
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
The Australasian Food Products Co-Op
119
not fare so well with regards to its organizational structure and, particularly, with respect to its management control and systems. The Co-op accepted the critique of a fragmented, unclear and unfocused organization structure — an inheritance from its past rapid growth — and put in place a streamlined and efficient organizational structure. As a consequence of the low benchmark for management control systems, the CEO took a personal interest in the establishment of the Food Information Systems and Technology (FIST) project, which focused on business processes and control systems throughout the Co-op’s operations worldwide.
THE IS LANDSCAPE IN 1995
During the 1970s and early 1980s, the Co-op had built up a sizeable IS department with a mainframe operation at the head office, linking up with all the main subsidiary offices and ProdCos throughout the country. Foreign activities were few and hardly needed computer support. The forced expansion drive in the late 1980s, however, created an increasing need for local operations to be supported with information systems. By the mid-1990s a number of regional offices had bought computers and software to suit their own, individual requirements (as shown in Figure 8), and each installation had selected its own software to suit its own needs.
Reporting and ordering procedures back to the Co-op's head office, however, were often manual and/or involved re-keying output from the local machines. Furthermore, the differing capabilities and varying data structures of the local systems often meant that it was not always clear whether figures reported on a particular subject did indeed follow the same definition throughout all the countries. Monthly routine reports were received by the Co-op anything from one week to three weeks after the event. Some special reporting requirements required widely differing efforts to implement and often could not be obtained across all subsidiaries.
Figure 8. The Co-op's configuration of information technology (a schematic of the regional installations' heterogeneous equipment, including DEC, IBM, SUN, HP, AS/400 and DG mainframes and minicomputers, annotated with the number of workstations and terminals at each site)
The same difficulties (differing definitions) also plagued the attempt to install a brand/product reference system across some or all of the subsidiaries. Each local information system had implemented its own product numbering scheme, based on its own, different logic, often in the form of composite numbers with an in-built structure.
Against this background of a proliferation of loosely, if at all, coordinated local systems and a declared will from the Co-op's center to impose more control over the enterprise as a whole, the IS Department in April 1995 established a "Framework for Information Systems". This was a declaration in principle that in the future there would be information technology standards throughout the Co-op's operating companies. In December 1995, this standards framework was extended to become the "Charter for FIST", intended to facilitate the implementation of:
•	Standard and common information definitions and formats across all the Co-op;
•	Common information systems for the Co-op's business; and
•	An integrated flow of information from order to delivery, with electronic interfaces reducing the number of interrupt steps in this process.
The Co-op's Board formally ratified the FIST "Charter" and the underlying framework. The Co-op's IS department subsequently interpreted this ratification as a mandate to develop common information systems for all of the Co-op's operating companies. The following tasks and subprojects were identified within FIST:
1.	Business Engineering — This was to straighten out order processing and inventory control throughout the Group.
2.	The Global Applications Project (GAP) — With the specific aim of designing and implementing common systems throughout the Group.
3.	Corporate Reporting — That is, common information standards for regular reports.
4.	Technology Standards Definition.
5.	Tele-Communications Strategy.
As a precursor, a "Benchmarking Project" was carried out. This was a basic fact-finding exercise to define the current position and to establish a basic and common understanding of the Co-op's business operations and processes. The outcome of these exercises was a "business model", outlining where various business functions should be placed and the basis for this placement. A further outcome of the project was an estimate that the Co-op was spending approximately $80m on information systems, and that this would increase if common systems were not introduced. The business model was later named the Enquiry To Cash (ETC) model and would serve as the single process of supply and demand management across all of the Co-op's subsidiaries as well as the interface with the ProdCos. This is shown in Figure 9.

Figure 9. The ETC model of business operations (the Enquiry-to-Cash cycle linking the Regional Offices, the Co-op HQ and the ProdCos)

A first statement was also made at this stage about the style of the project, in response to a query by the head of the Latin America office — who was worried that the FIST project would, from HQ, dictate a new, common, global system which would make his considerable investment in information systems obsolete. The CEO assured him that "While the Group Information Systems project will initially focus on two pilot sites, all regions will be involved and kept informed". Furthermore, "each Region would be asked to nominate representatives who will be fully involved as members of the project team" and "the FIST project will concentrate more on information and common reporting standards" so that "there is as yet no intention of requiring that each region adopt the same computer system".
The first project strategy and plan for FIST foresaw the following main stages:
1.	Development of a prototype system with a site which is reasonably representative of most of the Group's offices;
2.	Implementation of the prototype in a small number of pilot sites, and further adaptation of the prototype to make it functional as a global system; and
3.	Gradual "roll-out" of the "global system" into selected regions.
Estimated completion dates were late 1996, early 1997 and mid-1998, respectively.
North America, by the end of 1995, had embarked on a review of its information systems. Their old IM system was becoming obsolete and the software was
also in need of a functional upgrade. The South East Asia region was by then also looking to replace its fragmented PC-based installation with a more coherent information system to cope with the rapid growth the region was experiencing. Both sites thus became the natural candidates for the development of the prototype and as pilot sites for further implementation:
•	North America has a large ingredients market and would become the prototype for the Co-op offices serving this business sector.
•	Singapore is the office looking after South East Asia; it has a large consumer market, which would make it a good prototype for all consumer and/or mixed business offices within the Group.
FIST began in earnest with the dispatch of a team to North America in March 1996.
The North America Pilot
Having realized that their system needed upgrading, the North America region was now very keen to go ahead with the replacement project as fast as possible. However, as this was to be the pilot for a global system, the FIST team was mainly interested in obtaining a thorough requirements specification for use as the foundation for further globalization. Negotiating these pressures, the FIST team, after replanning its efforts, compromised and agreed to January 1997 (nine months hence) as the date for going live with the new North American system, alias the first FIST pilot.
At the same time, Singapore started the process of looking at their requirements. They expected that the FIST team would do this and were quite concerned when the FIST team restricted itself to comparing the ETC business model it had adapted for North America with the South East Asia region, finding a 90-95% match. Following some heated discussions between the South East Asia regional management and the FIST team, it was agreed that the region would go ahead and, "for the time being", carry out an update of its existing application systems. The South East Asia regional general manager later extended the terms of reference to allow, "for the time being", an upgrade of equipment "to run the updated software efficiently". He was also very critical of what he called the "top-down approach" taken by FIST. With very little influence or participation by the regions, he feared it would be like the other "past failures of the Computer Centre". For all these reasons, the South East Asia region "bulldozed" their proposal for an independent information systems effort through the next executive meeting.
By the middle of 1996, North America was the only pilot site. Time pressure was beginning to take its toll on the style of the team: FIST management was now actively encouraging the narrowest possible user participation in order to deliver a system by the January 1997 deadline. They were now excluding further input from North America and started to drive the requirements specification predominantly from HQ, using the ETC model as the basis for the "engineering" of the new business processes. In reaction to this, at a Finance conference in November 1996, North America and the other regional managers issued a strongly worded memo demanding broadly based involvement, to avoid wasting effort on a system which, they felt, would ultimately not support their business. The FIST manager complained: "The finance conference has attempted to
change the rules with regard to FIST. Prior to this, we were responsible for progressing the approach and we would keep the other regions informed. Now it was suddenly 'agreed' that every man and his dog would be involved. The FIST timetable cannot absorb this extra involvement without bursting". A timetable was attached to this memo to the CEO to show that the project would take three times as long and cost five times as much if participation by other regions was to be allowed. The CEO sided with the FIST team and issued a circular mandating that the FIST project be fully supported by everyone.
As the North America pilot project was still aiming for the January 1997 deadline, two parallel activity streams were developing. First priority was given to producing and issuing a Request for Proposal (RFP) for software and hardware, to be used internationally as the base modules for the global system. Simultaneously, and as an afterthought, the CEO was now calling for a more detailed Cost Benefit Analysis.
THE GLOBAL REQUEST FOR PROPOSAL (RFP)
In June 1996 the FIST team started to document the set of requirements for North America which, as they were based on the ETC model, could be expanded to a global level. To do this, a mixed team from North America and FIST was assembled at the Co-op's HQ. The RFP was sent to all the regions for comment. These comments, together with the reactions of the FIST team, are shown in Table 2.

Table 2. Reactions by the FIST team

Concerns and Comments | FIST Reaction/Action
The requirements for the Consumer and Food Service (together approx. 60% of the Co-op's business) are not covered, and examples were given. | The comments and examples of the Europe and South East Asia regions were summarised and inserted into the RFP as an addendum.
The January 1997 deadline is unrealistic. | "January 1997" remains unchanged.
Since the systems and technology chosen will become a global standard, the regions want participation in the evaluation and selection process following the RFP. | "Discussion will continue to ensure that we achieve a reasonable balance of regional involvement without impacting the timetable."
The concept of common systems for [the Group's] "core information systems" is strongly questioned (wide differences in the business) and not accepted by key regional management. | A list of "core information systems" will be prepared for Executive agreement.

The RFP, asking for firm quotes for software, hardware and communications technology, was finally issued in late September 1996. Replies were expected in early January 1997, in order for the selection to be concluded and for the FIST team to put together a capital expenditure proposal in February 1997.
After a rapid evaluation, mainly by the FIST team with some North America input, ORACLE was chosen as the main provider for database middleware and, together with DATALOGIX, for applications software. No decision was made on either the hardware or the communications technology proposals other than to exclude IBM and Andersen Consulting. Privately, however, HP hardware and EDS as the main contractor for the worldwide communications network (and possibly as a support management contractor for all local technology) were favored. Back in North America, the emphasis now switched from requirements analysis to the design of the new system.
COST BENEFIT ANALYSIS
Six main areas of benefits were identified by the FIST team:
1.	Streamlined Information Flows — In the current flow of information, specifically the Enquiry-to-Cash flow, "there is a high number of interrupt points, few of which add value to the business transaction"; substituting this with an electronic information flow will eliminate duplication, but will require "a common business language".
2.	Reduced Business Cost — Elimination of duplication (of data entry, of reconciliations, etc.) and the introduction of management by exception are expected to result in staff savings.
3.	Reduced Information Systems Costs — Common subsystems are supposed to forestall duplication of development efforts and technology watch, and to reduce the cost of information systems management (by concentrating it in the center).
4.	Integration of Information Systems with Business Planning — The common global systems will be an "integral part of the business rather than an external ad hoc activity".
5.	Enhanced Reporting — Common systems are expected to make it easier to introduce standard Key Performance Indicator reports, enable easier implementation of Executive Information Systems, and shorten report cycles.
6.	Implementation of Agreed Key Principles for System Configuration — A set of "Five FIST principles" for system design and configuration was issued to all regional management.
The FIST team then estimated the dollar value of the benefits, restricting themselves, however, to the estimation of efficiency benefits. A summary of these direct savings estimates is shown in Table 3; the major costs of the project were estimated as shown in Table 4. A Net-Present-Value evaluation of the FIST project showed that, over five years, it represented an NPV of $10.18m, with an internal rate of return of some 70% and a payback of about two years.
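The case reports only these headline figures; the cash-flow schedule and discount rate behind them are not given. The sketch below is therefore purely illustrative: it assumes a hypothetical five-year cash-flow profile (in $m) and an assumed 12% discount rate, and simply shows how NPV, IRR and payback figures of this kind are computed.

```python
# Illustrative only: the cash flows and discount rate below are assumptions,
# not figures from the case. Year 0 carries the bulk of the project cost;
# later years carry recurring savings net of remaining roll-out spend.
cash_flows = [-14.0, -9.0, 5.0, 18.0, 18.0, 18.0]   # $m, years 0..5 (hypothetical)
discount_rate = 0.12                                 # assumed cost of capital

def npv(rate, flows):
    """Net present value of a list of yearly cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.9, hi=5.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change in the flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(flows):
    """First year in which the cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(flows):
        total += cf
        if total >= 0:
            return t
    return None

print(f"NPV  = ${npv(discount_rate, cash_flows):.2f}m")
print(f"IRR  = {irr(cash_flows):.0%}")
print(f"Payback in year {payback_years(cash_flows)}")
```

With these invented figures the calculation naturally returns different numbers from the FIST estimate; only the method, not the data, is intended to match.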
Table 3. Summary of savings

Savings Description | Redundancies | $ Value p.a.
Recurring savings:
Elimination of jobs within the business operations | 100 (central), 65 (local) | $8.25m
Elimination of jobs within Information Services | 25 | $1.25m
Reduction of capital/replacement costs | | $8.51m
Total annual savings | | $18.01m
Once-off savings:
Avoidance of replacement costs, central | | $1.55m
Avoidance of replacement costs, local | | $4.3m
Total once-off savings | | $5.85m

Table 4. Estimated major costs of the project

Cost Description | $ Value
FIST Project Costs (including the pilot projects in North America and UAE) | $6.3m
Streamlined Order-to-Delivery (pilot and prototype) | $6.7m
Global roll-out costs | $20.9m
Total FIST Costs | $33.9m
of which: Capital Expenditure | $14.4m
External (Consultants, etc.) | $13.3m
Internal costs | $6.2m

THE COMMON APPLICATION SYSTEMS ISSUE

Following the publication of the business and functional requirements analysis for North America, the other regions asked for a clarification of what precisely the FIST team
meant by "common information systems", specifically what was meant by common application systems. In response to this, a separate exercise was initiated late in 1996 and then carried out by the FIST team at the Co-op's HQ back in Australasia. As a result of this assignment, in February 1997 a list of the core applications was assembled. In it, the core and non-core applications were determined according to the following definition:
•	"Core applications are defined as those which organizations participating in FIST must implement in order to:
	(a)	Manage their business to meet required goals and objectives;
	(b)	Fully support the Five FIST principles of business function placement; and
	(c)	Meet the information needs of other units of the Co-op group of companies."
•	Non-Core applications are defined as follows: "The internal workings of processes by which some organizational outcomes are achieved are of no interest outside that organization. When an organization chooses to use an automated application to meet such needs, that application is considered to be non-core, no matter how essential it may be to the delivery of the outcome."

Figure 10. FIST Core and Non-Core applications (the intended split: a global "Core" IS, based on the ETC Enquiry-to-Cash cycle, shared by the Co-op HQ, the regions, the large and small offices and the manufacturing sites, with a production interface to the ProdCos' 30 factories; local application variations, covering some manufacturing, marketing and some administration, remain "Non-Core")
In the ensuing discussion it was argued that the definition of core applications was too wide, and that points (a) and (b) in particular would apply to any application within any organization. Figure 10 depicts the intended split between the proposed "Core", that is, the global standard applications, compulsory for every regional and local office, and the "Non-Core", that is, local application systems. The Core and Non-Core applications, together with an indication of whether they were part of the standard FIST application packages,10 are listed in Table 5.
Developing the North America Prototype and Pilot
Having decided upon the ORACLE/DATALOGIX software as the Group's standard application system, the FIST team began implementing the software in the North America region and very soon encountered serious problems. The manufacturing and distribution modules would not conform to the business processes they were selected to support. FIST responded by setting out a policy that "where a choice existed between the change to business practice or a change to the software system, the business practice would be changed by default." The North America regional manager refused to change business practices which had been developed for sound market and operational reasons. Furthermore, the required software changes would affect the very kernel of the application and therefore had to be carried out by Datalogix; they were estimated to cost $1.8m.
Table 5. Core and Non-Core applications

Core Applications | Standard? | Non-Core Applications | Standard?
Inventory | Oracle/Dist* | Manufacturing | Datalogix
Purchasing | Oracle/Dist* | Transport |
Import | | Marketing |
Planning | | Fixed Assets |
Sales | Oracle/Dist* | Treasury |
General Ledger | Oracle/Fin | Office Automation |
Accounts Receivable | Oracle/Fin | Payroll |
Accounts Payable | Oracle/Fin | Export |
Consolidation | | Project Management |
Product Cost | | |
Profit & Loss | | |
Executive Information Retrieval | | |
* = used to be Datalogix, see below
However, cost was not the only hindrance to a resolution of the issue. Oracle had just entered into negotiations with Datalogix about acquiring the Datalogix Distribution modules and replacing them with its own — so that any investment in Datalogix software would be wasted. For the duration of these negotiations, all adaptation work on the software came to a standstill.
Secondly, because the changes were not just to satisfy North America, but were meant to define the Global Distribution System, it became necessary to address the issue of different numbering systems and coding structures. One area where this was going to be a global difficulty was product and inventory codes. A special subproject was set up to come up with an integrated, international product code schema. This issue is still unresolved.
Similar problems plagued the financial software suite. Because it, too, was a candidate for a common Global Financial Application Package, a similar exercise was started to create a common system of account codes, suitable for all Co-op subsidiaries. The suggested solution had a 55-digit account number, with built-in logic to reflect the common chart of accounts. The project was abandoned after it was pointed out to the FIST team that in a number of countries (e.g., in most of continental Europe), the chart of accounts is prescribed by fiscal legislation. The use of "secondary" accounts is a felony in some countries, for example, Germany.
By September 1997, the FIST project was nine months late and $3.5m over budget. The FIST team suggested carrying out a business process reengineering project in North America in order to implement the ETC model there, which would subsequently become the norm for all the Group's offices.
The BPR project, however, began to go wrong very soon after its inception. The FIST team felt that they had a standard business model (developed at the Co-op's HQ) and that the object of the exercise was to work out ways in which North America's business
processes would change so that they fitted that model. North America was expecting a cooperative team project in which they would work out ways in which the North American operation could be improved. This conflict stayed unresolved until well into 1998.
In the meantime, however, North America urgently needed to do some more work on its aging software — three years into the process of trying to get it upgraded. Because their hardware and software were now obsolete, this work took priority and led to a suspension of the FIST project in North America. In early 1998 North America therefore reached an agreement that the software could be altered so that it reflected North America requirements in the first instance. By the end of 1998, North America had established a stable computer system, using ORACLE financial software to North American specifications. Their regional management got agreement from the CEO that they were no longer considered the pilot project for FIST.
The FIST manager summed up the situation in a presentation to the Co-op's executive: "…the North America pilot…originally concentrated on streamlining the business, globally, from North America. One of the things that became obvious was that a global business could not be streamlined from a subsidiary. It needed corporate focus, hence the switch back to HQ. The time in North America was a very useful exercise to prove the software and highlight areas that needed special attention".
The United Arab Emirates (UAE) Pilot
In late 1997, the Co-op decided to open a new office in the Middle East region, in Dubai. This was to support the local trade with the UAE and also to help develop more trade into the southern Middle East. As it became clear that North America was not going to be a satisfactory pilot site, the FIST team selected Dubai as the new pilot site to test the common global system for the Co-op — despite the fact that the UAE office "was only to have about a dozen people and probably does not really have any need to computerise any of its local operations", as the regional manager for Europe put it; he consequently opted out of the FIST project altogether.
The first installation was to be the "standard", that is, the unmodified ORACLE Financials and Distribution suite, with business procedures defined around the systems. The first target date for completion was September 1998. However, for want of adequate local support, the systems could not be developed on site — it was therefore decided to develop the first prototype at HQ, back in Australasia. This delayed the implementation, and by April 1999 the prototype was only 50% finished. Continuing difficulties with implementing a computer system in an unsupported and unskilled environment delayed the pilot further; after it had been "piloted" in Australasia, it was eventually handed over as a working system in November 1999 in a reduced form, that is, the Financials only. It was nevertheless considered a success and was intended for use as a model for FIST implementation in other small offices. In 2000, the FIST manager foresaw that Hong Kong, South Africa, the Philippines and mainland China would be next on the list.
Developments Concerning FIST at the Co-op's Head Office
The major difficulties with the FIST project began to attract the attention of the CEO. He was especially alarmed about:
•	the missed deadlines;
•	significant costs (between $8m and $10m, approximately, by the end of 1998) without any noticeable benefits; and
•	the refusal by North America, followed by Europe, both major regions, to accept the FIST system.
The FIST team persuaded him to become their major sponsor, and in March 1999 a revitalized FIST was launched, now with the CEO fully behind it. He had been convinced that the main obstructions to FIST were "political" and intended not to tolerate any games. The North America regional manager came close to being sacked or demoted because of his refusal to adopt FIST. A regional manager in the European region, who had sent a memo criticising the FIST team for being high-handed, received it back with an invitation to attach his resignation the next time he sent "something like that". Even executives at HQ began to regard criticism of FIST as a possible career-limiting move. In response, the regions distanced themselves from the project, which they now saw as a wholly head-office-owned affair.
However, the continuing lack of success and the ever-increasing costs of FIST soon led to a change of mood in the CEO. In November 1999 Ernst & Young was commissioned to evaluate FIST and the related ETC and other projects. Although noncommittal in the version for public consumption, their report was said to be scathingly critical of both projects as overly ambitious and basically not achievable within the existing timeframe or project setup. This proved to be a turning point: at the Co-op's executive Board meeting of January 20, 2000, the CEO reassigned the IT portfolio — and with it the FIST project — to the general manager of Finance, who had long been an open critic of the project. Advocating that business reasons should drive the project, he had called for a critical review of the reasons for wanting to spend $35m as early as March 1998. In rapid succession, the FIST project manager and the deputy manager for FIST resigned.
Current Challenges
The new FIST manager carried out a post-mortem review of the project. Its major conclusions were that:
1.	The importance of aligning the multinational firm's information technology function with the Global Business Strategy followed by the enterprise had been seriously underestimated.
2.	Professionalism and an in-depth understanding of international issues are essential for even starting an international information systems project.
3.	Failing to recognize both had led to unproductive political interaction, which detracted from the objectives of the exercise and ultimately resulted in the demise of the project altogether.
Following these findings, a "conciliation" committee was formed, consisting of regional business managers and information technology people. After some deliberations, the committee has now commissioned a new, follow-on project — led by the manager of the Europe region — to reposition the thrust of the strategic planning efforts with respect to information technology. The primary objective of the new, rolling Information Technology Strategy will be to improve the effectiveness of the Co-op's operations by contributing to the recognized "Core Competencies" of the Co-op. The acquisition of any technology to support this will not be an objective in itself but rather a result of the primary objective.
In parallel, the CEO formed an executive project team to review the organizational and authority boundaries between the central, head-office functions and the responsibilities and accountabilities of the regional and local management teams. The outcome of this review is expected to foster a constructive discussion on how best to achieve the balance of central control and local autonomy that the Co-op's global business strategy requires.
REFERENCES
Butler Cox plc. (1991). Globalisation: The information technology challenge. Amdahl Executive Institute research report (chaps. 3, 5, 6). London.
Ives, B., & Jarvenpaa, S. L. (1991, March). Applications of global information technology: Key issues for management. MIS Quarterly, 33-49.
Ives, B., & Jarvenpaa, S. L. (1992, April). Air Products and Chemicals, Inc: Planning for global information systems. International Information Systems, 78-99.
Ives, B., & Jarvenpaa, S. L. (1994). MSAS Cargo International: Global freight management. In T. Jelassi & C. Ciborra (Eds.), Strategic information systems: A European perspective (pp. 230-259). New York, NY: John Wiley and Sons.
Jarvenpaa, S. L., & Ives, B. (1994). Organisational fit and flexibility: IT design principles for a globally competing firm. Research in Strategic Management and Information Technology, 1, 8-39.
King, W. R., & Sethi, V. (1993). Developing transnational information systems: A case study. OMEGA International Journal of Management Science, 21(1), 53-59.
King, W. R., & Sethi, V. (1999). An empirical assessment of the organization of transnational information systems. Journal of Management Information Systems, 15(4), 7-28.
Lewin, K. (1952). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.) (chaps. 5, 8). London: Tavistock.
Tractinsky, N., & Jarvenpaa, S. L. (1995). Information systems design decisions in a global versus domestic context. MIS Quarterly, 19(4), 507-534.
Van den Berg, W., & Mantelaers, P. (1999). Information systems across organisational and national boundaries: An analysis of development problems. Journal of Global Information Technology Management, 2(2), 32-65.
FURTHER READING
The following are excellent primers into the specific issues surrounding international information systems and the research in this field:
Applegate, L. M., McFarlan, F. W., & McKenney, J. L. (1996). Corporate information systems management, text and cases (4th ed.) (chap. 12). Chicago: Irwin.
Collins, R. W., & Kirsch, L. (1999). Crossing boundaries: The deployment of global IT solutions. Practice-driven research in IT management series™ (chaps. 4, 5). Cincinnati, OH.
Deans, C. P., & Karwan, K. R. (Eds.). (1994). Global information systems and technology: Focus on the organisation and its functional areas (chaps. 3, 5-8). Hershey, PA: Idea Group Publishing.
Deans, P. C., & Jurison, J. (1996). Information technology in a global business environment — Readings and cases (chaps. 5-7). New York: Boyd & Fraser.
Laudon, K. C., & Laudon, J. P. (1999). Essentials of management information systems: Organisation and technology (3rd ed.) (chap. 15). Upper Saddle River, NJ: Prentice Hall.
Palvia, P. C., Palvia, S. C., & Roche, E. M. (Eds.). (1996). Global information technology and systems management — Key issues and trends. Ivy League Publishing, Ltd.
Palvia, S., Palvia, P., & Zigli, R. (Eds.). (1992). The global issues of information technology management. Hershey, PA: Idea Group Publishing.
Roche, E. M. (1992). Managing information technology in multinational corporations. New York: Macmillan.
A number of cases of less than successful international information systems are contained as the (often historical) international cases in anthologies and monographs on large information systems failure, for example:
Flowers, S. (1996). Software failure: Management failure (chaps. 3, 4-9). New York: John Wiley & Sons.
Glass, R. L. (1992). The universal elixir and other computing projects which failed (chaps. 5, 6, 9). Bloomington, IN: Computing Trends.
Glass, R. L. (1998). Software runaways (chap. 3). Upper Saddle River, NJ: Prentice Hall PTR.
Yourdon, E. (1997). Death march: The complete software developer's guide to surviving mission impossible projects (chap. 5). Upper Saddle River, NJ: Prentice Hall PTR.
ENDNOTES
1	Location is disguised.
2	The case is based on the history of an international information systems project in a real enterprise. It has, however, been simplified, altered and adapted for use as an instructional case. For this reason the enterprise has requested to remain anonymous. Names, places and temporal references have been changed to disguise the enterprise; all monetary references are in U.S. dollars.
3	Includes sub-Saharan Africa.
4	Includes Australia, New Zealand, Polynesia, Micronesia, and the Philippines.
5	Commonwealth of Independent States, the former Soviet Union.
6	Includes Northern Africa.
7	The following description is substantially simplified in the interest of clarity.
8	Nestlé, a large public food company, has annual sales of around $50bn; Cargill, a privately held food company, has a revenue of $47bn.
9	Bartlett and Ghoshal (1989) developed a framework for the classification of enterprises operating in more than one country, centered on the level and intensity of global control versus local autonomy. Global firms maintain high levels of global control while Multinationals give high local control. Transnational organizations balance tight global control whilst vigorously fostering local autonomy. This strategy of "think global and act local" is considered optimal for many international operations. Internationals are an interim state, transiting towards a balance of local and global.
10	For example, ORACLE and/or DATALOGIX software.
11	The analysis that underlies the categories and relations mentioned in the following sections was derived from the full case, i.e., it draws on an information base that exceeds the details given in the case story in the chapter. It may, however, be used to extend any analysis based on the case as it is contained in the chapter.
12	Italics denote the name of a category.
Appendix: Some Directions for the Analysis of the Case

One way in which a case study can be analyzed is to look for the key factors that underlie the dynamics of its story. In sociology such factors are often referred to as the "categories" of a social scenario and their interplay is termed the "relations" between them. This terminology is used to suggest some directions for the analysis11 of this case in the following paragraphs.
The core categories found in the Co-op case fall into two groups, depending on whether the category stems from the business or information technology domain. In the business domain, six core categories were identified. They are:
1.	Nature of the Business12: The aspects of the Co-op's business that were relevant for the global IS project.
2.	Global Business Strategy: The relevant aspects of the Co-op's current global business strategy.
3.	Lack of IT at the Headquarters: The Co-op's inexperience in IT and lack of "IT awareness" culture.
4.	Strategic Migration: The history of how the Co-op's global business strategy has evolved over the years.
5.	Tradition of Autonomy: The degree of freedom enjoyed and defended by regional and local management.
6.	Rejection of Global IS: The actions and manoeuvres to avoid the acceptance of a global standard IS.
The uniqueness of the Co-op's Nature of Business has the most fundamental influence on the case. It determines the essential characteristics of the Global Business Strategy. Similarly, the Strategic Migration (i.e., the evolution of the Co-op's global business strategy) has a significant impact. All three of these factors combine to establish a distinct Autonomy Tradition among the Co-op's local offices and regions. Although assisted by other influences, this Tradition of Autonomy is the main determinant of the Rejection of the Global Information System, which in itself becomes a major influence in the development of the global system. The other information technology-related category is the Lack of IT Experience at the Co-op's headquarters. That naivete with respect to information technology at the Co-op's center reinforces the Tradition of (local) Autonomy, this time with respect to the management of local and regional information systems.
There are seven core categories in the information technology domain:
1.	Analysis: The main assumptions and paradigms governing the analysis of the business requirements for the global IS.
2.	IS Professional Skills: Quality of the professional skills brought to bear on the systems development process.
3.	Domestic Mindset: Inadequacy in the understanding of international issues.
4.	Conceptual Capability: Ability to conceptualize and think through complex issues.
5.	IS Initiative: The global IS project was initiated by IS, and not from the business.
6.	Global Standard IS: The standardized, centrist nature of the global IS design ("one system fits all").
7.	IS by Force: Using political power play to force the business to accept the global IS.
A group of four categories shapes the character of the remaining three categories in this group. The most fundamental of these "conditioning" categories is Conceptual Capability, that is, the ability to deal with business complexities. Low conceptual capability has a negative effect on the three other conditioning categories. This is most noticeable for the Analysis category, the agglomeration of paradigms used in system design — most of which were fallacious as a consequence. The equally low quality of the IS Professional Skills and the pronounced inability to comprehend international issues, summarized in the Domestic Mindset category, were further negative influences on the Global Standard IS. In concert, the four "conditioning" categories shape the simplistic and inadequate design of the IIS, that is, the standardized and centrist nature of the international IS encapsulated in the Global Standard IS category.
The remaining three categories have direct interactions with the business domain:
•	Global Standard IS has the strongest interface with the business domain.
•	IS Initiative developed as a response to perceived inefficiencies in the international operations of the Co-op; the category also has a strong element of using the IIS as an instrument of central, head-office control.
•	IS by Force is the way in which the IT people resort to political power-play as a result of their inability to deal in a rational way with the business people's rejection of the global system.
In addition to the dynamics within each domain, the two domains between them set up a force field in the sense of Lewin (1952). In this force field the conflicting interests of the business and information technology functions play out in antagonistic and confrontational interaction. The force field interaction is dominated by the interplay between two key categories. The Global Standard IS category is the primary causal factor in the Rejection of the Global IS. However, both IS Initiative and IS by Force reinforce the rejection, albeit as a secondary influence. The forces acting in that field are of considerable magnitude and eventually engaged the opposing sides in a cycle of rejection and reaction, which in the end proved strong enough to stop the FIST project altogether.
The Global Standard IS as the root cause of these confrontations, however, has another, deeper cause. The inappropriateness of the FIST system's unbending standard design for all regions and local offices of the Co-op, without regard for their significant differences in size, business culture, markets or strategies, is an outcome of the FIST team's erroneous interpretation of the wishes of the CEO. They translated the thrust for increased global coordination (to achieve a global business strategy for the Co-op that better balances central control with regional freedom) as a return to full central control, regressing to the strategy stance the Co-op had taken in its earlier history. The regional business people correctly interpreted the FIST project as an attempt to roll back their autonomy, and resisted it strongly. Because they suspected the CEO of covertly backing this regressive move, they extended their resistance beyond the Rejection of the Global Information System to resistance also against the CEO's thrust towards more regional autonomy — albeit balanced by an equal amount of central control over global concerns such as branding and global product strategies.
The dynamics of the force field also have a further, second-order effect. Three of the main categories, namely Rejection of the Global Information System, IS Initiative and IS by Force, subsequently formed a cyclical cause-effect loop:
•	Rejection of the Global Information System intensifies the isolation of the IS people (expressed in the IS Initiative category).
•	This augments their tendency to implement the IS by Force, i.e., attempting to achieve by political means what they could not do by rational cooperation with the business people.
•	These political power plays, however, only serve to confirm and increase the Rejection of the Global Information System by the business people; this starts the cycle of rejection/isolation/politics all over again.
These cyclical interactions ran their course until eventually the CEO involved an outside element — the consultants — to provide an unbiased, third-party opinion. The cycle was then broken by the finding that FIST in its current form would probably never succeed in reaching its objectives.
Hans Lehmann is a management professional with some 25 years of business experience with information technology. After a career in data processing line management in Austria and South Africa, he worked for some 12 years with Deloitte's as an international management consultant. Mr. Lehmann specialized in the development and implementation of international information systems for multi-national companies in the financial and manufacturing sectors. His work experience with a number of blue-chip clients spans continental Europe, Africa, the United Kingdom, North America, and Australasia. In 1991, he changed careers and joined the University of Auckland, New Zealand, where he focuses his research on the strategic management of global information technology, especially in the context of the transformation to international electronic business. Mr. Lehmann has 57 refereed publications in journals and books and has spoken at more than 20 international conferences.
This case was previously published in F. Tan (Ed.), Cases on Global IT Applications and Management: Successes and Pitfalls, pp. 1-30, © 2002.
Chapter VIII
Better Army Housing Management Through Information Technology

Guisseppi A. Forgionne
University of Maryland, Baltimore County, USA
EXECUTIVE SUMMARY
The Department of the Army must provide its personnel with acceptable housing at minimum cost within the vicinity of military installations. To achieve these housing objectives, the Army often must enter into agreements for the long-term construction of onpost housing or the leasing of existing offpost housing. A decision technology system, called the Housing Analysis Decision Technology System (HADTS), has been developed to support the construction or leasing management process. The HADTS architecture is based on a combination of database, econometric, heuristic programming, mapping, and decision support techniques. Its deployment has enabled the Department of the Army to realize significant economic, management, and political benefits. Future enhancements, motivated by the challenges from the current system, promise to increase the power of HADTS and to further improve the Army’s ability to manage its housing assets.
BACKGROUND
The Department of the Army’s Corps of Engineers is responsible for housing personnel at, or near, division installations. For the past 20 years, the Corps’ Installation Management Office has administered the housing program. This office has a chief of housing, three functional managers, and a support staff of 10 technical specialists and five secretaries at its suburban Washington, DC (Fort Belvoir, Virginia) headquarters. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Better Army Housing Management Through IT 137
At headquarters, management plans housing policies, develops procedures to implement the plans, and then communicates the procedures to the housing managers at each Army installation. Such policies, procedures, and actions are audited by the Department of Defense, the General Accounting Office, and other government agencies for compliance with existing laws, regulations, and guidelines. Since audit reports can significantly influence available funding, the Army typically is very responsive to auditor suggestions on housing management policy and practice.
Installation housing managers collect data pertinent to the planning process, communicate the data through various information systems to headquarters, implement headquarters-developed procedures, and administer onpost assets. Traditionally, these installation managers have been given much discretion in exercising their responsibilities. Moreover, headquarters has relied heavily on installation managers' input in formulating housing policies, procedures, and practices.
Figure 1 gives the organizational chart relevant to Army housing management. Currently, managers in this organization control $55 billion worth of onpost housing assets. The annual budget is $12 million for managing these assets and the associated housing programs.
SETTING THE STAGE
At any Army installation, the projected supply of available government housing may be insufficient to meet the personnel demand expected at the site. Policy requires unaccommodated personnel to seek acceptable private rentals in the installation’s predefined Housing Market Area (HMA). If the expected stock of private rentals in the HMA will be insufficient to eliminate the onpost housing deficit, the Department of the Army will enter into agreements for the construction of onpost housing or the leasing of existing offpost housing. Government policy and regulations require the Army to economically justify any leasing request with a Segmented Housing Market Analysis (SHMA).
SHMA Process
During a SHMA review, installation housing managers first compute the onpost deficit and forecast the private rental stock available to meet military housing needs. Next, they estimate the military’s market share of the private stock and compute the number of adequate rental dwelling units available in the local market to offset any onpost deficit. The result is the gross military deficit, or the number of personnel that do not have adequate housing onpost or in the private market (Forgionne, 1992). The gross military deficit is reported by bedroom count (BC) for personnel in each of the 21 Army grades (ranks). There is a separate (grade by bedroom count or 21 x 6 = 126) matrix for unaccompanied (called UPH) and family (denoted AFH) personnel. Some cells in the housing deficit matrixes may show surpluses. In the interest of minimizing construction or leasing, Army policy is to offset deficits in other parts of the matrixes with these surpluses. Offsetting results in a final housing deficit, and this deficit becomes the basis for making construction or leasing requests.
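The arithmetic of the gross deficit can be illustrated with a small sketch. This is not the Army's SHMA code: the grades, bedroom counts, requirements, onpost assets, private stock and market shares below are all invented, and the sketch applies the full segment-level offpost availability to each cell rather than distributing it across grades and personnel categories as the real process would.

```python
# Hypothetical SHMA-style arithmetic: every figure below is invented for
# illustration. The real matrices are 21 grades x 6 bedroom counts,
# kept separately for unaccompanied (UPH) and family (AFH) personnel.
requirements = {("E5", 2): 120, ("E6", 3): 80, ("O3", 3): 40}   # projected housing need
onpost_assets = {("E5", 2): 60, ("E6", 3): 50, ("O3", 3): 45}   # government owned/controlled
private_stock = {1: 300, 2: 400, 3: 150}                        # adequate HMA rentals, by BC
market_share = {1: 0.10, 2: 0.08, 3: 0.05}                      # military share of each segment

def gross_deficit(grade, bc):
    """Requirement, less onpost assets, less the military share of private stock.
    Positive values are deficits; negative values are surpluses."""
    need = requirements.get((grade, bc), 0)
    onpost = onpost_assets.get((grade, bc), 0)
    offpost = market_share[bc] * private_stock[bc]   # simplification: not split across grades
    return need - onpost - offpost

for (grade, bc) in sorted(requirements):
    d = gross_deficit(grade, bc)
    print(f"{grade}/{bc}BR: {'deficit' if d > 0 else 'surplus'} of {abs(d):.1f}")
```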
Figure 1. Army housing management organization (organization chart: the Department of Defense, the Department of the Army, the Corps of Engineers and its Installation Management Division, and the Chief of Housing supported by secretaries and the Data Management, Operations Management and Policy Analysis functions; Installation Managers report into this structure and Government Auditors review it)
Data Management
Much of the relevant onpost data needed for the SHMA process are captured, stored, and can be retrieved through Army information reporting systems. However, the onpost data were not organized into the variables needed to perform the SHMA process. Required offpost data originally were collected, captured, and recorded manually, and in an often sketchy manner, during the SHMA process. Typical offpost data sources included banks, local realty boards, public utility commission reports and statistical abstracts, state statistical abstracts, and vendors of local housing market statistics.
The original SHMA review was supported by a spreadsheet computer program that received pertinent data, helped managers perform the offsetting computations, and reported the deficit results. There was little (if any) sharing of information between the Army information reporting systems and this spreadsheet program.
Report Writing
The spreadsheet used to support the SHMA process generated a series of reports on market conditions and on projected Army housing deficits for the specified installation. Deficits were reported by grade and by the groups of grades needed to conform with the housing construction and leasing categories specified by Department of Defense policies (Forgionne, 1991b). The original reports, however, did not give a detailed breakdown of the offsetting operations and computations. Such a categorization would: (a) be useful for Army officials seeking to evaluate alternative housing deficit reduction policies and (b) provide the policy implication information communicated by these officials to installation managers. While the original SHMA process projected the number of private rentals available to reduce onpost housing deficits, the system did not locate the rentals. The original process also did not display characteristics about the available rental properties. Such spatial and attribute data would greatly enhance the Army housing managers’ ability to implement leasing directives.
Auditor Concerns
In the Army’s original approach, the prices and quantities of rentals were determined separately. Government auditors noted that, in this approach, prices will not change to eliminate any imbalances between demand and supply in the private market. Consequently, the Army’s original approach can generate inaccurate projected quantities for some market segments. As noted by government auditors, the potential difficulty could be avoided by determining the prices and quantities that equate demand and supply (clear the market) for private rental housing in pertinent market segments. These market-clearing quantities represent the number of rentals that will be available to consumers (including military personnel) at market-clearing prices in the long run on average. Auditors also were dissatisfied with the Army’s estimation of market share — the ratio of personnel renting offpost to total private rental stock. The original SHMA process assigned a single subjectively projected Army share to all market segments and estimated the market-clearing quantities independently of this share. In practice, the Army’s share will be determined in conjunction with (rather than separately from) the market-clearing quantity and rent. Military personnel in distinct market segments (bedroom counts) likely will obtain different shares of the market-clearing quantities.
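The auditors' point can be illustrated with a toy market-clearing calculation. This is not the HADTS econometric specification: the linear demand and supply curves, their coefficients, and the share equation below are all assumptions chosen for illustration. For a segment, rent is set where demand equals supply, and the Army share is then evaluated at that market-clearing rent rather than applied as a fixed, independently estimated ratio.

```python
# Toy equilibrium for one rental segment; all coefficients are illustrative
# assumptions, not estimates from the HADTS econometric model.
def clear_market(a, b, c, d):
    """Linear demand Qd = a - b*rent and supply Qs = c + d*rent.
    Returns the market-clearing rent and quantity (where Qd = Qs)."""
    rent = (a - c) / (b + d)
    quantity = a - b * rent
    return rent, quantity

def army_share(rent, base=0.25, sensitivity=0.0002):
    """Hypothetical share equation: the military share falls as rent rises."""
    return max(0.0, min(1.0, base - sensitivity * rent))

# Two-bedroom segment (made-up coefficients).
rent, quantity = clear_market(a=900.0, b=0.5, c=100.0, d=0.3)
share = army_share(rent)
print(f"market-clearing rent ~ ${rent:.0f}, quantity ~ {quantity:.0f} units")
print(f"Army share ~ {share:.1%}, about {share * quantity:.0f} rentals available to personnel")
```

Determining the share together with the market-clearing rent and quantity, rather than separately, is the behaviour the auditors asked for.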
Technical Complexity
The complexity of the original SHMA process required installation housing managers to complete an arduous five-day training program that: (a) familiarized participants with the extensive SHMA documentation, (b) illustrated the complex and labor-intensive SHMA operations and computations, and (c) demonstrated proper documentation
procedures. After this training, the managers still had trouble completing the analysis accurately, and they complained constantly about the complexity of the original SHMA process. The technical complexities created many expenses. Training involved contractor fees, installation managers’ opportunity costs, and other expenses that totaled approximately $250,000 per five-day program. On average, another $50,000 was spent in performing the data gathering, economic analyses, offsetting computations, and reporting and documentation activities involved in the actual SHMA review.
Project Description
The technical, management, and organizational difficulties induced senior Army officials to seek ways of: (a) improving the SHMA’s economic analyses and (b) simplifying and automating key segments of the SHMA process. These officials sought support for the econometric analyses mandated by the government auditors, the complex computations involved in the SHMA, and the report writing necessitated by Department of Defense policies. A decision technology system, called the Housing Analysis Decision Technology System (HADTS), was developed to provide the desired support (Forgionne, 1991a). This system was developed iteratively, using the Adaptive Design Strategy (ADS), by two researchers working in conjunction with affected Army executives (senior housing managers).
HADTS Components
HADTS integrates the functions of a Geographic Information System (GIS), an Executive Information System (EIS), and a decision support system called the Housing Analysis System (HANS). The GIS is delivered through ATLAS/GIS software, while the EIS and HANS are delivered through the SAS System for Information Delivery. Figure 2 shows the relationships between these HADTS components. As Figure 2 illustrates, the GIS extracts Census data, creates Housing Market Area (HMA) maps, and displays user-specified market conditions on the maps. The EIS extracts HMA market conditions from the GIS and installation housing characteristics from Army information systems, captures the extracted data, and forms the database needed to perform the HANS analyses and evaluations. HANS utilizes econometric and heuristic programming models to compute the results displayed on the housing deficit reports.
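The GIS-to-EIS-to-HANS flow described above can be read as a three-stage pipeline. The sketch below is only a schematic of that data flow; the function names, field names and records are hypothetical and do not correspond to the actual ATLAS/GIS or SAS implementations.

```python
def gis_stage(census_records, hma_tracts):
    """GIS role: keep the Census records that fall inside the HMA (map drawing omitted)."""
    return [r for r in census_records if r["tract"] in hma_tracts]

def eis_stage(hma_market_conditions, installation_records):
    """EIS role: merge offpost market conditions with onpost housing data
    to form the database that HANS needs."""
    return {"offpost": hma_market_conditions, "onpost": installation_records}

def hans_stage(hans_database):
    """HANS role: run the econometric and heuristic models (stubbed here) and
    return the figures shown on the housing deficit reports."""
    return {"deficit_report": f"computed from {len(hans_database['offpost'])} offpost records"}

# Hypothetical end-to-end run
census = [{"tract": "0101", "median_rent": 520}, {"tract": "9901", "median_rent": 610}]
onpost = [{"grade": "E5", "units": 120}]
report = hans_stage(eis_stage(gis_stage(census, {"0101"}), onpost))
print(report)
```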
HADTS Architecture
To conserve resources and to meet the needs of the Department of the Army’s personnel, HADTS is made available through an easy-to-use computer system that can be readily used at headquarters, the major Army commands (MACOMs), or the military installations by nontechnical persons. Figure 3 gives a conceptual architecture of HADTS.
Inputs
Figure 2. Housing Analysis Decision Technology System (HADTS) (elements shown: Census data, GIS, HMA maps, Army systems, EIS, HANS database, HANS, housing deficit reports)

HADTS has a database that captures and stores spatial and attribute data for the HMAs and relevant onpost data. Spatial data includes the longitude and latitude coordinates that are used to draw the HMA maps and features on the maps, including city and census tract boundaries, bodies of water, highways, streets, the installation's location, and the location of available rental properties. Attribute data consists of: (a) the socio-economic variables needed to perform HANS's deficit analyses and (b) housing characteristics of interest to Army housing managers. The socio-economic variables include the HMA's total population, land area, average population age, average years of schooling, median house value, average travel time to work, median household income, total precollege school enrollment, average
family size, the number of males, total housing stock, and vacancy rate. Housing characteristics include median rents and rental housing quantities categorized by bedroom count. Relevant onpost data consists of the elements needed to estimate housing requirements, government owned and controlled assets, personnel renting offpost, installation populations, and effective military demand dollars (housing allowances).

There is also a model base that contains statistical procedures, location formulas, data conversion rules, the upgraded econometric model, and the upgraded deficit reduction heuristics. The statistical procedures are used to categorize attribute data within the HMAs and to calculate summary statistics for the economic variables and housing characteristics within the HMAs. Location formulas, proprietary within ATLAS/GIS, are used to convert U.S. Census Bureau TIGER (Topologically Integrated Geographic Encoding and Referencing) degrees into HMAs.

Data Conversion Formulas. Data from existing Army information systems are not in the format needed to perform HADTS's analyses and evaluations. Predefined rules are used to convert the raw Army data into the needed HADTS formats. One set of formulas converts the extracted socio-economic and military characteristic data into the variables needed for HANS's analyses, evaluations, and reports. Some of these variables become inputs into HANS's econometric analyses, and such analyses output (project) available private (offpost) housing by grade and bedroom count. A second set of formulas transforms the remaining variables into projected housing requirements and government owned and controlled (onpost) housing by grade and bedroom count.

Econometric Model. HANS's econometric model includes quantity and market share components. The quantity component uses equations formed by senior Army housing managers' judgment, economic housing theory (Blackley & Ondrich, 1988; Goodman, 1988; Turnbull, 1989), and regression analysis to forecast market-clearing supplies and rents for rental housing by bedroom count in the installation's HMA. Additional theory (Carruthers, 1989; Kaplan & Berman, 1988; Turnbull, 1988), judgment, and statistical methodologies developed the market share equations that forecast the Army shares of the market-clearing rental quantities. Multiplying the estimated market-clearing supplies by the predicted market shares gives the offpost rentals that will be available to Army personnel by category (unaccompanied or family), grade, and bedroom count. Army requirements less government owned and controlled assets less available offpost rentals give the unreconciled deficits/surpluses that can be anticipated by grade for each bedroom count (BC).

Deficit Reduction Heuristics. A heuristic programming model automatically reassigns (offsets) deficits and surpluses among BCs and grades in accordance with the latest DOD policies, rules, and regulations, first using available offpost rentals and then using government owned and controlled assets. Such computer-assisted assignments are designed to improve decision making for this complex managerial problem (Adelman, 1992; Benbasat & Nault, 1990; Silver, 1991).
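To make the arithmetic concrete: available offpost rentals equal the market-clearing supply multiplied by the Army market share, and the unreconciled deficit equals requirements less onpost assets less available offpost rentals, after which surpluses are offset against remaining deficits. The sketch below illustrates that logic for a single grade group; the numbers, the simple offset rule, and all names are assumptions for illustration, not the actual DOD reassignment policy encoded in HADTS.

```python
def unreconciled_deficits(requirements, onpost_assets, clearing_supply, army_share):
    """Deficit (+) or surplus (-) by bedroom count before any offsetting."""
    return {bc: requirements[bc] - onpost_assets[bc] - clearing_supply[bc] * army_share[bc]
            for bc in requirements}

def offset_deficits(deficits):
    """Illustrative offset rule: scan bedroom counts from largest to smallest and
    apply any surplus (negative value) to remaining deficits, smallest bedroom
    count first. A stand-in for the DOD reassignment policy, not the real rules."""
    result = dict(deficits)
    for bc in sorted(result, reverse=True):
        for target in sorted(result):
            if result[bc] < 0 and result[target] > 0:
                transfer = min(-result[bc], result[target])
                result[bc] += transfer
                result[target] -= transfer
    return result

# Hypothetical grade group, by bedroom count
raw = unreconciled_deficits(
    requirements={2: 300, 3: 260, 4: 90},
    onpost_assets={2: 120, 3: 200, 4: 80},
    clearing_supply={2: 400, 3: 100, 4: 10},
    army_share={2: 0.5, 3: 0.2, 4: 0.2},
)
print(raw)                   # {2: -20.0, 3: 40.0, 4: 8.0}
print(offset_deficits(raw))  # the 2-bedroom surplus offsets part of the 3-bedroom deficit
```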
Processing
The decision maker (a military housing executive or staff management assistant) uses computer technology to perform housing analyses and evaluations with HADTS's EIS, GIS, and HANS components. Currently, the system executes on an IBM-compatible 486 microcomputer with 8MB of RAM, a color graphics display, and a printer compatible with the microcomputer. It runs the SAS information delivery system and the ATLAS/GIS geographic information system through the OS/2 operating system. This configuration was selected because it offered a more consistent, less time-consuming, less costly, and more flexible development and implementation environment than the available alternatives.

By double clicking the HADTS icon on the OS/2 desktop, the user accesses a display with a welcome message, instructions, and a push-button link to the EIS. The EIS acts as a front end to HADTS's database management system (DBMS), GIS, and HANS (Dadam & Linnemann, 1989; Targowski, 1990; Wang & Walker, 1989). Once in the EIS, the user with the correct password can interactively identify, from screen icons and in sequence, the installation's Major Army Command (MACOM), the specific installation (fort), and the current year. These operations automatically subset the HADTS database, provide the data needed for further processing, and access the HADTS processing display. From this display, the user can select the type of processing desired: GIS (maps of the HMA) or HANS (inputs and reports).

Like many geographic information systems, HADTS's GIS organizes the collected spatial and attribute data (in vector format), captures the data, and stores the key offpost variables as a dBASE IV-based, DBF-formatted, relational database (Bruno, 1992; Fischer & Nijkamp, 1993; Franklin, 1992; Grupe, 1992a, 1992b; Huxhold, 1991). HADTS's GIS then structures the HMA maps, locates available rental housing on the HMA maps, and simulates socio-economic variables and housing characteristics within the HMAs. The system also enables the housing manager to interactively modify the HMA, display tabular statistical reports that summarize housing characteristics in the HMA, and print hard copies of the HMA maps and summary statistics.

By using the DBMS, the user can extract HMA market conditions from the GIS and installation housing characteristics from Army information systems, display the data, modify the displayed data, and store the SHMA-relevant information. The HANS component then utilizes the DBMS-generated onpost and offpost data to automatically forecast market conditions and Army housing supplies from the upgraded econometric model, perform the upgraded deficit reduction heuristics, and generate detailed reports of the results without human (manual) intervention.

As indicated by the top feedback loop in Figure 3, offpost and onpost data, reports, and maps created during HADTS's analyses and evaluations can be captured and stored as inputs for future processing. These captured inputs are stored as additional or revised fields and records, thereby updating the database dynamically. The user executes the functions with mouse-controlled point-and-click operations on attractive visual displays that make the computer processing virtually invisible (transparent) to the user.
Outputs
Processing automatically generates visual displays of the outputs desired by housing managers (Turban, 1993). Outputs include HMA maps and associated deficit
forecasts and reports. The maps define the boundaries of the HMA, give the road and street patterns, identify important landmarks, locate the military installation within the HMA, and highlight the locations of rental properties on the HMA roads and streets. Deficit forecasts project the corresponding market conditions and their effects on Army housing deficits/surpluses. The results are displayed as a series of summary and detailed housing reports. A summary report gives the final military deficit by grade group and bedroom count for the selected personnel category. Detail reports give summaries of the explanatory computations that justify the summary report. These intermediate descriptions list the HMA market-clearing rents and rental quantities by bedroom count and deficit computations by bedroom count and grade. Deficit computation reports include the offpost rentals that will go to Army personnel, Army housing requirements, available onpost housing, net deficits before reassignments, net deficits after reassignments, and the distribution of reassignments. As indicated by the bottom feedback loop in Figure 3, the user can utilize the outputs to guide further HADTS processing before exiting the system (Sengupta & Abdel-Hamid, 1993; Watson, Rainer, & Houdeshel, 1992). Typically, the feedback will involve sensitivity analyses in which the user modifies the HMA boundaries and observes the effects on market conditions or the user adjusts onpost variables and observes the effects on housing deficits.
CURRENT STATUS OF THE PROJECT
To ensure that the information system accurately replicated the inputs, HADTS's data conversion rules were tested against historical data for existing Army installations. In the testing, housing statistics displayed from the system were compared with the corresponding actual values. According to the results, HADTS reproduced the actual data exactly.

HADTS's econometric model was tested against Census data from the counties surrounding existing Army installations. In the testing, projected socio-economic variables from the quantity block were compared with the corresponding actual Census values. According to the results, the estimated equations predicted between 81.52% and 95.84% of the variance in the supply (quantity) data and between 42.77% and 80.92% of the variance in the rent data. Root mean squared (RMS) error percentages for the bedroom-count supply and rent equations ranged from 0% to 14.5572%, with most values less than 5%. Also, projected variables from the market share block were compared with the corresponding actual Army records. According to the results, the estimated equations predicted between 0.08% and 99.96% of the variance in the market share data. RMS errors ranged from 0.00003 to 0.0334 for all market share equations.

Hypothetical, but realistic, data on housing requirements, onpost (Army) assets, and private assets were used to test the final set of upgraded deficit reduction heuristics. In the testing, model-computed surpluses and deficits were compared to the values expected at each stage of the offsetting process. This testing revealed that the heuristic programming model always generated the correct surpluses and deficits at all offsetting stages.

Based on the test results, the Department of the Army decided to implement HADTS. The system has been in use for over a year. Results from the implementation
indicate that the decision technology system will have significant economic and management benefits, offer important lessons, and present key challenges for Army housing management.
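The validation statistics reported above (variance explained and RMS error) are standard goodness-of-fit measures. As a minimal sketch, using made-up actual and predicted values rather than the Army's test data, they can be computed as follows.

```python
import math

def variance_explained(actual, predicted):
    """R-squared: share of the variance in the actual series captured by the predictions."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def rms_error_percent(actual, predicted):
    """Root mean squared error, expressed as a percentage of the mean actual value."""
    rms = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
    return 100 * rms / (sum(actual) / len(actual))

# Hypothetical rental-supply test: actual Census counts vs. model projections
actual = [410, 380, 520, 455]
predicted = [400, 395, 510, 460]
print(f"variance explained: {variance_explained(actual, predicted):.2%}")
print(f"RMS error: {rms_error_percent(actual, predicted):.2f}% of the mean")
```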
SUCCESSES AND FAILURES
HADTS simplifies and automates the SHMA process, reduces training requirements to a minimum, decreases the volume of documentation, and increases computer processing efficiency. These substantial improvements have saved, and will continue to save, the Army approximately $5,860,000 per year in SHMA implementation, data management, and lease location costs and $1,024,200,000 in budgeted construction expenses. The Army spent about $250,000 to develop and implement the HADTS system that provides these economic benefits. Such economic gains are quite timely in light of federal budget restrictions and the Army's current direction of large-scale force reductions and base closures.

The HADTS system also provides management benefits. In the original SHMA process and in the manual search for lease locations, the nearly exclusive reliance on tedious manual procedures often resulted in inaccurate, incomplete, and redundant data collection. HADTS provides: (a) quicker analyses of the rental housing market and its impact on Army housing, (b) operationally and computationally error-free SHMA and lease-location processes, (c) more timely policy analyses and evaluations, (d) rapid sensitivity analyses of HMA boundary, market condition, and policy changes, (e) efficient flagging of data and information deficiencies, and (f) more effective evaluation of field-generated housing requests. These enhanced capabilities will enable Army officials to more efficiently and effectively manage the $55 billion in housing assets under their control.

The project identified data sharing, model management, and mapping limitations that present profound challenges to Army housing management. At the present time, there is limited data available to estimate the equations in the market share segment of the econometric model. These data limitations precluded a meticulous application of theoretical market share housing models and resulted in deficit forecasting anomalies for some Army installations. Needed data exist in other military information systems, and computer programs have been written and embedded within HADTS to perform the database updating when the appropriate government agencies work out technical data sharing arrangements.

As new data become available, variables may be added to or deleted from, and parameters may change in, the original formulations (Billman & Courtney, 1993; Cook, 1993; West & Courtney, 1993). The typical HADTS user will not have the technical expertise to perform the consequent econometric analyses needed to update the system without considerable assistance. Senior Army officials have commissioned another project that will embellish HADTS to perform the updating tasks automatically without human (manual) intervention.

Maps are displayed and the corresponding offpost housing data are generated through ATLAS/GIS, while the offpost data are utilized within SAS to create deficit reports. Using two different software tools (SAS and ATLAS/GIS) reduces computer processing efficiency and increases system maintenance requirements. These difficulties can be alleviated by replacing the ATLAS/GIS tool with SAS/GIS when it becomes available.
EPILOGUE AND LESSONS LEARNED
Lessons were learned (and are still being learned) from HADTS implementation and development. Improved accuracy in housing deficit projections is important to the Army and society. Underestimating deficits can leave Army personnel inadequately housed, lower soldier morale and family well-being, and jeopardize military preparedness. Unnecessary housing construction can waste scarce natural resources and, in the process, alienate local residents, environmentalists, and other interest groups. Reducing housing deficit projection errors can help the Department of the Army avoid these undesirable consequences.

Since the original spreadsheet model had a limited quantity component and no market share component, the housing manager was left with an incomplete understanding of the data requirements of the SHMA process. As a result, managers often collected and captured irrelevant and redundant data. The upgraded econometric model identifies all offpost and onpost data relevant to the housing forecast process, and the HADTS system provides a mechanism that facilitates data entry while reducing errors and providing accurate, reliable, and consistent data.

Partially because of government auditors' concerns, there has been significant movement within the armed services to standardize the processes of requesting and locating rental housing. Top-level policy makers realize that all the services use similar processes, and HADTS's success may convince them that these processes can be substantially enhanced with decision technology support. Consequently, the Army's HADTS-supported SHMA and lease location processes can be offered as the standards for all the armed services.
REFERENCES
Adelman, L. (1992). Evaluating decision support and expert systems. New York: Wiley.
Benbasat, I., & Nault, B. R. (1990). An evaluation of empirical research in managerial support systems. Decision Support Systems, 6(1), 203-226.
Billman, B., & Courtney, J. F. (1993). Automated discovery in managerial problem formulation: Formation of causal hypotheses for cognitive mapping. Decision Sciences, 24(1), 23-41.
Blackley, P., & Ondrich, J. (1988). A limiting joint-choice model for discrete and continuous housing characteristics. The Review of Economics and Statistics, 70(2), 266-274.
Bruno, L. (1992). GIS maps out ways to access data in RDBMSes. Open Systems Today, 12(11), 50-54.
Carruthers, D. T. (1989). Housing market models and the regional housing system. Urban Studies, 26(3), 214-222.
Cook, G. J. (1993). An empirical investigation of information search strategies with implications for decision support systems. Decision Sciences, 24(3), 683-697.
Dadam, P., & Linnemann, G. (1989). Advanced information management (AIM): Advanced database technology for integrated applications. IBM Systems Journal, 28(4), 661-681.
Fischer, M. M., & Nijkamp, P. (Eds.). (1993). Geographic information systems, spatial modeling, and policy evaluation. New York: Springer-Verlag.
Forgionne, G. A. (1991a). Decision technology systems: A step toward complete decision support. Journal of Information Systems Management, 8(4), 34-43.
Forgionne, G. A. (1991b). HANS: A decision support system for military housing managers. Interfaces, 21(6), 37-51.
Forgionne, G. A. (1992). Projecting military housing needs with a decision support system. Systems Research, 9(2), 65-84.
Franklin, C. (1992). An introduction to geographic information systems: Linking maps to databases. Database, 15(4), 12-15.
Goodman, A. C. (1988). An econometric model of housing price, permanent income, tenure choice, and housing demand. Journal of Urban Economics, 23(5), 327-353.
Grupe, F. H. (1992a). A GIS for county planning: Optimizing the use of government data. Information Systems Management, 9(2), 38-44.
Grupe, F. H. (1992b). Can a geographic information system give your business its competitive edge? Information Strategy, 8(3), 41-48.
Huxhold, W. E. (1991). An introduction to urban geographic information systems. Oxford: Oxford University Press.
Kaplan, E. H., & Berman, O. (1988). OR hits the heights: Relocation planning at the Orient Heights housing projects. Interfaces, 18(6), 14-22.
Sengupta, K., & Abdel-Hamid, T. K. (1993). Alternative conceptions of feedback in dynamic decision environments: An empirical investigation. Management Science, 39(4), 411-428.
Silver, M. (1991). Decisional guidance for computer-based decision support. MIS Quarterly, 18(3), 105-122.
Targowski, A. (1990). The architecture and planning of enterprise-wide information management systems. Harrisburg: Idea Group Publishing.
Turban, E. (1993). Decision support and expert systems: Management support systems (3rd ed.). New York: Macmillan Publishing Company.
Turnbull, G. K. (1988). Market structure, location rents, and the land development process. Journal of Urban Economics, 23(2), 261-277.
Turnbull, G. K. (1989). Household behavior in a monocentric urban area with a public sector. Journal of Urban Economics, 25(6), 103-115.
Wang, M., & Walker, H. (1989). Creation of an intelligent process planning system within the relational DBMS software environment. Computers in Industry, 13(3), 215-228.
Watson, H. J., Rainer, R. K., & Houdeshel, G. (Eds.). (1992). Executive information systems: Emergence, development, impact. New York: Wiley.
West, L. A., & Courtney, J. F. (1993). The information problems in organizations: A research model for the value of information and information systems. Decision Sciences, 24(2), 229-251.
Appendix. Questions for Discussion
1. Why did the Army develop HADTS in an iterative manner, using the ADS rather than the traditional system development life cycle approach? Be specific about the user, organization, and other technical factors that led to this strategy choice.
2. What decision support principles have been applied in the development and implementation of HADTS? Be specific about the dialog, model, and data management characteristics of the system.
3. Why is the Army's system referred to as a decision technology, rather than decision support, system? Be specific about the phases of decision making being supported and how HADTS delivers the required support.
4. What future internal information technology management challenges will be created by the deployment of HADTS? Be specific about the effect of the system on organizational culture, the span of management control, and organizational structure.
5. Will there be any political challenges for the Army and other armed services created by the deployment of HADTS? Be specific about funding, interservice rivalry, and auditing requirements.
Guisseppi A. Forgionne is professor of information systems at the University of Maryland Baltimore County (UMBC). Professor Forgionne holds a BS in commerce and finance, an MA in econometrics, and an MBA and a PhD in management science and econometrics. He has published 20 books and approximately 100 research articles and consulted for a variety of public and private organizations on decision support systems theory and applications. Dr. Forgionne also has served as department chair at UMBC, Mount Vernon College, and Cal Poly Pomona. He has received several national and international awards for his work.
This case was previously published in J. Liebowitz & M. Khosrow-Pour (Eds.), Cases on Information Technology Management in Modern Organizations, pp. 19-31, © 1997.
Chapter IX
Success in Business-to-Business E-Commerce:
Cisco New Zealand's Experience
Pauline Ratnasingam, University of Vermont, USA
EXECUTIVE SUMMARY
The growth of business-to-business e-commerce has highlighted the importance of computer and communications technologies and trading partner trust for the development and maintenance of business relationships. Cisco Systems Incorporated, an international company, is now the second largest company in the world, behind Microsoft. Its solid financial performance is partly due to its early focus on the Internet as a channel to cut administrative costs and boost customer service satisfaction. Cisco International provides end-to-end networking solutions, which customers use to build a unified information infrastructure of their own or to connect to someone else's network. The end-to-end networking solutions provide a common architecture that delivers consistent network services to all users (Cisco Fact Sheet, 2000). Cisco network solutions connect people, computing devices and computer networks, allowing trading partners to access or transfer information without regard to differences in time, place or type of computer system. By using networked applications over the Internet and on its own internal networks, Cisco is gaining at least NZ$825 million a year globally in operating cost savings and revenue enhancements (Cisco Newsroom, 2001). Cisco is today the world's largest Internet commerce site and sees financial benefits of nearly US$1.4 billion a year, while improving customer/partner satisfaction and gaining a competitive advantage in areas such as customer support, product ordering and delivery times (Cisco Fact Sheet, 2000). Cisco International serves customers in three large markets, namely:
1. Enterprises, including large organizations with complex networking needs, usually spanning multiple locations and types of computer systems. Enterprise customers include corporations, government agencies, utilities and educational institutions.
2. Service providers, including companies that provide information services, such as telecommunication carriers, Internet service providers, cable companies and wireless communication providers.
3. Commercial companies with a need for data networks of their own, as well as connection to the Internet and/or to business partners.
Cisco International (Cisco's headquarters) in San Jose, California, USA, has well over 225 sales and support offices in 75 countries. Cisco International wants New Zealand businesses to embrace the Internet and use it to be more efficient. The company worked with the NZ government on its e-commerce implementation plans at the summit held in late 2000. One of the aims of this forum was to encourage small and medium enterprises (SMEs) in NZ to go online. Cisco NZ receives direction from its headquarters in San Jose, which monitors a global networked business model. A global networked business model includes an enterprise, of any size, that strategically uses information and communications to build networks of strong, interactive relationships with all its key constituencies. The global networked business model leverages the network for competitive advantage by opening up corporate information to all key trading partners and employs a self-help model of information access, which is more efficient and responsive than the traditional model. The traditional model consists of a few information gatekeepers dispensing data as they see fit. The global networked business model is based on three core assumptions:
1. The relationships an organization maintains with its key constituencies can be as much of a competitive differentiator as its core products or services.
2. The manner in which a company shares information and systems is a critical element in the strength of its relationships.
3. Being "connected" is no longer adequate. Business relationships and the communications that support them must exist in a "networked" fabric.
Hence, by simplifying network infrastructures and deploying a unifying software fabric that supports end-to-end network services, organizations are learning how to automate the fundamental ways they work together.
Cisco NZ claims that the success of e-commerce depends on well-planned partnerships, mutual goals and trust. Cisco NZ’s philosophy is to listen to their trading partners’ requests, monitor all technological alternatives and provide customers with a range of options from which to choose. Thus, Cisco’s experience in e-business has set the standard for e-commerce transformation and creating Internet solutions. This teaching case focuses on Cisco’s experience with their trading partner, Compaq NZ, and the findings contribute to strategies on how businesses can succeed in e-commerce participation.
SETTING THE STAGE
Cisco International operates one of the most profitable commerce sites in the world, with $7 billion in transactions annually, approximately 20% of total global e-commerce revenues. Its Internet commerce applications have yielded more than $30 million annually in cost savings to the company (Cisco Fact Sheet, 2000). A lack of the required skills, expertise and knowledge of the full potential of e-commerce applications has created a lack of trust. Consequently, barriers to participation in e-commerce activities arise due to uncertainties inherent in the current e-commerce environment. These uncertainties, in turn, create a perception of increased risk, thereby inhibiting the tendency to participate in e-commerce, particularly among top management. Uncertainties reduce confidence both in the reliability of business-to-business transactions transmitted electronically and, more importantly, in the trading parties themselves (Hart & Saunders, 1997). Furthermore, the increased complexity of today's networking equipment requires trading partners to focus and develop an expertise around a specific technology (Cisco Newsroom, 2001). Despite the assurances of technological security mechanisms, businesses in New Zealand perceive that e-commerce transactions may be both insecure and unreliable. Similarly, preliminary research suggests that a perceived lack of trust in e-commerce transactions sent by trading parties using the Internet could be a possible reason for this slow adoption rate (Keen, 1999). Cisco NZ claims that many of these companies are not using the Internet to its potential, as they are frightened by what it means and by the potential costs involved. Hence, Cisco International (Cisco's headquarters in San Jose, California, USA) used relevant New Zealand case studies to show how businesses can use the Internet for competitive advantage. For example, ASB Bank e-solutions (a virtual joint venture between Telecom, EDS and Microsoft, and Xtra) presents a set of case studies as part of the program (Info-Tech, 2000). In 1996, Cisco International embarked on an ambitious campaign to bring its largest customers into the fold, and in the process introduced Cisco Connection Online (CCO), which is part of the global networked business model, to Cisco in New Zealand. One of CCO's biggest accomplishments may be the way it has been taken up by Cisco NZ trading partners. Employees within Cisco NZ have access to information and tools that allow them to do their jobs more proficiently, and prospects have ready access to information that aids in purchasing decisions. Trading partners have ready access to a variety of information and interactive applications that help them sell more effectively. Cisco NZ's gold trading partner Compaq NZ would rather have an automated way to tie its legacy purchasing or sales automation systems to Cisco NZ than have its purchasing agents use Cisco NZ's Web site manually. Hence, the global networked business is an open, collaborative environment that transcends the traditional barriers to business relationships and between geographies, allowing diverse constituents to access information, resources and services in ways that work best for them. Thus, Cisco International is not only the worldwide leader in networking, having supplied over 80% of the Internet backbone equipment, but is also a leading example of a global networked business, leveraging its IT and network investments and integrating them with core business systems and operational information to better support its prospects, customers, partners, suppliers and employees (Cisco Fact Sheet, 2000).
CASE BACKGROUND AND DESCRIPTION
Cisco International moved 76% of its orders online, which is equivalent to $28.1 million daily (Cisco Fact Sheet, 2000). It is of course difficult to separate how much of this gain can be attributed directly to the better ways of doing business provided by e-commerce applications versus the gains that have come from the growth of trading partner trust relationships. Figure 1 demonstrates Cisco's history of growth towards e-business. In 1984 Cisco NZ made an initial start, and in 1994 a Local Area Network (LAN) and a Wide Area Network (WAN) were introduced. This put them ahead of other international organizations, and by 1998, Cisco was an end-to-end solutions provider. In 1999, Cisco NZ implemented a single network architecture for data, voice and video. One main reason for this success is top management commitment, which provided the encouragement and financial resources. Cisco NZ claims that it is important to develop an e-commerce strategy that complements the corporate strategy. By viewing Internet commerce as a strategic business tool, organizations can support their overall business objectives and profitability goals. Cisco NZ's e-commerce extranet application, called "Cisco Connection Online" (CCO), was implemented at Cisco International, San Jose, California, USA. Cisco Connection Online provides direct access to manufacturing systems, so that channel partners (trading partners) can track inventories, engineering changes, shipment status and other information in near real time. Cisco Connection Online also provides access to product and marketing information, software downloads and sales tools. The process involves detecting faults and allowing Cisco's trading partners to download product, equipment and pricing information. Through an automated checking mechanism, 80% of the fault activities were reported on the Web.
Figure 1. Cisco's growth in e-business (Cisco's sales presentation): 1984, a Stanford start with 2 employees; 1986, first router (the AGS) shipped; 1990, IPO; 1994, LAN switching, remote access and WAN switching introduced; 1998, 23 acquisitions completed, end-to-end solutions provider; 1999, $12 billion, 34 completed acquisitions, a single network architecture for data, voice and video, and 20,000 employees.
This online ordering application (CCO) dramatically reduced the costs of sales, distribution, marketing and administration, thereby contributing to savings in administrative costs, telephone calls and delays in responding to queries. The primary business transactions include purchase orders for equipment, delivery and product information. Eighty percent of the nontechnical support questions were answered online through CCO's convenient, self-service applications. As a result, customer satisfaction increased, as customers have immediate round-the-clock access to richer, more precise service and support information (Cisco, 2000). Other secondary elements of CCO included ordering of equipment, delivery and the ability to check lead times and track orders. Cisco NZ's trading partners have shown an ability to be competent channel system integrators using Cisco computer and communication equipment, thus contributing to competence trading partner trust. Figure 2 demonstrates the functions and processes embedded within Cisco Connection Online. The diagram exhibits an online ordering process. Trading partners can check the pricing and configuration of an order even before placing the actual order. Thus, improved order accuracy from interactive, Web-based applications with built-in rules and access to current pricing, product specifications, selection/configurations and other information ensures the submission of complete and accurate orders. A more efficient and accurate online ordering tool reduces much of the traditional cycle time for handling requisitions and purchase orders, thereby decreasing delivery times to customers and business partners. Cisco NZ claims that 70-80% of Cisco's business involves e-commerce. The annual monetary value from e-commerce transactions is NZ$17-34 billion (US$8-16 billion), contributed from 2.5 million e-commerce transactions per annum. Cisco NZ perceives its organization to engage in long-term business investments with its trading partners (Cisco Fact Sheet, 2000).
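The "built-in rules" described above amount to validating an order's configuration against product constraints before it is submitted. The sketch below is only an illustration of that idea; the product names, rules and data structures are hypothetical and are not drawn from the actual CCO/IPC implementation.

```python
# Hypothetical configuration rules: each base product lists required and incompatible add-ons
RULES = {
    "router-2500": {"requires": {"power-supply-ac"}, "incompatible": {"module-oc48"}},
    "switch-6500": {"requires": {"supervisor-engine"}, "incompatible": set()},
}

def validate_order(base_product, add_ons):
    """Return a list of rule violations; an empty list means the order can be submitted."""
    rule = RULES.get(base_product)
    if rule is None:
        return [f"unknown product: {base_product}"]
    problems = [f"missing required item: {item}" for item in rule["requires"] - set(add_ons)]
    problems += [f"incompatible item: {item}" for item in rule["incompatible"] & set(add_ons)]
    return problems

print(validate_order("router-2500", ["module-oc48"]))
# ['missing required item: power-supply-ac', 'incompatible item: module-oc48']
```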
Figure 2. Supporting tools and infrastructure of Cisco Connection Online (Cisco sales presentation): Cisco's full-service Internet commerce implementation, with tools grouped as Before You Order, Order (the Internetworking Product Center), After You Order, Service and Warranty, Other Commerce Tools, and Infrastructure Items, covering lead times, pricing, configuration, order placement, order status, service orders and invoicing.
Figure 2 shows the infrastructure behind Cisco Connection Online. For example, the ordering application is only one part of CCO. There is a vast portfolio of customer service and technical support applications that include a software download tool kit and various troubleshooting engines that allow customers to enter problem descriptions, in plain language, into a database. CCO is Cisco NZ's industry-leading online support and information service, available 24 hours a day, seven days a week. CCO provides its trading partners with a wealth of up-to-date information, with hundreds of new documents being added or updated each month. This service is the basis of Cisco's philosophy of moving beyond traditional business barriers, which aims to:
• Make all of Cisco's information, services and support available to its global customers, partners and employees, on demand;
• Deliver faster problem response;
• Improve user productivity; and
• Significantly lower the cost of doing business.
Thus, implementing CCO can quickly lead to the following compelling benefits:
• Increased revenues;
• Increased customer and employee satisfaction;
• Reduced operating costs; and
• Improved productivity of employees, customers and channel partners.
CCO provides Internet-based technical support, resolving 200,000 telephone calls globally per day without any human intervention. This saves Cisco International $2 million each month. The most significant elements of CCO are the online trading partner support service and networked commerce. The training and support section provides trading partners with online self-help guided assistance. Registered trading partners can log on anytime to access various tools from the company's databases, from intricate details about a particular product or networked environment to bug fixes and software updates. A key feature of CCO is its tight and secure integration with Cisco's intranet, which spans 150 locations worldwide. For example, in the software library section, customers can find the upgrades and utilities they need, generating over 16,000 downloads per week. In addition, there are innovative tools that help customers locate the exact information, fixes or troubleshooting tips that they are looking for. Almost half of all trading partners' queries in the U.S., including those from companies like GE and Sprint, are now handled through the Cisco International Web site, which is connected via intranets to Cisco NZ. This helps Cisco NZ avoid a backlog of telephone-based support calls. According to Cisco NZ, 60 to 70% of all inquiries that come into CCO result in users finding an answer. Cisco NZ's participant indicated that "without CCO, my staff would have to be about three times larger to handle the same workload". The Internetworking Products Center (IPC) is accessible only to Cisco's registered direct customers and channel partners, and takes the difficulties out of
ordering configurable products by providing an intuitive, easy-to-understand interface. These applications provide order status information, pricing and configuration details. Customers can see the purchase order numbers, order date, expected shipment date and shipping carrier without calling Cisco NZ's office. They can even link directly to the Federal Express parcel tracking Web site from within CCO to find out exactly where their order is: whether it is in the warehouse, on the truck, on the plane or at the receiving dock. This type of precise order tracking prevents all kinds of possible billing and shipping problems and provides accurate proof of delivery. It makes communication between Cisco NZ and their trading partners clearer, thereby building trading partner satisfaction and improving trading partner trust relationships. Cisco NZ is a small-to-medium-sized organization with 20 employees in the Wellington branch. Their reach is international and the product line is data and communication. Cisco International has a sales volume of US$480 billion, with 25,000 employees worldwide, but Cisco NZ, established seven years earlier, has only 20 employees. Compaq NZ is a large company with 300 employees and their reach is both national and global. They are one of the channel partners (that is, the buyer) of computer equipment and data communication parts from Cisco NZ (the supplier). Compaq has five branches in New Zealand, one each in Wellington, Auckland, Christchurch, Hamilton and Dunedin. Compaq NZ also obtains direction from its headquarters in the USA. Compaq NZ supplies computer systems and provides computer services (that is, they are system integrators). Compaq NZ sells computers, undertakes systems integration, application software, hardware, networks and databases, and develops database application systems. The network sales specialist indicated that "Compaq NZ's main role is to manufacture computer systems, integration parts and provide computer services". Table 1 summarizes the background information of the Cisco NZ-Compaq NZ inter-organizational dyad. The next section discusses the impact of trading partner trust in business-to-business e-commerce inter-organizational relationships.
Theories of Trust in Business Relationships
Previous scholars who examined trust in business relationships have identified trust to be a key factor for successful long-term trading partner relationships (Ring & Van de Ven, 1994). For example, trust has been found to increase cooperation, thus leading to communication openness and information sharing (Cummings & Bromiley, 1996; Doney & Cannon, 1997; Morgan & Hunt, 1994; Ring & Van de Ven, 1994; Smith & Barclay, 1997). Furthermore, Granovetter (1985) suggests that the density and cohesiveness of social networks within relationships influence the evolution of trust. Table 2 demonstrates antecedent trust behaviors and characteristics from previous research that paved the way to the development of three types of trading partner trust.
Competence Trust
Competence trust emphasizes the trust in trading partners' skills, technical knowledge and ability to operate business-to-business e-commerce applications correctly. Trading partners who demonstrate skills and ability in producing high-quality goods and services, such as timely delivery of accurate information to other trading partners, help maintain their supply chains and make strategic decisions that achieve high levels of competence trust (Mayer et al., 1995).
Table 1. Background information of the Cisco NZ-Compaq NZ inter-organizational dyad

Demographic items | Cisco Limited in NZ | Compaq Limited in NZ
Number of participants that responded in this study | 4 | 6
Size of organization | SME | Large
Number of employees | 14 | 300
Main role of organization | Supplier and manufacturer-service organization | Buyer
Type of industry and product line | Computer and communications | Computer and communications (system integrators)
Type of e-commerce application | Extranet application developer (Cisco Connection Online, CCO) | Logs into Cisco's extranet application (CCO)
Number of years using Cisco Connection Online | 4 years, since late 1996, with prior trading partner relationships with Compaq NZ | 4 years, since late 1996, with prior trading partner relationships with Cisco NZ
Types of business-to-business transactions | Purchase orders for equipment and delivery; information from the Web sites; ordering of equipment | Purchase orders for equipment and delivery; tracking information from Cisco's Web site
Number of trading partners (using CCO) | 20 | 50, but only Compaq uses CCO
Thus, competence trust develops into an economic foundation, and perceived benefits such as savings in cost and time from the accurate transfer of e-commerce messages are achieved. Alternatively, a lack of competence trust may lead to additional costs, as trading partners need to spend time training and educating themselves, in addition to re-sending the same transaction correctly again. Cisco participants indicated that the competence trust of their trading partners was low. One possible explanation for this is that: "we have some trading partners who can perform competently, and some who cannot, but have to learn the painful way in terms of wasted time, and costs, as they need to re-send the same order twice...Some of our trading partners do not have the underlying fundamental knowledge to place complete and correct orders, thereby lacking the intellectual horsepower required to undertake the ordering process, which is a complex one".
Table 2. Different types of trading partner trust

Source | Competence trust (economic foundation) | Predictability trust (familiarity foundation) | Goodwill trust (empathy foundation)
Gabarro (1987) | Character, role competence | Judgement | Motives/intentions
Mayer, Davis, and Schoorman (1995) | Ability | Integrity | Benevolence
McAllister (1995) | Cognitive | Cognitive to affective | Affective
Lewicki and Bunker (1996) | Deterrence/calculus | Knowledge | Identification
Mishra (1996) | Competence | Reliability | Openness, care, concern
Trading partners will need to become familiar with Cisco's products, which demands an ability to configure the orders correctly and completely. Fortunately, Cisco NZ's Internet Business Solutions Group (IBSG), located in Sydney, Australia, provides technical support and handles all queries relating to technical clarifications (Info-Tech, 2000).
Predictability Trust
Predictability trust emphasizes the trust in trading partners' consistent behaviors, which provide sufficient knowledge for other trading partners to make predictions and judgments based on past experiences (Lewicki & Bunker, 1996). McAllister (1995) suggests that we choose whom to trust and under what circumstances. This choice is cognition based (interpersonal trust), drawing on past measures of trust such as reliability and dependability. Perceived benefits include trading partners' satisfaction and the information sharing developed from competence trust. Thus, a chain of consistent positive behaviors creates a foundation of familiarity, which makes trading partners appear reliable, predictable and therefore trustworthy. Alternatively, opportunistic behaviors, such as using an imbalance of power to increase the price of goods or to demand high-quality services, undermine this trust. A Cisco participant considered "predictability trust in our trading partners to be high, as over time consistent behaviors in their ability to place orders online was observed". This contributed to two types of loyalty/trust: first, Compaq NZ's end customers relied on Compaq as system integrators, and second, Compaq NZ became dependent on Cisco NZ's products. By showing consistent behavior in its business interactions (such as providing fast responses to queries, fixing problems and answering inquiries on orders and pricing), Cisco NZ was able to develop predictability trust. Trading partners could check prices, request a discount if necessary and electronically obtain an estimated time of arrival before even confirming an order, thereby enabling them to make strategic decisions.
Goodwill Trust
Goodwill trust emphasizes the trust in trading partners' care, concern, honesty and benevolence, which allows other trading partners to further invest in their trading partner relationships, thus leading to a foundation of empathy (Mayer et al., 1995). When reliability and dependability expectations are met, trust moves to affective foundations that include emotional bonds, such as care and concern. Goodwill trust is characterized by an increased level of cooperation, open communication, information sharing and commitment, thus leading to increased e-commerce participation. Perceived benefits, such as long-term investments and building the reputation of trading partners, are achieved from goodwill trust. Alternatively, an absence of goodwill trust may lead to termination of trading partner contracts and, in some cases, a bad reputation among trading partners. Cooperation determines an organization's willingness to collaborate and coordinate its activities in an effort to help both organizations achieve their objectives, and is defined by the degree to which trading partners cooperate to reach their objectives and make their relationship a success. Cooperation among trading partners reduces conflict, increases communication and enhances trading partner satisfaction (Anderson & Narus, 1990). Cisco NZ's trading partner contracts last between three and five years, and by engaging in long-term trading partner relationships, trading partners are able to increase their volume and dollar value of e-commerce transactions, thereby yielding high profits and achieving satisfaction. Satisfaction in turn increased Compaq NZ's level of commitment, as they were able to realize strategic benefits. Hence, well-planned partnerships with established goals help to build trust, which is central to building long-term trading partner relationships, as it reduces the need for the extensive control safeguards and paper trails normally absent in e-commerce linkages (Dwyer, Schurr, & Oh, 1987; Ganesan, 1994; Morgan & Hunt, 1994). Compaq NZ was willing to share information regarding the amount of stock they would require for an advance period of time, as they were aware of the estimated arrival dates of the goods they ordered. This enabled Compaq NZ to inform their end customers, thus fulfilling their business promises and building their reputation. Hence, the cyclical process of developing trading partner trust contributed to Cisco NZ's reputation, as Compaq NZ continued to order Cisco products. Cisco NZ's accounting manager stated that "our trading partners typically tell us that doing business with us is better than doing business with our competitors", due to the high-quality services that Cisco NZ provides and the reputation that Cisco International holds. For example, the Compaq NZ e-commerce coordinator stated, "We believe that Cisco NZ staff had the ability to do their job, as they are the 'pros' and they know what they are doing...Their IT support people are excellent, very responsive, timely and professional". Figure 3 demonstrates Cisco's growth compared to its competitors. It can be seen that Cisco International has invested in Internet business solutions since 1994, and by 1999 the gap between Cisco International and its competitors was almost $10 billion (Cisco Fact Sheet, 2000). A Compaq NZ participant defined trust as follows: "the information Compaq NZ divulges to their trading partners must be kept confidential, and their trading partners must in turn treat them equally as other business partners".
Cisco NZ has a big commitment to making things work. Communicating via e-commerce transactions is not the issue; what is more important is handling the business management issues, which are directly related to the trading partners. For example, giving someone else a piece of price information that will affect the privacy of business information is a concern.

Figure 3. Cisco's growth in line with its competitors (Cisco sales presentation slides): Cisco's breakaway through Internet business solutions (source: Bloomberg; dates are calendar years; numbers are annualized from the most recent reports).
Therefore, another Compaq NZ participant indicated that "trust refers to both meeting the operational technical needs, and more importantly the needs of the trading parties themselves". Cisco International saves up to US$800 million in costs per year from online ordering, as there is no need to re-key the same order information that was entered by their trading partners (Cisco, 2000). Training is given on how to use the extranet application, thus leading to savings in time and costs from reduced error rates and improved accuracy of the information exchanged. For example, Cisco NZ's error rate of 80% in 1997 had been reduced to less than 15% by 2000. The provision of real-time, online tracking information to trading partners via the Cisco Connection Online applications has contributed to additional savings in time and costs. These benefits further contributed to an increase in e-commerce participation. For example, competence trust derived from the efficiency benefits of Cisco Connection Online, which concentrated on reducing transaction costs through the speed and automation of e-commerce applications. Economic benefits over a period of time led to positive, consistent behaviors from Compaq NZ, thus leading to personal benefits. Personal benefits include improved customer service, product quality, satisfaction, improved productivity and profitability, as costs were no longer spent on fixing errors, thereby leading to competitive advantage and strategic benefits.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
The findings of the case studies contributed to increased awareness of the importance of trust, thereby helping organizations to design more effective strategies,
trading partner agreements and partnering/relationship charters. Cisco NZ claims that it is important to build and maintain positive trading partner trust relationships in order to increase e-commerce participation. Hence, in order to remain competitive, businesses must have a strategy for sales and support over the Internet. In today's hypercompetitive global marketplace, pressure is increasing from customers and shareholders to provide easy-to-use online applications as a better way to conduct business. For most organizations, the biggest challenge is not if or when to consider an Internet commerce solution, but rather how to select the best Internet commerce strategies and tactics to develop and sustain competitive advantage. The following e-commerce strategies contributed to Cisco NZ's success.
Well-Planned Partnerships
Cisco NZ’s trading partners were chosen on the basis of their reputation, and by replicability as channel distributors. By contrast to many technology companies, Cisco NZs does not take a rigid approach that favors one technology over the alternatives and imposes it on their trading partners as the only answer. Cisco NZ’s philosophy is to listen to their trading partners’ requests, monitor all technological alternatives and provide trading partners with a range of options from which to choose. For Cisco’s e-commerce strategies to work, it was important for Cisco NZ to have good relationships with its suppliers, factories and the companies that deliver Cisco’s products, as well as their customers. In addition, Cisco NZ embraces the global networked business model, which aims to implement innovative tools, systems and share information with diverse company stakeholders. Thus, Cisco NZ’s shared commitment with their strategic alliances to deliver solutions and services was designed to help deliver a customer-centric, total solution approach to solve problems, exploit business opportunities and create sustainable competitive advantage for their trading partners. This shared commitment helped deliver solutions, products and services, together with applications on systems integration and best practices that made Cisco’s trading partners successful as globally networked organizations in the new economy.
Online Support
Cisco NZ has invested in regional technical assistance centers based in San Jose, Sydney, Belgium and North Carolina. Technical support for complex Cisco NZ products is still delivered by people via telephone and computer. Cisco NZ's Internet Business Solutions group, based in Sydney (Australia), collaborates with trading partners to design and build applications and networks that optimize their e-business software.
Trading Partner Satisfaction
Cisco NZ achieves 100% order accuracy by doing everything online. Errors were inevitable when Cisco NZ had human involvement in the ordering process: trading partners (such as Compaq NZ) would often submit orders for products that might not work, causing delays and eroding customer satisfaction. Now they can go to the Web site, configure the products online and know exactly what they are ordering; if a configuration is invalid, the system recommends the right one. Asking trading partners to do their own ordering
might appear to be an erosion of service, but the opposite is true. Trading partners have full control from the moment they place an order with Cisco NZ. They can even trace order shipments online, through Web links to Cisco’s delivery partners such as Federal Express.
Best Business Practices
Cisco NZ’s best business practices included strategic marketing to gain a complete analysis and understanding their trading partners’ behavior. For example, their experience with Compaq NZ initiated with a number of negotiations which led to capacity planning that applied just enough infrastructure and policies to optimize capital expenditures. The Cisco Connection Online system consists of a highly reliable, scalable and distributed architecture, which delivers Internet audio, voice and text applications to any telephone. The Web included user-friendly browsers and open infrastructures that enabled content and application providers to develop e-commerce. Specifically, the Web gives Cisco NZ a vehicle through which trading partners can find out about products and buy, along with an automated support system that can reach a larger audience. A certification program further assures functionality of Cisco NZ technologies and full interoperability of the device in heterogeneous networks; Cisco’s network infrastructure was also implemented in order to ensure assurance. Furthermore, fraud management using real-time detailed user profiling to spot unauthorized users, non-billed use and excessive data-storage problems. Thus, effective trust and security-based mechanisms were imposed, in order to protect the confidentiality, integrity, authenticity, nonrepudiation and availability of business-to-business e-commerce transactions.
Universal Standards
Cisco NZ develops its products and solutions around widely accepted international industry standards, and abides by the standards and procedures set by its headquarters. In some instances, technologies developed by Cisco International have become industry standards themselves. Cisco NZ describes this change as the global networked business model and refers to a globally networked business as an enterprise, of any size, that strategically uses information and communications to build a network of strong interactive relationships with all its key constituencies. In addition to its worldwide leadership in networking for the Internet, Cisco is a global leader and industry benchmark for Internet commerce. Hence, Cisco NZ is on the cutting edge of using networks to leverage its business relationships.
Increased Employee/Trading Partner Productivity and Satisfaction
Cisco NZ’s online systems were designed to be user friendly, easy to use and interactive. Their intranet Web-based e-commerce applications enabled more efficient processes for conducting transactions online with their trading partners. By automating many of the routine and administrative sales, order and customer service functions, employees are able to improve their productivity and focus on more challenging and rewarding interactions. Online tools allow trading partners to access support and order information online, rather than through a Cisco sales representative. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Cisco NZ’s experience in e-commerce has set the standard for e-business transformation, creating Internet solution leading practices in the transformation of core processes, thus building their reputation. Cisco NZ’s established technology architecture and leadership utilizes intelligent network services to offer a complete end-to-end solution. Cisco has partnered with the best of breed application providers and system integrators (Compaq NZ), who have proven experience building e-business applications and solutions. This e-commerce pioneer can show other companies how to plan, execute, manage and improve their infrastructures for Web-based sales. The online ordering applications dramatically reduced the cost of sales, distribution, marketing and administration. Cisco Connection Online gave Cisco NZ higher gross margins and happier customers. Hence, it was confirmed by Cisco NZ participant that the success of e-commerce depends on well-planned partnerships, and the need for effective Internetworking. The teaching case developed for this study introduces some questions relating to inter-organizational relationships in business-to-business e-commerce participation.
REFERENCES
Anderson, J. C., & Narus, J. A. (1990, January). A model of distributor firm and manufacturer firm working partnerships. Journal of Marketing, 54, 42-58.
Cisco. (2000). The global networked business: A model for success. Retrieved from http://www.cisco.com/warp/public/756/gnb/gnb_wp.htm
Cisco Fact Sheet. (2000). Cisco Systems is the worldwide leader in networking for the Internet. Retrieved from http://www.cisco.com/warp/public/750/corpfact.html
Cisco Newsroom. (2001). Cisco Systems announces cost-cutting measures: Measures address changes in the global economy.
Cummings, L. L., & Bromiley, P. (1996). The organizational trust inventory (OTI): Development and validation. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 302-320). Thousand Oaks, CA: Sage Publications.
Doney, P. M., & Cannon, J. P. (1997, April). An examination of the nature of trust in buyer-seller relationships. Journal of Marketing, 61, 35-51.
Dwyer, R. F., Schurr, P. H., & Oh, S. (1987). Developing buyer-seller relationships. Journal of Marketing, 51, 11-27.
Gabarro, J. (1987). The dynamics of taking charge. Boston: Harvard Business School Press.
Ganesan, S. (1994, April). Determinants of long-term orientation in buyer-seller relationships. Journal of Marketing, 58, 1-19.
Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American Journal of Sociology, 91(3), 481-510.
Hart, P., & Saunders, C. (1997). Power and trust: Critical factors in the adoption and use of electronic data interchange. Organization Science, 8(1), 23-42.
Info-Tech Weekly. (2000, February 14). Cisco chalks up $64m in sales a day.
Info-Tech Weekly. (2000, August 23). E-business stories.
Keen, P. G. W. (1999). Electronic commerce: How fast, how soon? Retrieved from http://strategis.ic.gc.ca/SSG/mi06348e.html
Lewicki, R. J., & Bunker, B. B. (1996). Developing and maintaining trust in work relationships. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 114-139). Thousand Oaks, CA: Sage Publications.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24-59.
Mishra, A. K. (1996). Organizational responses to crisis: The centrality of trust. In R. M. Kramer & T. R. Tyler (Eds.), Trust in organizations: Frontiers of theory and research (pp. 261-287). Thousand Oaks, CA: Sage Publications.
Morgan, R. M., & Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. Journal of Marketing, 58, 20-38.
Ring, P. S., & Van de Ven, A. H. (1994). Developing processes of cooperative interorganizational relationships. Academy of Management Review, 19, 90-118.
Smith, J. B., & Barclay, D. W. (1997). The effects of organizational differences and trust on the effectiveness of selling partner relationships. Journal of Marketing, 61, 3-21.
Pauline Ratnasingam is an assistant professor in the School of Business Administration, The University of Vermont, Burlington, Vermont. Before that she was a lecturer at the Victoria University of Wellington, New Zealand. Her PhD dissertation examined the importance of inter-organizational trust in electronic commerce participation (the extent of e-commerce adoption and integration). Her research interests include business risk management, electronic data interchange, electronic commerce, organizational behavior, inter-organizational relationships and trust. She has published several articles in this area in conferences and refereed journals. She is an associate member of the Association for Information Systems (AIS).
This case was previously published in F. Tan (Ed.), Cases on Global IT Applications and Management: Successes and Pitfalls, pp. 127-145, © 2002.
Chapter X
Call to Action:
Developing a Support Plan for a New Product

William S. Lightfoot
International University of Monaco, Monaco
EXECUTIVE SUMMARY
The CEO of a large United States-based manufacturer was angry. A service and support plan was not in place for a new line of electronic controls that was a critical part of the company’s growth plans. This product was also the first jointly developed product since a French competitor acquired the CEO’s firm three years earlier. The executive team was looking for 500% sales growth over the next five years, and had made it clear that everybody’s job was on the line if the team failed to produce. The product management and marketing team was given the specific challenge to develop a plan for service, support, and training. They had less than 90 days to review the current situation, and to then develop and begin to implement a new plan that centered on managing the flow of information between a number of key stakeholders.
ORGANIZATION BACKGROUND
The U.S. company was founded in the early 20th century and quickly became a leading supplier of electrical switches for commercial, industrial, and residential customers. It established a strong network of sales channels, which allowed the company to compete against other large industrial manufacturers of electrical distribution and industrial control equipment. By the early 1990s, revenue had reached US$1.6 billion, and the company had a leading market position for low voltage electrical distribution equipment including circuit breakers and safety switches, and a strong number two position for industrial controls.
The company was less successful in selling electronic controls such as programmable logic controllers (PLCs) and energy management systems (EMSs). The PLC business never achieved more than a 10% market share, and the EMS business was phased out in the mid-1980s. Despite these struggles, it was clear that electronic controls were becoming more important to customers and were going to put the core business at risk. An initial analysis led the company to enter the rapidly growing market for electronic variable speed controls (typically referred to as AC Drives). A manufacturing facility in South Carolina was chosen to design and produce this new product line.

By 1985, the company's family of AC Drives had been launched, and the sales force was having success selling them to the commercial construction market. Most of these applications were for air handling or pumping facilities. These applications fit the South Carolina facility's expertise in manufacturing heavily engineered products, and complemented the company's leading position as a supplier of electrical distribution products that were typically bought and installed by electrical contractors, the company's primary customer for most products. All support, service, and training were managed by the AC Drives team at the South Carolina facility.

By the late 1980s, the company's AC Drives business had reached close to $12 million in sales, which represented approximately 2% market share in the U.S. To further quantify the opportunity for electronic controls in general, and AC Drives specifically, the executive staff commissioned a market research study focused on the overall sales potential for the products over the next five to 10 years. From this study, it became clear that if the company failed to invest properly in the electronic controls business, it would put its core business at risk, as customers were beginning to switch to electronic devices in all aspects of their operations because of their ability to improve production processes or reduce energy consumption.

Most of the company's development in recent years had focused on cost reduction, quality improvements, and minor innovations related to core products, which had average life cycles of more than 25 years. The consultant's study highlighted the need for the company to invest in products that typically had three- to five-year life cycles, which meant the time to develop, launch, and replace products would be greatly compressed. This also meant that the window for achieving a respectable return on investment was compressed. Fearing an overall decline in sales and profits if they ignored the consultant's recommendations, the executive staff decided to pull together a team of experienced AC Drives personnel who would be located in the recently relocated Industrial Controls Division headquarters in North Carolina. This meant that the company now had manufacturing, service, and project engineering in South Carolina, and development engineering, application engineering, and product management in North Carolina.
STAGNANT SALES AND DELAYED DEVELOPMENT
By the early 1990s, the company's sales of AC Drives had stagnated at $12 million per year, and the new product under development was still a long way from being commercially viable. As the economy began to slow down, the decision was made to cut back on the marketing efforts, which
led to the redeployment of many of the personnel outside of the AC Drives group. Within a year, a French electrical giant with a strong presence globally, but only a small position in the U.S. market, acquired the company. This gave the French firm immediate access to one of the largest sales channel networks for its products in North America. The product lines it manufactured included a family of AC Drives, which had total sales of over $100 million globally per year.

After several months of planning, the company and its French parent began the process of consolidating products. As the U.S. company's new product was now over a year late in being launched, the decision was made to cancel the U.S. development project and to create the first joint development project between the U.S. and French development teams. Once again, a new AC Drives business unit was formed that pulled together most of the internal AC Drives experts into a single team located in North Carolina. This team focused initially on introducing a new product that the French team had developed. It quickly became a hit with the sales force: just three months after launch, the product was selling at a rate that represented a 600% increase over a similar U.S.-designed AC Drive.

Because of the rapid growth, it quickly became clear that the direct sales force and sales channels were not well prepared to support the aggressive growth the management teams were looking for. AC Drives require extensive pre- and post-sales engineering support, and the typical sales office was only able to answer basic questions about product features; most customers required a higher level of support. This led to an increase in the volume of calls to the support teams in South Carolina and North Carolina. With no increase in head count, the already low response rate dropped further, which led to an increase in complaints from field sales people.

In an effort to increase the level of support, the director of the business promoted one of his product managers to marketing manager, and reassigned several other people to work for him in supporting the field sales force. The new marketing manager was also assigned responsibility for developing a product launch plan for the new, jointly developed product line, focusing primarily on product positioning, promotional tools, and pricing. It was with this in mind that the marketing manager and team set about preparing for their part of the update meeting with the CEO.
SETTING THE STAGE
When the CEO flew to his company’s Industrial Control headquarters for the update meeting, he was aware that the marketing efforts for the French-designed small drive had been very successful. He also knew that complaints were increasing from the field sales organization about the AC Drives team’s ability to support customers and channels. After two hours of listening to an update on the project, he turned to the gathering of executives and managers with a red face and a look of anger in his eyes. “Where is the service and support plan? How are you going to handle calls from our channel partners, and what are you going to do if a customer needs service? We are barely six months from a critical product launch and it is clear to me that we don’t have a plan.” All eyes turned to the director of the Electronic Controls Business who quickly turned to his recently appointed marketing manager and asked him for an update. Squirming in his chair, the marketing manager answered that they had started working on a plan and would have it ready by launch. The CEO was not happy. “I want an update in 60 days, with a detailed Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
plan for how we will handle both pre- and post-sales support.” The meeting adjourned. Feeling his fast-track career was on the brink of falling apart, the marketing manager quickly met with the rest of the Electronic Controls Business team to review the meeting and to make some quick decisions about how to pull together the plan the CEO wanted in such a short period of time.
CASE DESCRIPTION
At the time of the meeting with the CEO, the AC Drives team was busy preparing for the impending product launch. The product specification called for a smaller, less costly, and more sophisticated AC Drive than any of the leading products on the market. The feeling was that if the launch date held, the company would have a 12- to 18-month product advantage over all its competitors, which they hoped would position them as one of the top five competitors in the U.S., out of an estimated 100 competitors (Figure 1). In the longer term, both the U.S. and French executives were hoping a successful launch would enable the company to become one of the top three competitors in the U.S., with close to 20% market share. Thus the meeting with the CEO was considered an important benchmark for determining the level of support they could expect as the team looked to grow the business. Instead, the meeting became a call for action, exposing major issues that, if unresolved, would lead to further restructuring, reductions in support, and the end of several careers within the company.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
At the time of the meeting, a field sales person had several choices on where to turn for AC Drives technical support. Because an AC Drive is a software-controlled power electronic device, mathematical calculations often needed to be performed, along with a review of the operating environment, prior to bidding a project, to ensure that the appropriate AC Drive was selected. Most of the company's sales people and channel partners were
Figure 1. U.S. growth plans (chart of expected market growth over six years, showing market size, company sales, and percent market share by year)
Figure 2. Organizational chart showing reporting relationships (CEO/President; VP Marketing and Sales; VP Operations; VP Controls; Plant Mgr., S. Carolina; Dir., Electronic Controls; Engineering Manager; Product Managers; Marketing Manager; Project Manager; Application Manager)
not well trained to sell these products, and therefore had to depend on the company's manufacturing and business unit teams for support. The problem was that there were at least two possible locations for pre-sales support: South Carolina and North Carolina. Even though the recent restructuring meant that all of the pre-sales support personnel reported to the same director, there was no consistent, single point of contact. For post-sales start-up and troubleshooting assistance it was even worse, as the field could call South Carolina, North Carolina, or the field service organization, which did not have a call center and had only a few service people who could provide assistance on AC Drives.

The marketing manager knew that support was an Achilles' heel for the organization. Because of the many organizational changes over the past five years, the facilities in South Carolina and North Carolina had different approaches to handling information. The application team in South Carolina had created its own database that was maintained by one of the application engineers and updated periodically (i.e., whenever he had the time). The team in North Carolina did not have a centralized database; instead, the members depended on the design of their workspace to encourage communication and collaboration. Semi-weekly meetings were held to discuss specific problems and to deliver training that would help the team improve its performance.

By the time of this case, the two teams operated on a day-to-day basis as two separate entities. Location often determined how decisions were made, with the South Carolina team often deciding based on the operational needs of the plant, and the North Carolina team deciding based on the revenue needs of the marketing division to which they were attached. On top of this, senior management had begun to experiment with matrixed reporting relationships (Figure 2). Effectively, the application manager in South Carolina formally reported to the director of the business in North Carolina, but was matrixed to the higher-ranking and more politically connected plant manager in the South Carolina plant.
To further complicate things, the information technology team was primarily focused on developing a comprehensive system (called Quote to Cash) that managed the order flow from quotation to final payment. Most of its resources were tied up in this enormous task. Local IT support was largely limited to low-level personnel with minimal experience, and since budgets were tight and focused on helping "get business", most managers would not seek outside support.

Given the charge to come up with an action plan in 60 days, the marketing manager immediately met with the director, the rest of the Electronic Controls team, and the vice presidents of the Industrial Controls Division to determine the current state, and also to seek support and guidance for accomplishing the task. The first thing the VPs did was obtain a 30-day extension, so the team now had 90 days to pull a plan together. Next, the marketing manager delegated day-to-day marketing support to three of his direct reports. He would be available as needed for key account activity and other major initiatives, but he needed to focus on developing the support plan as laid out by the CEO.

The next step was to pull together a cross-functional team of front-line employees who had the experience necessary to assess the current state. Using process-mapping skills developed through a major total quality management initiative, the team was able to determine the primary issues that prevented sales people, channel partners, and ultimately customers from getting the support they needed, when they needed it. In addition to confirming the inconsistent support from multiple locations, the current-state analysis revealed that all engineered orders, pre-sale application questions, product questions, and many marketing and post-sales service questions were being directed through the South Carolina team. This largely had to do with the fact that for close to 10 years, this team had supported all AC Drive activity. The North Carolina team had been restructured several times, so the sales people did not know whom to call there. Because the South Carolina team had lost head count over the years to support the changes in North Carolina, its responsiveness had decreased. For complex questions or decisions, all calls were directed through one person: the manager of the group, a technically astute, opinionated, and hard-nosed employee who had been with the company for nearly 25 years. The manager also reviewed all of the output from most of his team. With the exception of one person, his team was made up of recent engineering graduates from local universities.

Once the flow of inbound calls was mapped, it became clear that a primary bottleneck was the group manager in South Carolina (Figure 3). The team in South Carolina had been the only stable organization since the acquisition by the French, and as it was located in the same facility where the company's drives were manufactured, it was logical that the sales force depended on it. But as the call volume increased, its ability to respond to the sales force decreased. When the current-state analysis was performed, the team was answering less than 25% of the calls it received each month. On top of this, all calls were logged manually by whoever answered the phone or checked the voicemail; this person then decided on the priority of the call and the action that he or she would take as a result.
Finally, the telephone system was integrated with manufacturing and had no additional capacity to support the increasing demand. After further analysis by the IT department, it was determined that the systems in North and South Carolina could not be linked together easily or economically. It was becoming clear what the major issues were.
Figure 3. Current state of sales support (abridged). Sales inquiries went to two locations. The South Carolina team (approximately 1,200 calls per month) handled application review, order entry assistance, software studies and technical support, with all responses reviewed by the application manager and logged in a central database before further processing (quotation, scheduling production, reporting, field service). The North Carolina team handled marketing support, negotiation assistance and product support, with no central database, no review, and responses sent directly. Key issues: (1) many phone numbers; (2) individual databases; (3) lack of clarity in responsibility; (4) multiple locations; (5) approximately 2,400 calls per month with a ~60% response rate.
Figure 4. Current and expected call volume

                                  Current        Business plan (2 years)   Business plan (5 years)
Planned revenue                   $15,600,000    $30,000,000               $72,000,000
Total call volume                 2,500          4,000                     6,000
Live body answer rate             25%            83%                       95%
Total response rate               60%            98%                       99%
Calls responded to per month      1,500          3,920                     5,940
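The figures in Figure 4 are internally consistent if "calls responded to per month" is read as total call volume multiplied by total response rate. A minimal Python check (the scenario labels come from the figure; the calculation itself is an illustrative reading, not something described in the case):

```python
# Rough check of the call-volume figures in Figure 4:
# "calls responded to per month" = total call volume x total response rate.
scenarios = {
    "current": {"volume": 2_500, "rate": 0.60},
    "2-year plan": {"volume": 4_000, "rate": 0.98},
    "5-year plan": {"volume": 6_000, "rate": 0.99},
}

for name, s in scenarios.items():
    responded = s["volume"] * s["rate"]
    print(f"{name}: {responded:,.0f} calls responded to per month")
# current: 1,500   2-year plan: 3,920   5-year plan: 5,940 (matching Figure 4)
```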
Once the North Carolina team had mapped out the current state, they began to determine the current call volumes into all AC Drives personnel, regardless of location, and estimated the volumes they would need to support a business five to six times the size in sales volume (Figure 4). Including all of the products manufactured by the company and the French parent, total sales for the AC Drives business had reached $15.6 million. Plans called for doubling the sales volume within the next two years, with further ambitious growth planned over the next five years. The
executive staff believed that the main bottleneck impeding growth lay with the service and support. After reviewing the current state at their weekly review meeting, the marketing manager turned to his team and said, “What do we do next? We have less than 45 days to develop and begin the implementation of a new support plan. Our general manager has just told me he committed us to an April 1 launch date. The review with the CEO is in two weeks. It is now February 15. What are we going to do?”
THE PROPOSAL
Because of the urgency, the focus was on implementation. An idea had been brewing in the marketing manager's mind for several weeks about how to pull the plan and the team together. It was related to the Quote to Cash ("Q2C") initiative that the CEO had long championed, which was intended to streamline support for the sales organization through the use of an integrated information system. The main focus of Q2C was on developing a system that would enable the organization to capture and guide information seamlessly using one integrated system. The system would allow the sales force to quote customers, place orders, communicate with the manufacturing facilities, track shipments, and close out the billing, which would ultimately reduce the time it took for customers to pay for the products they received. The initial analysis of the AC Drives support efforts had shown significant flaws in the way information was processed, and ultimately in the telecommunications and organizational infrastructure. Whatever the team recommended, if it was going to be enthusiastically supported by the CEO, it had to be seamless, logical, and responsive. Furthermore, it had to be a vision that all of the AC Drives team members could support. Thus, Quote to Quote ("Q2Q") was born.
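The case does not describe how Q2C was actually implemented; purely as a sketch of the idea of one integrated pipeline from quotation through payment, the order lifecycle can be modeled as an ordered sequence of stages (the stage names below are illustrative assumptions):

```python
from enum import Enum, auto

# Illustrative only: the case does not describe Q2C's actual design.
# The "quote to cash" idea is sketched as one ordered pipeline of stages
# that an order moves through inside a single integrated system.
class Q2CStage(Enum):
    QUOTED = auto()
    ORDERED = auto()
    IN_MANUFACTURING = auto()
    SHIPPED = auto()
    INVOICED = auto()
    PAID = auto()

PIPELINE = list(Q2CStage)

def advance(stage: Q2CStage) -> Q2CStage:
    """Move an order to the next stage; PAID is terminal."""
    i = PIPELINE.index(stage)
    return PIPELINE[min(i + 1, len(PIPELINE) - 1)]

stage = Q2CStage.QUOTED
while stage is not Q2CStage.PAID:
    stage = advance(stage)
    print(stage.name)   # ORDERED, IN_MANUFACTURING, SHIPPED, INVOICED, PAID
```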
THE Q2Q PROCESS
The focus of Q2Q was on supporting customers for life: from initial quote, to order entry, to fulfillment, payment, and ultimately to the next quote. This vision supported an ongoing relationship with customers, as it emphasized the support provided by the AC Drives team as a critical part of developing and maintaining customer relationships. To offer a high level of support, it was determined that three critical requirements had to be put in place. First, the AC Drives team needed to operate as a single support organization, offering pre- and post-sales support, available via one phone call. Second, the technical service team needed to become an integrated member of the support network so that information and responsibility could be transferred seamlessly between it and the AC Drives team. This meant developing an information system that linked all critical support personnel seamlessly. Third, the direct sales force and channel partners needed to be better educated so that they could serve as the first line of support on AC Drives. All personnel in the field sales offices and at channel partners involved in selling AC Drives would need to be trained. To accomplish the first step of integrating the drives team, the desired state was mapped out (Figure 5) by team members in South Carolina, North Carolina, and the field sales and service organizations.
Figure 5. Desired state. All sales inquiries (approximately 4,400 calls per month) would go to a single Q2Q team in North Carolina, organized into a Product Support Group (PSG), an Application Support Group (ASG) and a Marketing Support Group (MSG), covering application review, order entry assistance, software studies, technical support, marketing support, negotiation assistance and product support, with a shared Q2Q support database feeding further processing (quotation, scheduling production, reporting, field service). Key characteristics: (1) one phone number; (2) a centralized database; (3) clear responsibility; (4) one location; (5) 4,400 calls per month with a ~80% response rate.
The process of creating the desired state clarified the need for more support personnel. It also identified the different types of calls and how they should be routed. Finally, the need for a centralized database that would capture all contact information was identified. From this analysis, a new organization called the Marketing Support Team was created, with three groups focused on different parts of the support process.

The first group, called the Product Support Group (PSG), was the front-line support team that answered the majority of the incoming phone calls for both pre- and post-sales support. It was determined that a total of six full-time support people would be required, including at least three new personnel.

The second group, named the Application Support Group (ASG), was to be made up of the team from South Carolina. A review of their activity revealed that they were spending a lot of time clarifying orders (Figure 6). Clarifying orders was part of the manufacturing group's responsibility, but this team had supported it for almost a decade. Because more pre-sales support was needed, the ASG transferred one team member to manufacturing along with the order clarification responsibility. By freeing the ASG from order clarification, team members could focus on other pre-sales requests, which would lead to an increase in responsiveness to the sales force and, ultimately, an increase in sales. It was also determined that the ASG should reside in North Carolina as part of the overall support team, since they were often working with the same sales people and customers at various stages in the Q2Q process, and would possess critical information necessary to maintain a high level of support.

The final group was called the Marketing Support Group (MSG). This group had a critical role in launching new products, managing product promotions, supporting key accounts, and providing customer input into the marketing, engineering and operational processes. The plan was to use members of the current product marketing team, as they had a good blend of technical knowledge and experience in working with the sales organization.
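One way to picture the "one phone number, one centralized database" model with the three groups described above is a minimal routing sketch. The routing keywords and record fields are illustrative assumptions; the case does not describe the system the team actually built.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative sketch of "one phone number, one centralized log, three groups".
# The routing keywords and record fields are assumptions, not the real design.

@dataclass
class SupportCall:
    caller: str
    topic: str
    routed_to: str = ""
    received_at: datetime = field(default_factory=datetime.now)

CENTRAL_LOG: List[SupportCall] = []   # the single shared database of all contacts

def route(call: SupportCall) -> SupportCall:
    topic = call.topic.lower()
    if any(k in topic for k in ("application", "harmonic", "software study")):
        call.routed_to = "ASG"   # Application Support Group (pre-sales engineering)
    elif any(k in topic for k in ("launch", "promotion", "key account")):
        call.routed_to = "MSG"   # Marketing Support Group
    else:
        call.routed_to = "PSG"   # Product Support Group (front line, default)
    CENTRAL_LOG.append(call)
    return call

route(SupportCall("Field sales, Atlanta", "Harmonic study for a pumping application"))
route(SupportCall("Channel partner", "Order status and product features"))
for c in CENTRAL_LOG:
    print(c.routed_to, "-", c.topic)
```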
Figure 6. Analysis of activity: support provided by the South Carolina team, based on 240 man-hours per week (bar chart of the percentage of time spent per major category: order clarification, application review, quotation, negotiation, harmonic study, service assistance, sales calls, energy savings, and other, with order clarification consuming a large share of the team's time)
The desired-state analysis also identified additional areas for training and highlighted the need for new software tools that would enable the field sales force to perform some of the functions this team supported. For example, the harmonic studies and energy savings calculations, while a relatively small portion of the support team's daily activity, were considered critical and valuable tools that helped salespeople close orders. For a $50,000 investment, a software tool was developed that enabled the field sales force to perform most of the calculations themselves. This resulted in more widespread use of these tools and was directly responsible for close to $300,000 worth of business within the first three months of its release to the field sales force.

The second key element was to establish a way to handle post-sales technical service, which could include installing, starting up, troubleshooting, and repairing AC Drives. There were two potential organizations within the company that could fulfill this role. The first was the existing technical service division. Because the drives business had never been very large, and the overall demand for service therefore small, only five field service representatives had been trained to service them. These five individuals were also expected to service other electronic switches, which meant that they were often unavailable when a drive service call came in, leading to further responsiveness issues. The second internal service group was part of the uninterruptible power supply ("UPS") business on the West Coast. This team worked on devices that were technologically similar to an AC Drive, and it was also proactive in working with customers and the sales force on service-related issues. After visits with both groups a few weeks after the meeting with the CEO, it was decided to use the group from the UPS division.
Their leader was enthusiastic about the opportunity and was eager for his organization to become better integrated with the rest of the company. Two AC Drives team members were immediately assigned to work with the service team on developing the plan for post-sales service. Their charge was to develop a plan that was seamlessly integrated with the marketing support team in North Carolina, which meant developing an integrated information and telecommunication system with clear processes and procedures. The service team also assigned several key personnel to work with the Q2Q team from North Carolina.

The third part of the plan involved training the direct sales force. Working with the manager of training for the Industrial Controls Division in North Carolina, the roles and responsibilities of the people involved in the pre- and post-sales support process were used to identify the critical knowledge they required to support their customer base. From this analysis, a curriculum was developed that involved seven different types of training and three levels of knowledge. The training department then put together the curriculum and training schedule for the next two years. Alternative delivery approaches, including Web-based training, teleconferencing, and audio and videotapes, were all considered when developing the overall training plan.
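The case does not describe the calculations inside the $50,000 tool mentioned earlier; energy-savings estimates of this kind for AC Drives commonly rest on the pump/fan affinity-law approximation, in which shaft power varies roughly with the cube of speed. A minimal sketch, with all input values as illustrative assumptions:

```python
# Rough energy-savings estimate for running a pump or fan on an AC Drive at
# reduced speed, using the affinity-law approximation (power ~ speed cubed).
# All input values are illustrative assumptions, not data from the case.

def annual_savings(motor_kw: float, speed_fraction: float,
                   hours_per_year: float, usd_per_kwh: float) -> float:
    """Estimated annual savings versus running the motor at full speed."""
    reduced_kw = motor_kw * speed_fraction ** 3      # affinity-law approximation
    saved_kwh = (motor_kw - reduced_kw) * hours_per_year
    return saved_kwh * usd_per_kwh

# Example: a 75 kW air-handler fan run at 80% speed for 6,000 hours a year
# at an assumed electricity price of $0.07/kWh.
print(f"${annual_savings(75, 0.80, 6_000, 0.07):,.0f} per year")   # about $15,000
```

Real installations deviate from this ideal (static head, partial-load losses), so such a tool would typically present results as estimates rather than guarantees.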
THE FOLLOW-UP MEETING
The Q2Q team had worked hard to develop a detailed plan. All processes were mapped, and all procedures written for each role. By the time of the meeting, steps had already been taken to identify a new telecommunications and information system, job descriptions had been rewritten, new measurements identified, and a new organizational structure put in place. All that was needed was the blessing of the CEO to move forward on full implementation of the plan.

The marketing manager entered the meeting confident, but nervous, as the CEO was known to have a quick temper. The marketing manager started the meeting by reviewing the current status of the business, then discussing the Q2C program, and finally, introducing the Q2Q vision of the AC Drives team. The CEO seemed interested, but simply nodded as the plan was rolled out. For the next two hours, the entire service and support process was mapped out, indicating the key measurements and showing how these metrics tied to the AC Drives, Industrial Control Division, and corporate goals. A core part of the plan involved the restructuring and relocation of the AC Drives team to one central location in North Carolina. Included in the budget were moving costs, plus a breakdown of the changes in headcount for all affected organizations and the overall impact on the AC Drives budget for the coming year. After two hours, during which the CEO remained quiet, the marketing manager wrapped up: "I am confident that with your support of this plan, we can exceed our growth plans over the next three years." He held his breath. Whatever future he had with the company, it would be defined by the response the CEO gave at this moment.
William S. Lightfoot, PhD, is a professor at the International University of Monaco, and the director of the Monaco MBA. He has more than 15 years of experience in marketing and management with global corporations as well as small organizations. He has
taught in the U.S., Canada, and Europe, and his research interests include global tactical leadership, disintermediation, and the point of contact between buyers and sellers.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 406-417, © 2004.
Chapter XI
Success and Failure in Building Electronic Infrastructures in the Air Cargo Industry: A Comparison of The Netherlands and Hong Kong SAR

Ellen Christiaanse, University of Amsterdam, The Netherlands
Jan Damsgaard, Aalborg University, Denmark
EXECUTIVE SUMMARY
Reasons behind the failure and success of large-scale information systems projects continue to puzzle everyone involved in the design and implementation of IT. In the airline industry in particular, very successful passenger reservation systems have been built that have totally changed the competitive arena of the industry. On the cargo side, however, attempts to implement large-scale community systems have largely failed across the globe. Air cargo parties are becoming increasingly aware of the importance of IT, and they understand the value that inter-organizational systems (IOS) could provide for total value chain performance. However, whereas in other sectors IOSs have been very successful, there are only fragmented examples of successful global systems in the air
cargo community, and the penetration of IOS in the air cargo industry is by no means pervasive. This case describes the genesis and evolution of two IOSs in the air cargo community and identifies plausible explanations for why one became a success and the other a failure. The two examples are drawn from Europe and from Hong Kong SAR. The case clearly demonstrates that the complex institutional and technical choices made by the initiators of each system, and their competitive implications, were the main causes of the systems' fates. The case thus concludes that the institutional factors involved in the relationships among the stakeholders led to the opposite outcomes of the two initiatives, and that such factors should be taken into account when designing and implementing large-scale information systems.
BACKGROUND
Time is the single most important factor in an industry where the distribution of goods moves close to the speed of sound. In the mid-1990s the average shipment time for airfreight was six days, and 90% of that time was spent on the ground "waiting" for transport. The need to coordinate and optimize all the ground-based activities in the air cargo community was clear. Based on weight, air cargo accounted for only 1% of total general cargo transport; based on the market value of goods, however, its share amounted to approximately 25%. Of the total US$200 billion in worldwide scheduled airline operating revenues, the air cargo industry represented a relatively small share at around US$30 billion.

Just as in other sectors, there was a growing interest in IT in the air cargo community. While most in-house functions had become IT-supported and re-engineered in the 1980s and early 1990s, the air cargo community was looking beyond organizational boundaries to identify further improvements. Air cargo parties were becoming increasingly aware of the importance of inter-organizational information systems (IOS) and, increasingly, they understood the value that IOS could provide for total value chain performance. In the 1990s, many industries had undergone dramatic changes as a result of IT, both within and across organizations. However, whereas in other sectors IOS had scored big successes, there were no real signs of deep penetration of IOS in the air cargo community. Although a large number of attempts had been made to automate air cargo processes across stakeholders, there was still no dominant or widely accepted design of an IOS for the air cargo industry that would satisfy and align the varying demands of the parties involved in air cargo processes.
SETTING THE STAGE
As early as 1975, the International Air Transport Association (IATA) concluded that for 78% of its total travel time, air cargo was at the airport "waiting" for transport, and there were no clear signs that there had been much improvement since. According to IATA, this inefficiency was caused mainly by the lack of communication and integration of administrative processes on the ground. It was expected that technological innovation, such as the development of pre-defined open document standards, would reduce the waiting time and speed up time-consuming and error-prone processes such as manual
data entry and the re-keying of information. It was also expected that coupling cargo and accounting information systems would speed up billing processes, the time to check space availability, bookings, and reporting procedures. The following provides more insight into the nature of the air cargo business and the important information flows in this business network. The black arrows in Figure 1 refer to the physical movement of cargo between the parties in this network, while the dotted arrows refer to the information flows between members of the network. Note that the movement of cargo is sequential in nature, whereas the information flows can be handled in parallel. An example is the clearing of goods at customs, often cited as a bottleneck by air cargo freight forwarders: the administrative information flows related to customs do not have to be tied to the physical movement of the cargo.
CASE DESCRIPTION
This section provides two tales of electronic infrastructure development: one from The Netherlands and one from Hong Kong. Both were major hubs in international trade and transportation, and had been so for centuries. (See the Appendix for tonnage handled in Hong Kong and at Schiphol airport.)

At the beginning of the new century, the Hong Kong Special Administrative Region of the People's Republic of China was one of the four largest financial centers in the world and had one of the three largest seaports in the world. In 1996, Hong Kong's international airport, Kai Tak, overtook Tokyo's Narita airport in terms of international air cargo and became the world's largest. The throughput of Kai Tak was surpassing even its nominal cargo capacity of 1.5 million tons, which emphasized the urgent need for the new airport that was opened at Chek Lap Kok in 1998. The new airport was designed to be capable of handling three million tons of cargo a year, which was expected to be sufficient well into the new century.

While Hong Kong was making the transition from being one of the last remaining British colonies, The Netherlands had established itself as home to one of the three largest seaports in the world and served as an important distribution center for cargo into Europe. The close proximity of the Rotterdam harbor and Amsterdam Schiphol airport, connected by excellent infrastructure, formed the backbone of The Netherlands' status as Europe's main distribution center. In addition, Schiphol airport was a main hub for passenger travel to and from Europe. In 1999 the increase in passengers and aircraft movements at Amsterdam Schiphol airport was less than in the preceding years. With growth of 6.6% to 36,772,015 passengers, passenger traffic maintained its market position inside Europe. Freight tonnage remained at virtually the same volume for the second year in succession; the negligible growth of 0.8% was clearly lower than at other European airports. In 1999 traffic growth was also affected by the capacity ceiling imposed by noise abatement measures and the resulting slot allotment system for the airport. Schiphol achieved a freight volume of 1,180,717 tons, significantly less than expected, while at its main competitors Frankfurt, London and Paris growth amounted to 5.7%.
Figure 1. Information flows in the traditional intercontinental air cargo chain (adapted from Zijp, 1995). The parties are the consignee, shipper, forwarders, road transporters, airline company, agents and customs at both ends of the chain; in the original figure, black arrows indicate the physical movement of cargo and dotted arrows indicate the information flows listed below.
1. Consignee places an order with the shipper and he confirms receipt of the order;
2. Shipper places a transport order with the forwarder and he confirms receipt of the order;
3. Shipper passes on shipping instructions to the forwarder;
4. Forwarder reserves and books freight capacity with the road transporter and he confirms the reservation and booking;
5. Forwarder reserves and books freight capacity with the airline company and he confirms the reservation and booking;
6. Forwarder makes up the bill of lading for the road transporter and this document goes with the freight during the road transport;
7. Forwarder makes up an Air Waybill and this document goes with the air freight from one airport to the other;
8. Forwarder gives an assignment to the forwarder at the airport of destination to reserve and book freight capacity with the road transporter, and he confirms receipt of this assignment;
9. Foreign forwarder reserves and books freight capacity with the road transporter and he confirms the reservation and booking;
10. Forwarder lodges information about the air freight shipment with the customs and the customs provides the forwarder with the necessary documents;
11. Airline company provides the agent with a booking list for a specific flight;
12. Agent gives information about the load of a specific flight to the customs and the customs gives confirmation to the agent;
13. Airline company provides the agent with details about the load of a specific flight at the airport of destination;
14. Agent at the airport of destination gives details about the load of a specific flight to the customs and the customs gives confirmation back;
15. Forwarder at the airport of destination provides the customs there with details about the load and gets information about this from the customs in return;
16. Forwarder at the airport of destination makes up a bill of lading for the road transporter and this document goes with the freight during road transport.
Case 1: Hong Kong: The Traxon Initiative
Four international airlines envisioned that electronic means of coordinating cargo-related information were the key to more reliable, accurate and timely exchange of information, and eventually to a smoother exchange of data that would speed up processes in the entire air cargo industry. They took the initiative to form an international electronic network for coordinating transactions between freight forwarders and airlines, and set up three companies: Traxon Asia Ltd., Traxon Europe Ltd. and Traxon World Wide Ltd. Traxon Europe was mainly run by Air France, Lufthansa and Traxon World Wide; Traxon Asia was run by Cathay Pacific, Japan Airlines and Traxon World Wide. Traxon World Wide played a minor role; its main function was to provide coordination between the two regional companies and the four founding airlines.

The content of the Traxon system consisted of three parts: (1) scrolling of flight schedules, (2) making bookings, and (3) status checking for shipments in transit. This seemingly limited functionality, however, provided significant benefits for both airlines and freight forwarders. The airlines benefited from Traxon by getting more detailed information about bookings and an increased number of bookings. They also believed that the use of Traxon reduced the amount of business lost due to busy telephones or absent (busy) sales personnel. The airlines received better information about each booking because the Traxon system reduced the number of errors in bookings by avoiding manual typing. The freight forwarders improved their efficiency using Traxon because they could find available space with different airlines and make bookings electronically, and could therefore calculate and quote prices faster and more efficiently. Before Traxon, freight forwarders had to use the phone and call several airlines to find available space; using Traxon, they could perform these activities simultaneously. Traxon also allowed freight forwarders to monitor cargo from the air cargo terminal in Hong Kong until it reached its destination overseas.

Traxon was a powerful search, monitoring and booking tool that was carefully designed to meet the requirements of a small niche of the air cargo chain. As such, Traxon only supported numbers 5, 7, 10 and 15 of the information flows in Figure 1. It was deliberately designed not to change, but to maintain and reinforce, existing relationships and business structures. The Traxon system consequently did not carry any information about prices or discounts. This left the market opaque for outsiders and preserved the roles and power balance between the airlines and the freight forwarders.
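Traxon's actual message formats are not described in the case; purely to make the three functions concrete, the sketch below models a schedule query, a booking request and a status check as simple records (all field names and values are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative records for the three Traxon functions described above: schedule
# scrolling, booking, and status checking. Field names and values are assumptions;
# note that, as in Traxon, no price or discount information is carried.

@dataclass
class FlightScheduleQuery:
    origin: str
    destination: str
    date: str                    # e.g. "1997-11-03"

@dataclass
class BookingRequest:            # corresponds to information flow 5 in Figure 1
    forwarder: str
    airline: str
    flight_no: str
    pieces: int
    weight_kg: float
    air_waybill_no: Optional[str] = None

@dataclass
class StatusQuery:               # supports monitoring a shipment in transit
    air_waybill_no: str

print(FlightScheduleQuery("HKG", "AMS", "1997-11-03"))
print(BookingRequest("HK forwarder 42", "Cathay Pacific", "CX251", 12, 830.0))
print(StatusQuery(air_waybill_no="160-12345675"))
```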
The Implementation Process and Reasons for Its Success
The dilemma for Traxon’s system designers was that the airlines would adopt the system only insofar as a majority of the forwarders did. At the same time, the freight forwarders would only adopt if most of the airlines did. How to get this self-reinforcing spiral going in favor of Traxon was the major challenge. Thus the system designers knew it was essential that all parties would see the benefits of the arrangement, that is, decide to participate. Traxon was therefore designed to accommodate the needs of the airlines and forwarders, but also to carefully preserve the sensitive distribution of power between them. It was therefore decided not to extend Traxon to any party beyond freight forwarders (for example to shippers or consignees).
The Traxon system consequently did not carry any information about prices or discounts. This left the market opaque for outsiders and preserved the roles and power balance between the airlines, the freight forwarders and the shippers. Another key factor in Traxon’s success was that the implementation process took advantage of the respective airlines’ local strongholds. In Hong Kong, the local airline Cathay Pacific was in charge of the local roll-out, and similarly in Japan it was Japan Airlines. A similar approach was applied in Europe and later on in Korea. Furthermore, each local Traxon system had the other shareholder airlines as initial customers, which constituted a significant share of the air cargo market. After its first years of operation, Traxon was able to enlarge and sustain its position as the dominant electronic trading network provider in Hong Kong’s air cargo community. As of January 1998 there were 187 freight forwarding agents connected to the system, resulting in more than 8.8 million messages per year (1997). A number of airlines gave up their defensive actions and joined, which essentially gave Traxon a de facto monopoly in the air freight community in the Hong Kong hub.
Case 2: The Schiphol/Dutch Situation: The Reuters Initiative
In 1992, Reuters, the worldwide press agency and supplier of business information services, started developing an electronic information system on behalf of parties in the spot-markets for air cargo space. The so-called Reuters Initiative was an international initiative with terminals in Amsterdam, London, Paris and Frankfurt. The traditional air cargo community consisted of three broad functional domains: airlines (passenger/cargo carriers as well as cargo-only carriers), ground transport companies (truck and rail transport companies) and freight forwarders that coordinated the door-to-airport and airport-to-door activities at each end. The content of the Reuters system consisted of three parts: (1) a scrolling news page consisting of general and specific air cargo news that could influence the market for air cargo space; (2) a summary of all available business information, such as oil prices and exchange rates; this information was helpful in increasing insight into the influence of these factors; (3) a summary of the indicative price quotes typed in; these quotes showed to what degree parties were willing to buy or sell space. By changing the indicative prices, contributors could give signals to other parties involved. In this way prices were assigned the function of information carrier. The three parts together provided a complete overview of the spot-markets for air cargo space. Changes would quickly become visible, and in this way it was expected that the parties could react to these changes more quickly. Essentially the Reuters system was designed to support all the information flows in Figure 1.
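For concreteness, the difference in coverage between the two systems can be expressed by encoding the Figure 1 flows as data. The short sketch below is illustrative only: the flow descriptions are paraphrases of the list above, and the coverage sets simply restate what the case reports for Traxon and Reuters.

```python
# Illustrative encoding (not from the case) of the Figure 1 information flows,
# used to contrast the coverage of the two systems described above.
FLOWS = {
    1: "consignee places order with shipper",
    2: "shipper places transport order with forwarder",
    3: "shipper passes shipping instructions to forwarder",
    4: "forwarder books road freight capacity",
    5: "forwarder books air freight capacity with airline",
    6: "forwarder issues bill of lading for road transport",
    7: "forwarder issues air waybill",
    8: "forwarder assigns booking task to destination forwarder",
    9: "destination forwarder books road freight capacity",
    10: "forwarder exchanges shipment details with customs",
    11: "airline provides agent with booking list",
    12: "agent exchanges load details with customs",
    13: "airline provides destination agent with load details",
    14: "destination agent exchanges load details with customs",
    15: "destination forwarder exchanges load details with customs",
    16: "destination forwarder issues bill of lading for road transport",
}

TRAXON_COVERAGE = {5, 7, 10, 15}   # the niche Traxon deliberately confined itself to
REUTERS_COVERAGE = set(FLOWS)      # Reuters aimed to support all flows in Figure 1

# Flows that only the Reuters design attempted to cover.
print("Flows covered only by Reuters:", sorted(REUTERS_COVERAGE - TRAXON_COVERAGE))
```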
The Implementation Process and Reasons for Its Failure
The information system ran on trial at the airport Schiphol2 from August 1993 to January 1994. After that, due to a lack of participation by key parties in the industry, the system was abandoned.
The interactions of customers, forwarders, integrators and carriers were based on the distinction between the two major activities in the business: transport space and shipping services. The market was structured along the lines of the so-called “space-capacity” principle, meaning that carriers provided space and forwarders and integrators provided services to fill capacity. This principle, however, was increasingly contested among the main parties in the market. The airlines were of the opinion that they offered not only space, but also services. The forwarders and integrators, however, maintained the strict distinction of activities in the market between airlines and themselves. Both parties, however, agreed that the market itself was not (yet) a commodity market in which competition is conducted mainly on price. In contrast, the initiators of the Reuters concept claimed that the market for air cargo space could be separated from the market for air cargo services. They claimed that the market for air cargo space had developed from a differentiated market into a commodity market. This could be incorporated in the design of the system if:
• It could only contain price information and no product or service information;
• It treated the market as a commodity market where parties could only compete on price instead of on service attributes;
• In the system, the sellers (the airlines and the integrators) were explicitly considered sellers of air cargo space.
All parties involved were reluctant to participate in the system. At first the forwarders had serious doubts, and in the end their support for the system was totally withdrawn. Some of the reasons were: fear of the elimination of the forwarder, fear of decreasing profit margins due to increasing transparency of the market, and a generally negative attitude towards electronic business. The reaction of the carriers was not positive either. The Reuters Initiative thus treated the air cargo market as a commodity market and thereby thwarted the space-capacity coordination mechanism used by the parties involved. These issues were not taken into account during the design of the system, and that was an important reason for the parties not to participate. As a result of the conflicting interests, the system was abandoned. It was concluded that the system was not viable if the parties concerned refused to cooperate.
DISCUSSION
The air cargo market was characterized by a marked lack of transparency, which created substantial market inefficiencies. However, this lack of transparency was in the interests of some of the parties in this marketplace. Forwarders in particular derived the main reason for their very existence from it. The forwarders acted as brokers and made their living from coordinating the market, and consequently the forwarders had far more extensive knowledge of the distribution processes than shippers would have. This information asymmetry was clearly in favor of the forwarders and to the disadvantage of shippers. Usually electronic markets favor the buyers and reduce sellers’ profits and market power. It is therefore clear that sellers would want to stay away from any system that emphasizes price information. This was not recognized in the Reuters Initiative. The Traxon system, by contrast, was carefully designed to preserve the secrecy of the price-setting process, and it was therefore able to attract the critical number of forwarders to the system where Reuters was not.
CURRENT CHALLENGES
What is unique for the diffusion and adoption of these kinds of IOS is the combined power of users. If they decide not to join a network, it is devastating for the IOS, as the Reuters Initiative clearly demonstrates. Attracting users is therefore a key requirement for success. Each new user’s decision to adopt an IOS creates positive externalities for the other users, because the usefulness of the IOS increases dramatically with the number of adopters. However, this also means, in contrast to many other technologies, that the benefits of being an early adopter can be relatively low compared to being a “laggard”. This is especially true when there are a number of competing and incompatible alternatives present in the market. Thus potential participants of an IOS can effectively block its establishment by simply not adopting the technology. If the number of adopters reaches a critical mass of users, the diffusion process will self-evolve until saturation is reached and a monopoly is created. Established monopolies are hard to challenge and dissolve, and therefore Traxon had a strong position in Hong Kong.
This effect is clear in the two cases. Reuters started with only pre-trade information and no users. The forwarders felt threatened and decided not to adopt, which essentially made the system “useless.” The Traxon system was owned by four airlines, which were also initial customers of the system, and therefore Traxon had a substantial share of the market on the supply side to begin with. The forwarders soon followed once they noticed that their position was protected, the system was useful, and a growing number of airlines and fellow forwarders (competitors) were joining the system. Furthermore, the Traxon system was first to market, which meant that it did not have to replace any existing and well-established system. Replacing an institutionalized system can be quite a challenge, as the battles between airline passenger CRSs in the U.S. demonstrate.
However, established monopolies may be short lived. At the beginning of the new century, technological innovations on the Internet and their fast adoption were rearranging the provision of IOS in many industries. They also had a great impact on the international air cargo community. For the IOS owners the change was even more radical because the Internet was eroding their business foundation. The Internet was replacing the service providers as the primary means of carrying electronic messages, and innovations in WWW technology were challenging the systems that the IOS providers were offering. The user-friendly interface and the low cost of access to the Internet were also opening the gates to Internet-based IOS for a number of players that earlier could not afford and/or lacked the skills to operate proprietary IOS. The advantage of having one IOS that interconnected all players in an industry segment, such as the Traxon system, was being eroded, since most players could build and offer their own services on the Internet and most players could access such systems over the Internet. An example was non-shareholder airlines that earlier were subject to Traxon’s de facto monopoly and were now tempted to launch their own air cargo services on the WWW and reach just as many forwarders. Traxon’s future was still uncertain; it had a strong base, but was technological innovation working in favor of its dominant position, or did it jeopardize that attractive position?
What necessary steps should Traxon take to maintain its position, and would it be allowed to embark on such a journey by its owners?
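The critical-mass dynamic described in this section can be illustrated with a small threshold model of adoption. The sketch below is not part of the original case; the population size, threshold distribution and seeding levels are assumptions chosen purely for illustration, but the contrast it produces mirrors the Reuters (no committed users) versus Traxon (founding airlines as initial customers) situations.

```python
# Illustrative sketch (not from the case): a threshold model of IOS adoption.
# Each potential participant joins only if the fraction already connected
# exceeds its personal threshold, capturing the positive network externality.
import random

def simulate_adoption(n_players=200, seed_adopters=0, rounds=50, rng_seed=1):
    rng = random.Random(rng_seed)
    # Heterogeneous thresholds: how large the network must be before joining pays off.
    thresholds = [rng.uniform(0.0, 0.6) for _ in range(n_players)]
    adopted = [i < seed_adopters for i in range(n_players)]  # e.g., founding shareholder airlines
    for _ in range(rounds):
        share = sum(adopted) / n_players
        new_adopters = False
        for i in range(n_players):
            if not adopted[i] and share >= thresholds[i]:
                adopted[i] = True
                new_adopters = True
        if not new_adopters:          # no further adoption: either stalled or saturated
            break
    return sum(adopted) / n_players

if __name__ == "__main__":
    # A start with no committed users stalls; a start with a seeded block of
    # initial customers cascades towards saturation.
    print("no seed users :", simulate_adoption(seed_adopters=0))
    print("10% seeded    :", simulate_adoption(seed_adopters=20))
```

Under these assumed parameters the unseeded run stays at zero adoption while the seeded run cascades to full adoption, which is the self-reinforcing spiral the chapter describes.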
ACKNOWLEDGMENTS
The authors acknowledge the case contributions of Tonja van Diepen and John Been to the Reuters case. In addition we would also like to thank the many industry participants in Hong Kong and The Netherlands for their time and openness during interviews. This research was in part supported by the PITNIT project (grant number 9900102, the Danish Research Agency) and the Primavera Research Program (http://primavera.fee.uva.nl/) of the University of Amsterdam.
FURTHER READING
Bakos, J. Y. (1991). Information links and electronic marketplaces: The role of interorganizational information systems in vertical markets. Journal of Management Information Systems, 8(2), 31-52.
Besen, S. M., & Farrell, J. (1994). Choosing how to compete: Strategies and tactics in standardization. Journal of Economic Perspectives, 8(2), 117-131.
Christiaanse, E., & Huigen, J. (1995). Institutional dimensions in information technology implementation in complex network settings. European Journal of Information Systems, 6, 77-85.
Christiaanse, E., & Kumar, K. (2000). ICT enabled coordination of dynamic supply webs. International Journal of Physical Distribution and Logistics Management, 30(3-4), 268-285.
Christiaanse, E., & Zimmerman, R. J. (1999). Electronic channels: The KLM cargo cyberpets case. Journal of Information Technology, 14, 123-135.
Clemons, E. K., & Row, M. (1991). Information technology at Rosenbluth Travel: Competitive advantage in a rapidly growing global service company. Journal of Management Information Systems, 8(2).
Copeland, D., & McKenney, J. (1988, September). Airline reservation systems: Lessons from history. MIS Quarterly, 353-370.
Damsgaard, J. (1998). Electronic markets in Hong Kong’s air cargo community. In B. F. Schmid, D. Selz, & R. Sing (Eds.), EM-International Journal of Electronic Markets, 8(3), 46-49.
Damsgaard, J., & Lyytinen, K. (1998). Contours of electronic data interchange in Finland: Overcoming technological barriers and collaborating to make it happen. The Journal of Strategic Information Systems, 7, 275-297.
Damsgaard, J., & Lyytinen, K. (2001). Building electronic trading infrastructure: A private or public responsibility. Journal of Organizational Computing and Electronic Commerce, 11(2), 131-151.
Huigen, J. (1993). Information and communication technology in the context of policy networks. Technology in Society, 15, 327-338.
Katz, M. L., & Shapiro, C. (1994). Systems competition and network effects. Journal of Economic Perspectives, 8(2), 93-115.
Markus, M. L. (1983). Power, politics and MIS implementation. Communications of the ACM, 26, 430-444.
McCarthy, D. (1986). Airfreight forwarders. Transportation Quarterly, 97-108.
Robey, D., Smith, L. A., & Vijayasarathy, L. R. (1993). Perceptions of conflict and success in information systems development projects. Journal of Management Information Systems, 10(1), 123-139.
Shaw, S. (1985). Airline marketing and management. London: Pitman.
Short, J. E., & Venkatraman, N. (1992). Beyond business process redesign: Redefining Baxter’s business network. Sloan Management Review, 7-21.
McKenney, J. (1995). Waves of change. Boston: Harvard Business School Press.
Oliva, T. A. (1994). Technological choice under conditions of changing network externality. The Journal of High Technology Management Research, 5(2), 279-298.
Wrigley, C. D., Wagenaar, R. W., & Clarke, R. A. (1994). Electronic data interchange in international trade: Frameworks for the strategic analysis of ocean port communities. Journal of Strategic Information Systems, 3(3), 211-234.
Zaheer, A., & Venkatraman, N. (1994). Determinants of electronic integration in the insurance industry: An empirical test. Management Science, 40(5), 549-567.
ENDNOTES
1. http://www.iata.com
2. http://www.schiphol.nl
3. Figures are from the Schiphol statistical annual review 1999 (http://www.schiphol.nl/engine/images/stat99.pdf).
Appendix. Million Tons Cargo Handled in Hong Kong and Schiphol Airport
Year    Kai Tak/Chek Lap Kok Airport    Schiphol Airport3
2000    1.50                            1.22
1999    1.59                            1.18
1998    1.58                            1.17
1997    1.69                            1.16
1996    1.56                            1.08
1995    1.46                            0.98
1994    1.29                            0.84
1993    1.14                            0.78
Ellen Christiaanse is an associate professor of e-business at the University of Amsterdam. Her major fields of interest include the impact and optimization of electronic delivery channels, supply chains and dot-com start-ups. She has been awarded several international prizes for her academic work, which was presented at international conferences (ICIS, ECIS, HICSS, Academy of Management) and published in international journals (Journal of Information Technology, International Journal of Physical Distribution Systems and Logistics Management, European Journal of IS). She spent almost four years at the MIT Sloan School of Management as a visiting scholar. Dr. Christiaanse has a master’s in organizational psychology and a PhD in economics. Jan Damsgaard is an associate professor at the Department of Computer Science, Aalborg University. His research focuses on the diffusion, design and implementation of networked technologies such as intranet, e-commerce, EDI, extranet, Internet and ERP technologies. He has presented his work at international conferences (ICIS, ECIS, HICSS, IFIP 8.2 and 8.6) and in international journals (Journal of AIS, Journal of Global Information Management, Journal of Strategic Information Systems, Information Systems Journal, European Journal of IS, Information Technology and People, Journal of Organizational Computing and Electronic Commerce). Dr. Damsgaard has a master’s in computer science and psychology and a PhD in computer science.
This case was previously published in F. Tan (Ed.), Cases on Global IT Applications and Management: Successes and Pitfalls, pp. 56-68, © 2002.
Chapter XII
Corporate Collapse and IT Governance within the Australian Airlines Industry
Simpson Poon, Charles Sturt University, Australia
Catherine Hardy, Charles Sturt University, Australia
Peter Adams, Charles Sturt University, Australia
EXECUTIVE SUMMARY
In September 2001, Australia’s second largest airline, Ansett Airlines, went into voluntary administration. Besides the holding company, Ansett also had a number of subsidiaries operating in the regional (non-capital city) areas. Despite its small size, Kendell Airlines, a subsidiary of Ansett, had been a strong competitor to the nation’s largest airline, Qantas, in the regional markets. Even when the Ansett group was grounded, Kendell was still able to resume operating in a reduced capacity due to its self-reliance in many parts of its operation, including its information systems (IS) and the way it deployed information technology (IT). The purpose of this case study is to highlight IS/IT governance arrangements in multi-layered organisations for the purpose of advancing knowledge about the effectiveness of such arrangements when, as in this case, the holding company collapses. The Kendell case is an example of a smaller entity with its own IS/IT governance embedded in a larger holding company with separate governance practices. Such arrangements raise questions relating to issues such as what style and form of governance is most appropriate, whether benefits may be derived from multi-tiered governance systems for corporate groups, and whether synergies can be created. The insistence of the Kendell IT manager and management team on autonomy in Kendell’s IS/IT governance proved to be an integral element in its corporate resilience during the collapse of the holding company.
ORGANISATION BACKGROUND
Company Background
Kendell Airlines was established as Premiar Aviation in 1967 by Don and Eilish Kendell. In 1971 they took over the Wagga Wagga1 to Melbourne route from Ansett Australia and changed the name to Kendell Airlines. Over the next 20 years, Don and Eilish Kendell built their airline into Australia’s largest and most successful regional airline, recognised internationally as a leader in its field, being named Regional Airline of the Year in 1991 by the U.S. aviation magazine Air Transport World. It was the first airline in Australia to win such a prestigious award.
In 1990 Ansett Airlines extended its existing close relationship with Kendell by purchasing 100% of the airline. Although now owned by Ansett, Don Kendell remained as chief executive officer (CEO) and chairman of the board, meaning that the day-to-day management independence of Kendell remained. In July 1998 Don Kendell retired as CEO, but remained as non-executive chairman of the board. While Kendell’s financial performance had been strong for the previous eight years, this contrasted with Ansett’s weaker financial performance during this period.
The period from 1996 to 2001 was a turbulent one for aviation in Australia and particularly for the Ansett group. In 1996 TNT sold its 50% stake in Ansett to Air New Zealand, with News Limited as the other 50% owner. In January 1997 News Limited appointed Rod Eddington as CEO of Ansett to improve its financial performance. Eddington, who is now CEO of British Airways, had a brief chance to increase the value of Ansett Holdings so News Limited could get a reasonable return for the planned sale of its 50% stake. How this short-term financial goal affected the overall strategic planning process is beyond the scope of this case study, but there is little doubt that the drastic changes and downsizing required in Ansett to improve its financial position had a less than positive impact on the relationship between Ansett Airlines and Kendell Airlines’ management. After a protracted period of negotiations between News Limited and Singapore Airlines during 1999-2000, Singapore Airlines made an offer to purchase News Limited’s 50% holding of Ansett. Air New Zealand exercised its right of first refusal on the sale and in 2000 purchased News Limited’s 50% stake, giving it 100% ownership of Ansett.
At a more micro level, there were three crucial factors that were influential in the relationship between Ansett and Kendell management between 1996 and 2001:
1. The South-Eastern Regional Strategy (SERS);
2. The cultural differences between Ansett and Kendell; and
3. The retirement of Don Kendell in 1998 as CEO of Kendell.
Each of these areas is addressed separately in the following sections.
The South-Eastern Regional Strategy (SERS)
The cornerstone of the plan to make Ansett more profitable was for Ansett Airlines to hand over to its regional subsidiary, Kendell, routes that it could not fly profitably. Most of these routes involved Ansett either withdrawing its services altogether, as was done for some Tasmanian destinations, or flying only the peak services in the morning and at night, while Kendell flew all other services. The implementation of the SERS meant Kendell would double the size of its operation between 1998 and 2001. The problem for Ansett was that for Kendell to grow, Ansett had to shrink. While this plan was agreed to by the senior management and boards of Ansett and Kendell, when the implementation started, the operational management members of Ansett were not keen to downsize their areas of responsibility and hand them over to Kendell. The political power Ansett had as the larger incumbent made Kendell’s position almost untenable across a range of services, including engineering, flight training, ground handling, catering, network support, airport services, and information technology. Concomitant with the SERS implementation were three major management restructures in Ansett between 1996 and 2001, meaning Kendell management found themselves dealing with new senior management at Ansett and consequently having to revisit issues Kendell felt had previously been resolved.
The Cultural Differences
Ansett and Kendell Airlines had contrasting corporate cultures. Kendell had a culture instilled from its formative days by Don and Eilish Kendell, where every dollar counted and controlling costs in all areas of the business was imperative. Even with the rapid growth planned for the SERS implementation, Kendell was still only growing from a base of 400 staff in 1996 to 1,000 by 2001. Despite this growth, Kendell remained small enough that staff still knew the roles of other departments and many of the people in those departments. Another critical influence on the culture of Kendell was the fact that its head office was based in the regional city of Wagga Wagga. While Kendell had staff based in and flew to the capital cities of Sydney, Melbourne, Canberra, Adelaide, and Hobart, it still maintained its regional culture. In contrast, Ansett’s was very much a city-based corporate culture. Ansett was used to operating on a much larger scale, with 16,000 staff spread throughout Australia. This sheer size created its own complexities in operating the business and had cultural implications for the company’s relationship with Kendell. There was an underlying assumption in Ansett management of “bigger is better”, and Kendell did not have the political influence to win many “battles” at the operational levels of management. It was not lost on the Ansett operational management that the growth of Kendell during the implementation of the SERS required a larger downsizing of the Ansett operation in terms of staff cuts. The cultural divide and the political strength of Ansett management, combined with the projected staff losses at Ansett, made the implementation of the SERS as originally envisioned almost impossible.
Don Kendell’s Retirement as CEO
Don Kendell was a determined and widely respected business leader. When he retired from the CEO role in 1998, there is no doubt that some managers at Ansett saw this as an opportunity to try and exert influence over Kendell’s operations, which they would not have attempted while Don Kendell was still in control. As Ansett’s financial position deteriorated, the urgency to implement the SERS increased. Kendell purchased 12 50-seater jets during 2000-2001. This expansion would effectively double the size of Kendell Airlines. Unfortunately, due to the accumulated debts of the holding company, Ansett Holdings finally collapsed in September 2001, just as Kendell commissioned the last of its 12 new jets (see Figure 1). This left all airlines within the Ansett group grounded until the administrators could be convinced of their financial viability. Kendell started limited services with its turboprop fleet of 23 aircraft (see Figure 2) and was ultimately sold to Regional Express in August 2002. The 12 jets were all returned to the manufacturer. Ownership changes, management restructuring, and the Ansett Millennium program combined to bring more of a standard group focus to IT operations across all airlines within the Ansett group and particularly at Kendell during the implementation of the South-Eastern Regional Strategy.
Figure 1. Kendell CRJ fleet back at the manufacturer’s base
Figure 2. Kendell Saab 340 (turboprop)
SETTING THE STAGE
The purpose of the following case study is to highlight the IS/IT governance arrangements in a multi-layered organisation for the purpose of advancing knowledge about the effectiveness of such arrangements when the holding company collapses.
IS/IT Governance: Theoretical Aspects
There is a diversity of approaches and emphasis with respect to IS/IT governance (see for example IT Governance Institute, www.ITgovernance.org), which broadly speaking seeks to address the question, “How should firms organise their IT activities in order to manage the imperatives of the business and technological environments …?” (Sambamurthy & Zmud, 2000). The quality of the IS/IT governance contribution to organisational performance may vary in terms of how it is utilised and its “application effectiveness”, which in turn may be affected by the form of the governance adopted; market, sector, and societal contextual factors such as regulations, standards, and culture; and the chief information officer’s (CIO’s) relationship with members of the “governing body” (Korac-Kakabadse & Kakabadse, 2001) and other business units (Enns, 2003). The growing demand for attention to corporate responsibility and governance following the aftermath of corporate collapses witnessed both in Australia, such as Ansett, and internationally over the past two years has, among other things, highlighted that governance considerations relate equally to the application of IS/IT as to the whole organisation (Korac-Kakabadse & Kakabadse, 2001).
Traditional conceptualisations of the “organising logic” for controlling and coordinating “spheres” (Sambamurthy & Zmud, 1999) or “domains” (Weill & Woodham, 2002) of IT activities, incorporating IT principles, IT infrastructure, IT architecture and IT investment (p. 14), to technologically exploit business markets have focused primarily on governance structures (Sambamurthy & Zmud, 2000) and attributes (Korac-Kakabadse & Kakabadse, 2001). Sambamurthy and Zmud (1999) advocated three “dominant” governance structures: centralised, decentralised, or federal. The centralised governance mode is defined as one in which corporate IS has the authority for all the domains of IT activities. In the decentralised governance mode, divisional IS and line management assume authority for all IT activities. Finally, with the federal model of governance, both corporate IS and the business units assume authority for specific spheres of IT activities (p. 262). In addition, coordinating mechanisms such as IT councils, IT steering committees, and service level agreements are seen as “structural overlays to supplement” these architectures (Sambamurthy & Zmud, 2000). Korac-Kakabadse and Kakabadse (2001) described governance attributes as incorporating: “composition (e.g., size of the governing body, mix of different member’s demographics); characteristics (e.g., member’s experience, tenure, functional background, stock ownership); structure (e.g., organisational configurations, leadership, flow of information between members); process (decision-making activities, style, frequency and length of meetings, formality of proceedings, and culture of evaluation of IS/IT performance)” (p. 11).
More recently Sambamurthy and Zmud (2000), in examining the “organizing logic” most appropriate for managing IT activities, offered a “platform metaphor as a guiding” concept rather than governance structures (p. 107). Drawing on this metaphor, IT activities are considered “first and foremost, as the establishment of a ‘platform’ that provides a rich ensemble of current and future IT-enabled functionalities [whereby] the increasingly dynamic and tightly bounded decisions of how and where to distribute decision authority for specific IT-enabled functionalities are addressed, [with] ‘each arrangement improvised to exploit specific business opportunities’” (p. 108). The three “essential building blocks” of the platform organising logic consist of: IT capabilities (IT-based assets and routines that support value-adding activities of the business), relational architectures (intra- and inter-organisational relationships), and integration architectures (“organisational overlays which bind relational architectures together”) (pp. 108-109).
In examining the contribution of IS/IT governance to company performance, Korac-Kakabadse and Kakabadse (2001) adopted a broader perspective of governance from one focused solely on economic or business imperatives such as shareholder returns to one which “attempts to integrate broader stakeholder needs” incorporating a broader societal context (p. 10). Key concepts are outlined in Table 1.
Table 1. Comparison of the control and stakeholder IS/IT governance models (Korac-Kakabadse & Kakabadse, 2001)
Control model:
- Hierarchical mechanism in place for allocation/control of IS/IT resources
- CFO/CIO are fiduciaries of claims
- CFO/CIO select IS/IT strategy that aids maximisation of shareholders’ wealth
- Profitability and cost efficiency are the benchmark for IS/IT efficacy
Stakeholder model:
- Vertical, horizontal, and social mechanisms are used for allocation/control of IS/IT resources
- IS/IT governing body is fiduciary of a variety of claims
- IS/IT governing body approves IS/IT strategy for the purpose of balancing pluralistic aims
- Profitability and cost efficacy are important in addition to sustainable growth and IS/IT reliability/stability
Decisions as to which governance mode or model to use may be influenced by the optimisation of business objectives, such as enterprise-wide economies and efficiencies, localised business needs, organisational contexts such as the prevailing corporate governance architectures and the performance of the central IT function in meeting client needs (Sambamurthy & Zmud, 2000), and the different roles that IS/IT governance may assume in decision making (Korac-Kakabadse & Kakabadse, 2001). Korac-Kakabadse and Kakabadse (2001) outline four such roles, namely: “service roles (e.g., nature of networked relationships, control of IS/IT resources, ceremonial, formulation, and implementation of decision making); control roles (safeguard corporate interests, select CIO, monitor IS/IT performance, review CIO’s analysis); strategic roles (e.g., guide, develop, implement, and monitor IS/IT’s strategy, resource allocation); or their combination” (p. 10). In addition, similar considerations may influence IS/IT governance capability (Korac-Kakabadse & Kakabadse, 2001). Van der Heijden (2001) outlines four behaviours that reflect this capability. First, the quality of the executive relationship between the CIO and other executives. Second, the ability to arrive at shared objectives involving the alignment between business objectives and IT objectives, which may be intellectual (similarity between business and IT plans) and/or social (whether business and IS executives understand each other’s objectives and plans) (Reich & Benbasat, 1996; cf. Van der Heijden, 2001). Third, fostering an appropriate culture in the IS department. Fourth, incorporating best practices, that is, the “search for continuous improvement of processes” (p. 15).
The diverse nuances of governance discussed earlier suggest that it is not captured by any singular notion of government, control, management, or accountability. Therefore, considerable diversity exists in IS/IT governance arrangements, practices, and capabilities across different organisations (Sambamurthy & Zmud, 1999).
Information Technology at Kendell Airlines
Kendell had significant revenue growth from a base of $30 million in 1990 to over $110 million in 1998. Before the Ansett collapse in September 2001, Kendell Airlines had projected revenues of over $200 million for 2002. However, despite its large revenue base, Kendell Airlines did not appoint its first IT manager/CIO until the beginning of 1998. While Kendell’s customers could make flight reservations through Ansett’s Computer Reservations Systems (CRS), all other IT-related business functions, including financial and engineering systems, were managed in-house by Kendell.
The implementation of the South-Eastern Regional Strategy required Kendell to implement IT systems in areas in which it previously had not had a business requirement. One of these was an operations system which monitored all aircraft in-flight and dealt with issues due to schedule changes and delays as they happened. After evaluating systems from around the world, Kendell selected a recently developed operations system from a New Zealand-based company. This software has since been implemented by all the Qantas regional airlines and by Virgin Blue when they entered the Australian market. For the Kendell IT manager, another layer of implementing this software was having to justify why the existing Integrated Operation Centre (IOC) at Ansett could not take on the role for Kendell. Like many decisions on IT systems for Kendell, the cost of implementing Ansett systems was prohibitive. In the big picture, Kendell running its own operations system meant costs were being removed from Ansett, as was required by the SERS. The smaller size of Kendell meant the strategic decisions of senior management were implemented by those same managers, so they clearly understood the group imperatives. In contrast, Ansett had senior managers making strategic decisions that were being implemented by the operational level of management, whose members were not keen to sacrifice their part of the business to Kendell.
Another angle to this problem was the requirement of Ansett IT that Kendell have standard desktop, server, and network equipment before Kendell’s network could be connected to Ansett’s. In theory this meant Kendell would have to replace its entire IT infrastructure before it could seamlessly connect to the Ansett network. This was not a cost Kendell management was willing to sign off on, so a stalemate ensued from late 1999 until 2001, while Kendell put forward seven versions of a business case to implement its own network linked to Ansett’s. This is not to say there were not advantages in having a standardised operating environment across the whole Ansett group. A standardised environment would give Kendell staff access to all Ansett systems and make the exchange of information more seamless. The main objection of Kendell management was that the increased functionality could not justify the higher costs. Kendell management felt that once it lost its ability to control the cost of providing IT, this became a risk to its profitability. This control of costs was a critical issue to Kendell management across its business, not just in IT.
Figure 3. Ansett Holdings organisation chart (Ansett Holdings Pty Ltd, with Ansett Airlines, Ansett International, Ansett Freight Group, Kendell Airlines, Skywest Airlines, Aeropelican, and TII (Transport Industry Insurance) as subsidiaries)
IS/IT Governance: Issues in Practice
Under the Ansett Holdings legal company structure, there were six airlines as separate entities (see Figure 3). The largest company was Ansett Airlines, and in day-to-day operations it functioned as though it was the overall parent company (Ansett Holdings). The three regional airlines in the Ansett group were Kendell, Skywest, and Aeropelican. These airlines were cost-focussed operations, which ran profitably and with a high degree of autonomy up until 1998. Despite the legal structure of autonomous companies, Ansett Airlines had the size and political power to influence all other airlines within the group structure. During 1998-2000, there was strong pressure from Ansett IT for Kendell to adopt Ansett’s IT standards in the desktop, software, network, and server environments. Kendell management did not see this as being cost effective and resisted. As a result Ansett IT would not agree to let Kendell seamlessly connect to the Ansett network, which was essential for the SERS to be fully implemented. An ironic twist to this political battle was that the IT standards proposed by Ansett IT involved Kendell replacing its existing Microsoft Exchange-based e-mail system with a Lotus Notes implementation as used by Ansett.
When Air New Zealand gained 100% ownership of Ansett in 2000, Air New Zealand IT replaced Ansett’s Lotus Notes system with Microsoft Exchange. While the change in IT management within the corporate structure after Air New Zealand’s purchase helped Kendell in some areas, the short-term focus in 2000-2001 was on integrating the Ansett and Air New Zealand systems. The needs of the subsidiary airlines joined the queue of other projects to be revisited once the integration was finalised. Unfortunately, due to the business collapse in September 2001, these projects never came to fruition. The biggest loss for Kendell from the ownership change was the political alliances that the Kendell IT manager had cultivated during 1998-1999. While the previous Ansett CIO did not agree with Kendell’s position on many issues, there was open communication, with the Kendell IT manager welcome at any of Ansett IT’s senior management meetings. With the management restructure following the Air New Zealand purchase, the Kendell IT manager had to start rebuilding these relationships, which took more than 12 months to develop.
During the expansion period in 1998-2001, when Kendell was implementing the SERS by introducing 12 new 50-seater jets, the relationship between the holding company Ansett and Kendell needed to strengthen. The reality was that the relationship became significantly strained over time in many areas, including IT, due in part to Ansett’s deteriorating financial position and management restructuring. Much of this was due to culture, politics, and power relations. Previously Kendell and Ansett had worked in partnership on many routes, but the South-Eastern Regional Strategy meant Ansett was having to hand over parts of its business to Kendell. The downsizing this required was the underlying cause of many of the problems in the relationship between Ansett and Kendell management.
As previously discussed, Kendell operated all its IT systems (except the CRS) independently of the holding company, with a very tight focus on costs. As the SERS was implemented, Kendell and Ansett staff had an increasing requirement for day-to-day interaction in operating the businesses. This meant the IT systems needed to integrate seamlessly. Due to the historically fairly autonomous operations of Kendell, there was no clear reporting line in the IT area between the subsidiary and the corporate entity of Ansett. Due to the large size of Ansett’s IT operation, the Kendell IT manager took a pragmatic approach and formed relationships with key managers in specific areas of interest to Kendell.
Despite supporting a business which employed over 900 people and operated 35 aircraft in six of Australia’s eight states/territories (see Figure 4), Kendell only had three full-time IT staff — approximately one IT person for every 300 staff. This contrasted with more than 500 full-time IT staff in Ansett supporting a business employing 16,000 people — approximately one IT staff member for every 32 staff. The rapid growth of Kendell from 1999 onwards meant IT had to be managed proactively. Critical to the success of this were two factors:
• The Kendell IT manager was a key member of the senior management team within Kendell, with a good working relationship with the other senior managers in the executive offices.
• Kendell did not try to do everything in-house. The IT manager built partnerships with specialist contractors around Australia to provide the geographical and technical coverage needed to operate an airline.
Figure 4. Kendell’s route map
Underpinning all of this was the ability to assess each situation on its merits and not try to apply a “one-size-fits-all approach” to managing the IT issues. With the benefit of hindsight, Kendell’s insistence on autonomy in its IT governance contributed to its ability to continue operations after the collapse of Ansett. This may be easier to do in a smaller organisation like Kendell, where more direct cost/benefit scenarios can be constructed. Overall Ansett recognised the value of IT in its organisation, but the disconnection between the strategic policy decisions and the operational implementation meant Ansett middle managers had their own “empires to build”, which did not always align neatly with the overall strategic direction of the group. As previously discussed, the relationship with senior Ansett IT managers took time to build, and when there were management restructures, this process had to start all over again. These management restructures occurred three times in the five years leading up to the collapse of Ansett. The high turnover of senior management at Ansett meant subsidiary managers were continuously trying to build new relationships with managers from whom they were geographically isolated.
CASE DESCRIPTION
Kendell’s story is a tale of a successful small company being purchased by a larger organisation to enhance the parent’s market offering, after which the parent attempts to impose its standards on the already-successful company. The challenge for IS/IT governance is how to manage different IT cultures without automatically assuming that the larger organisation has the best systems and practices due to its size, resources, and/or political power. A further complication arises when we consider situations involving takeovers or mergers of companies that have their own physical (e.g., IS infrastructure) and social (e.g., organisational, IS governance) structures, cultures (e.g., controlling, collaborative), and routines (e.g., procedures based on unarticulated knowledge). These issues raise different questions such as: What is the most appropriate style and form of governance? Are there benefits in multi-tiered governance systems for corporate groups? Can synergies be created? How are such situations managed or controlled? The following discussion examines some key themes from the Kendell situation to facilitate debate about these questions. While each of the areas is discussed separately, this is for ease of reading, as they are in reality interrelated.
Plural Realities and Multiple Layers: Establishing the Common Ground
In a large organisation with many divisions and subsidiaries, a challenge in IS/IT governance is how to build governance systems that fulfil the strategic business goals of every part of the organisation while ensuring standards of quality, usage, and cost across the entire organisation (Koch, 2002), given the different sizes, structures, and cultures of organisational units and the nature of their work. There has to be business ownership of technology investments to ensure consistency with each business unit’s objectives, whilst ensuring that the returns from technology decisions are acceptable for the organisation as a whole; that is, some “common ground” needs to be established (Balliet, 2003). Establishing such common ground is a complex task when there are plural realities and multiple layers within an organisation.
The IS/IT governance structure at Kendell represented a hybrid of centralised, decentralised, and federal modes of governance arrangements. That is, a centralised governance structure existed at the holding company level, which applied to the ticketing reservation system that was used throughout the airline. However, at the subsidiary level, all other IS systems were implemented and maintained by Kendell, reflecting a decentralised arrangement. Further, within Kendell itself, a federal model of governance existed, in that the Kendell IT manager worked with the business units in determining the IT capabilities required to best manage their business activities. The small number of IT staff at Kendell reflected two dominant views: (1) since most of the IT decisions were made at the corporate level, “we only need a small team to oversee the implementation of such decisions, instead of actively exploring the most cost-effective opportunities”; (2) IT is an expenditure item; therefore, only the minimum needed to be spent was budgeted.
The IS/IT governance arrangements within Ansett tended to mirror its corporate governance arrangements, largely due to its size. This was particularly the case during the troubled period prior to it going into voluntary administration, with continual management restructuring and tightening of control over activities within the organisation. Such arrangements created a number of issues for Kendell. The positioning of Kendell’s IT group within the Ansett IT governance structure was not without power and political problems. The massive Ansett IT arm had been increasingly determining most of the IT/IS governance structure within the corporate group, and as a regional subsidiary, Kendell’s view was not paid due consideration. Corporate standards were progressively being applied across the whole group. The relative cost/benefit to each subsidiary was not analysed. The IT configuration, which reflected the business needs of Kendell, might not have been fully understood at the corporate level. Thus, while a unified approach to IT governance may provide control and uniformity, it may lessen the importance of local needs. While such was the reality, Kendell had not been proactive in pursuing a more strategic position during the formative stage of the IT/IS governance framework. Despite the efforts of the newly appointed Kendell IT manager, the centralised IS/IT governance practices often made his position doubly difficult, particularly when it came to key decisions which affected the corporate level of IT.
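One way to make the hybrid arrangement concrete is to write the decision domains and their decision-making locus down explicitly. The sketch below is a hypothetical reconstruction based only on the description above, using Sambamurthy and Zmud's three modes; the domain names are paraphrases and the mapping is not drawn from any Kendell or Ansett documentation.

```python
# Hypothetical sketch: the Ansett/Kendell hybrid expressed as an explicit mapping
# from decision domain to governance mode (centralised, decentralised, federal).
from enum import Enum

class Mode(Enum):
    CENTRALISED = "corporate IS holds decision rights"
    DECENTRALISED = "subsidiary IS and line management hold decision rights"
    FEDERAL = "decision rights shared between corporate IS and business units"

# Reconstruction of the arrangement described in the text (illustrative only).
governance_map = {
    "computer reservation system (CRS)":      Mode.CENTRALISED,    # holding-company level
    "financial and engineering systems":      Mode.DECENTRALISED,  # run in-house by Kendell
    "operations system":                      Mode.DECENTRALISED,
    "IT capability planning within Kendell":  Mode.FEDERAL,        # IT manager with business units
}

for domain, mode in governance_map.items():
    print(f"{domain}: {mode.value}")
```

Writing the arrangement down this way also makes visible where the political friction described above arose: the domains listed as decentralised were exactly those that Ansett IT later pressed to standardise.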
Boundary Management and Economies of Scope: What IT Capabilities Need to be Company Wide?
During a site visit to the Head Office of Kendell (located at their regional base) in 1999, as part of the Year 2000 compliance program, a consultant from one of the world’s large consulting companies asked why Kendell did not use the same major enterprise software package as Ansett for its financial management reporting. The IT manager answered: “Because Ansett can’t give you a route-by-route profitability report at any time from their systems, and we have ours out four days after the close of the reporting period by extracting the data from our finance systems and running it through some very good spreadsheets we have developed ourselves”. The lesson here is that it is not the product being used that matters, but rather “the capability to integrate IS/IT effort with business purpose and activity” (Feeny & Willcocks, 1998). While an IT system that is ineffective or does not work is “useless”, it does not mean that every system “must be wrapped in gold-plated functionality” (Ross & Weill, 2002). It is common to place too high a value on the credibility of the package being used. A well-known brand enterprise software package, poorly configured, delivers no more benefit than a simplistic package. In the case of Kendell, the IT staff were knowledgeable about the IS/IT capabilities critical for business success, which, outside of the ticketing system, enabled them to influence what IT systems were adopted and how they were managed. At the Ansett level, IT consultants were continually engaged by the company to develop more effective systems, as was the case with the adoption of the enterprise-wide system by the holding company. While the credibility of the system was not in question, the way it was configured meant that it did not deliver the information that Kendell believed was critical in managing its business. For example, in determining the profitability of each travel route, the information was downloaded into a spreadsheet for analysis, as opposed to using the enterprise-wide system in place at Ansett.
While centralising IT capabilities and standardising IT infrastructure across an organisation may leverage significant cost savings and strategic benefits, standards may restrict the flexibility of individual business units (Ross & Weill, 2002). Further, the feasibility of attaining economies of scope through sharing resources across multiple products/services (Sambamurthy & Zmud, 1999) may be influenced by the mode of diversification, that is, whether the major drivers of growth strategy are related to internal expansion or external acquisition (p. 266). Firms pursuing internal growth are expected to leverage existing technology investments and existing technology competencies throughout their organisation, which would be the easiest to accomplish when IT resources and decisions are “centrally orchestrated” (p. 266). This was the situation at Kendell before it was acquired by Ansett. Following the acquisition, problems were experienced between Ansett and Kendell when trying to integrate IT activities, largely due to cultural and political factors of “bigger is better”. Therefore, decisions made at the corporate IS level tended to dominate, notwithstanding that local IS staff at Kendell were better placed to support Kendell’s work processes and business strategies.
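The route-by-route profitability report the IT manager refers to can be thought of as a simple aggregation over extracted finance records. The following sketch is illustrative only: the route names, figures, and field layout are assumptions and do not reproduce Kendell's actual data or spreadsheet logic; it is included to show how little machinery such a report requires compared with a fully configured enterprise package.

```python
# Hypothetical sketch of a route-by-route profitability report built from
# extracted finance records, in the spirit of the spreadsheet approach described.
# Field names and figures are illustrative, not taken from the case.
from collections import defaultdict

# Each record might be an export from the finance system: (route, revenue, cost).
finance_extract = [
    ("Wagga Wagga-Melbourne", 120_000, 95_000),
    ("Wagga Wagga-Melbourne", 115_000, 97_000),
    ("Sydney-Albury",          80_000, 88_000),
    ("Melbourne-Hobart",       60_000, 52_000),
]

def route_profitability(records):
    """Aggregate revenue and cost per route and report profit and margin."""
    totals = defaultdict(lambda: [0, 0])
    for route, revenue, cost in records:
        totals[route][0] += revenue
        totals[route][1] += cost
    report = {}
    for route, (revenue, cost) in totals.items():
        profit = revenue - cost
        report[route] = {
            "revenue": revenue,
            "cost": cost,
            "profit": profit,
            "margin": round(profit / revenue, 3) if revenue else None,
        }
    return report

if __name__ == "__main__":
    for route, figures in route_profitability(finance_extract).items():
        print(route, figures)
```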
Risk Management and Business Continuity: What Happens when the Holding Company Collapses?
An element of “good” IS/IT governance is risk management, to minimise the impact of, for example, financial, technological, security, legal, and physical risks and hazards. In the event of such risks materialising, organisations need to ensure that their staff and IT systems can carry on operations. The case of Kendell presents an interesting tale, as it raises the question of what to do when the holding company fails and a key strategic information system, in this context the computer reservation system, is shared across all business units.
Given the top-down IT governance, the ownership and operation of the corporate IT infrastructure was often decided at the Ansett corporate level. A top-down approach may work well if there is a strong governance directive at the top (corporate) level. However, once the corporate level has difficulties providing the needed directive, as is often the case when a corporate collapse is approaching, such governance approaches weaken the subsidiary’s ability to survive. For example, the continual restructuring that was occurring at Ansett created difficulties in forming and maintaining relationships with senior management. Further, if the corporate collapse leads to the liquidation of the IT resources, the subsidiary may be left with no IT infrastructure to continue operations. In the case of an airline, the closing down of the computer reservation system translates into the closing down of its business.
The fact that Kendell was able to restart its operations while Ansett remained grounded was in part due to the fact that it could run as a standalone business as long as it had a CRS in place. This was also the case for two smaller subsidiary regional airlines, Skywest and Aeropelican. Kendell had a contingency plan in place to start using the same outsourced CRS as Virgin Blue (a recent entrant into the Australian market). As it transpired, the administrators of Ansett kept the Ansett CRS running so all the subsidiaries able to fly at a profit could restart operating during the administration period.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANISATION
A deal was negotiated by a new consortium to purchase Hazelton (a smaller regional airline) and Kendell from the Ansett administrators to form a new business called Regional Express (REX). The key issue facing this new entity is how the systems and infrastructure of Kendell can be successfully disentangled from its past and become an equally vibrant player in the new setup. REX will be trying to merge four sets of IT governance values, these being: the new senior management group installed by the owners, the ex-Ansett staff who have been employed, the ex-Kendell staff, and the ex-Hazelton staff. Issues relating to IT governance re-appear, and questions as to how the new consortium may learn from its failed predecessor and implement an effective IT governance policy arise. For example, one of the key problems is which CRS will ultimately be used. This decision cannot be made until the role of each of the entities within the new consortium is defined — a “messy” practical reality involving issues of culture, power, and physical and social structures. The consolidation and streamlining of operations needs to be considered in conjunction with IT strategic decisions, as well as market, sector, and societal issues, as these are mediating factors for IS/IT governance effectiveness. Therefore, one of the challenges for REX is to ensure that it adopts a governance model that considers these issues, such as a stakeholder-type model of IS/IT governance in contrast to a control-type model. There are two reasons why such an approach may be more appropriate. First, given REX is not aiming to be a nationwide carrier but is focusing on niche regional sectors, the stakeholders are part of an alliance rather than a holding company structure, as was previously the case with Ansett. Second, given the alliance approach to REX’s formation, it is likely that there are “pockets” of existing IT infrastructure that are not from the same hardware vendors. Instead of investing heavily to formalise these pockets through a standardisation approach such as replacing hardware, it is more feasible to adopt an integrative approach that allows systems integration (at the application level) to take place; a brief sketch of this integrative approach follows at the end of this section. This allows the limited resources to be used effectively for the immediate needs. It is critical to realise that the correct approach depends in part on prevailing economic and market conditions. There is no doubt REX is trying to establish itself during a very tough period in the aviation sector worldwide. Although a stakeholder model of IT/IS governance is recommended for REX, its management is still faced with choosing a future development path. Is the stakeholder model sufficient as REX grows in size and in the scope of its endeavours? Would the stakeholder approach to IT/IS governance lead to a fragmented IT/IS governance framework being
influenced by many conflicting factors? Is there a certain “changeover” point from the stakeholder model to the control model? These are some of the issues REX management needs to investigate.
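To make the integrative, application-level option discussed above more concrete, the sketch below shows what such integration might look like in code. It is illustrative only: the record layouts, field names, and system labels are hypothetical and are not drawn from the case. The point is that a thin translation layer can let downstream applications consume data from dissimilar reservation systems without replacing the underlying hardware or platforms.

```python
# A minimal sketch of application-level integration: two hypothetical reservation
# feeds with different field layouts are normalised into one common record format,
# so downstream applications can consume either source without replacing the
# underlying systems. All field and system names are illustrative only.

from dataclasses import dataclass
from datetime import date
from typing import Iterable


@dataclass
class Booking:
    """Common booking record shared by all downstream applications."""
    passenger: str
    flight_no: str
    travel_date: date
    origin: str
    destination: str
    source_system: str


def from_legacy_crs(row: dict) -> Booking:
    # Hypothetical layout of an export from the holding company's legacy CRS.
    return Booking(
        passenger=row["PAX_NAME"].title(),
        flight_no=row["FLT"],
        travel_date=date.fromisoformat(row["DEP_DATE"]),
        origin=row["ORIG"],
        destination=row["DEST"],
        source_system="legacy_crs",
    )


def from_outsourced_crs(row: dict) -> Booking:
    # Hypothetical layout of a feed from an outsourced reservation service.
    return Booking(
        passenger=f'{row["firstName"]} {row["lastName"]}',
        flight_no=row["flightNumber"],
        travel_date=date.fromisoformat(row["departureDate"]),
        origin=row["route"]["from"],
        destination=row["route"]["to"],
        source_system="outsourced_crs",
    )


def merge_feeds(legacy: Iterable[dict], outsourced: Iterable[dict]) -> list[Booking]:
    """Combine both feeds into one list of normalised bookings."""
    return [from_legacy_crs(r) for r in legacy] + [from_outsourced_crs(r) for r in outsourced]
```

Under such an approach, adding a further alliance partner’s system means writing one more translation function rather than re-platforming the partner, which is why it suits limited budgets and mixed infrastructures.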
REFERENCES
Baillet, M. (2003, April 1). Getting the twain to meet. CIO.
Enns, H. G. (2003). CIO lateral influence behaviours: Gaining peers’ commitment to strategic information systems. MIS Quarterly, 27(1), 155-175.
Feeny, D. F., & Willcocks, L. P. (1998). Core IS capabilities for exploiting information technology. Sloan Management Review, 31(4), 121-127.
IT Governance Institute. (n.d.). Board briefing on IT governance. Retrieved April 28, 2003, from http://www.itgovernance.org
Koch, C. (2002). The powers that should be; IT decisions have to reflect the goals of the business and engage the attention of the business, often without the participation or even the interest of the business. Sound hard? It is. But here’s what you can do to make it easier. CIO, 15(23), 48-54.
Korac-Kakabadse, N., & Kakabadse, A. (2001). IS/IT governance: Need for an integrated model. Corporate Governance, 1(4), 9-11.
Reich, B. H., & Benbasat, I. (1996, March). Measuring the linkage between business and information technology objectives. MIS Quarterly, 55-81.
Ross, J. W., & Weill, P. (2002, November 1). Six IT decisions your IT people shouldn’t make. Harvard Business Review.
Sambamurthy, V., & Zmud, R. W. (1999). Arrangements for information technology governance: A theory of multiple contingencies. MIS Quarterly, 23(2), 261-290.
Sambamurthy, V., & Zmud, R. W. (2000). Research commentary: The organizing logic for an enterprise’s IT activities in the digital era — A prognosis of practice and a call for research. Information Systems Research, 11(2), 105-114.
Sohal, A. S., & Fitzpatrick, P. (2002). IT governance and management in large Australian organizations. International Journal of Production Economics, 75, 97-112.
Van der Heijden, H. (2001). Measuring IT core capabilities for electronic commerce. Journal of Information Technology, 16, 13-22.
Weill, P., & Woodham, R. (2002). Don’t just lead, govern: Implementing effective IT governance (Working Paper No. 4237-02, 1-14). MIT Sloan School of Management.
Williamson, O. E. (1999). Strategy research: Governance and competence perspectives. Strategic Management Journal, 20, 1087-1108.
ENDNOTE
1. Wagga is a regional centre in the State of New South Wales with a population of close to 60,000 people.
Simpson Poon is chair professor in information systems at Charles Sturt University, Australia. He earned his PhD from Monash University, Australia. He has published widely in the area of e-business implementations among small businesses. Prior to this, he was an associate professor and director of the Centre for E-Commerce and Internet Studies at Murdoch University, Australia, where he was involved in academic research and business consulting. His current research interests include e-marketing strategies and e-government.

Catherine Hardy is a lecturer in accounting and e-business and a previous coordinator of e-business programs. She is currently completing her PhD at the University of New South Wales, Australia. She has taught in the areas of e-business, auditing, CIS auditing, finance and international accounting. Prior to her appointment she worked in a major Australian bank for eight years, of which five were spent in the Financial Controller’s Department. Her main areas of expertise are e-business, IS management and audit.

Peter Adams is a lecturer in information technology. Prior to his appointment, he was the IT manager of Kendell Airlines. He has many years of experience in the IT industry and a background in marketing and journalism. He is currently completing a PhD, specialising in the adoption and marketing issues related to broadband Internet as a medium.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 568-583, © 2004.
Chapter XIII
Social Construction of Information Technology Supporting Work

Isabel Ramos, Universidade do Minho, Portugal
Daniel M. Berry, University of Waterloo, Canada
EXECUTIVE SUMMARY
In the beginning of 1999, the CIO of a Portuguese company in the automobile industry was debating with himself whether to abandon or to continue supporting the MIS his company had been using for years. This MIS had been supporting the company’s production processes and the procurement of resources for these processes. However, in spite of the fact that the MIS system had been deployed under the CIO’s tight control, the CIO felt strong opposition to the use of this MIS system, opposition that was preventing the MIS system from being used to its full potential. Moreover, the CIO was at a loss as to how to ensure greater compliance with his control and fuller use of the MIS system. Therefore, the CIO decided that he needed someone external to the company to help him understand the fundamental reasons, technical, social, or cultural, for the opposition to the MIS system.
THEORETICAL BASIS FOR THE STUDY
Innovative, organization-transforming software systems are introduced with the laudable goals of improving organizational efficiency and effectiveness, reducing costs, improving individual and group performance, and even enabling individuals to work to their potentials. However, it is very difficult to get these software systems to be used
successfully and effectively (Lyytinen et al., 1998; Bergman et al., 2002). Some people in some organizations resist the changes. They resist using the systems, misuse them, or reject them. As a result, the goals are not achieved, intended changes are poorly implemented, and development budgets and schedules are not respected. Misguided decisions and evaluations and less than rational behavior are often offered as the causes of these problems (Norman, 2002; Dhillon, 2004). Bergman, King, and Lyytinen (2002) observe (p. 168), “Indeed, policymakers will tend to see all problems as political, while engineers will tend to see the same problems as technical. Those on the policy side cannot see the technical implications of unresolved political issues, and those on the technical side are unaware that the political ecology is creating serious problems that will show up in the functional ecology”. They go on to say (p. 169), “We believe that one source of opposition to explicit engagement of the political side of RE [Requirements Engineering] is the sense that politics is somehow in opposition to rationality. This is a misconception of the nature and role of politics. Political action embodies a vital form of rationality that is required to reach socially important decisions in conditions of incomplete information about the relationship between actions and outcomes”. The implementation of complex systems, such as enterprise resource planning (ERP) systems, is rarely preceded by considerations about:

• The system’s degradation of the quality of the employees’ work life, by reducing job security and by increasing stress and uncertainty in pursuing task and career interests (Parker & Wall, 1998, pp. 55-70; Davison & Martinsons, 2002; Thatcher & Perrewé, 2002);
• The system’s impact on the informal communication that is responsible for friendship, trust, feeling of belonging, and self-respect (Goguen, 1994; Snizek, 1995; Piccoli & Ives, 2003);
• The power imbalances the system will cause (Bergman et al., 2002; Dhillon, 2004); and
• The employees’ loss of work and life meaning, which leads to depression and turnover (Parker & Wall, 1998, pp. 41-49; Bennett et al., 2003; Davison, 2002).
Recent work by Krumbholz et al. (2000) considers some of these issues after implementation of ERP systems. Specifically, this work investigates the impact on user acceptance of ERP-induced organizational transformation that results from a mismatch between the ERP system’s actual and perceived functionalities and the users’ requirements, including those motivated by their values and beliefs (Krumbholz et al., 2000; Krumbholz & Maiden, 2001). This case study describes an on-site examination of one particular ERP-induced organizational transformation. The prime champion of the ERP system in one company was surprised by the resistance to the system’s use shown by the employees of the company. He ended up asking for the help of the first author of this case study to understand the sources of this resistance and what to do about it. The present report is a distillation of the first author’s final report to the champion and of her PhD dissertation (Ramos, 2000). The focus of the study is on understanding the technological, social, and cultural reasons for the employees’ resistance to the ERP.
ORGANIZATIONAL BACKGROUND
The reader should consult the organizational charts shown in Figures 1 and 2 in Appendix I as he or she is reading the following narrative. With the exception of the proper name Isabel, that of the first author, none of the proper names in the narrative, including those of the company and of software products, is real. However, each proper name does refer to one real person, company, or product that participated in the events narrated. One cold morning in February 1999, Pedro was talking with a friend, Sérgio, about Pedro’s professional path as director of the Information Systems (IS) Department of ENGINECOMP, a Portuguese subsidiary of a Brazilian company in the automobile sector. Pedro was not able to decide what else could be done to ensure that MaPPAR, the information system that was currently supporting the management of production processes and the procurement of the associated resources, was used to its full potential. Pedro had started working for ENGINECOMP at its very beginning in 1992, when the Brazilian company’s administration decided to build a plant in Europe. They chose Portugal for its linguistic and cultural similarity, and for the relatively low salaries that would be paid to a young and educated workforce. See Appendix II for details about the Portuguese automotive sector and about the company. Rafael, the president of the Brazilian company, liked to have close control of the company and of all its branches. He felt especially comfortable with Pedro’s past experience and thus hired Pedro. He also admired Pedro’s creativity and close attention to details. Soon, Pedro earned the power to define the work processes, plant design, management practices, and the information systems that would support the work. The building of the plant was finished in the beginning of 1993, and the plant was already manufacturing seven months later. It produced a small but essential component for car engines. This component required a high-quality production line, since even a small variance in the required measurements meant that the product would not be accepted by the client. The plant satisfactorily supplied several of the most important automobile builders in Europe. For each car model, the engineers of the plant, assisted by the client, designed the specific dimensions and shape of the product to be delivered each time it was ordered by the client. Simultaneously, the client and the plant administration negotiated the percentage of items that the client could cancel or add to an order already in production. This percentage was very important, since it helped the client to react quickly to a sudden drop or increase in the demand for that specific car model. For the plant, this variability added considerable difficulty to production planning, but the aggressive competitiveness of the automotive sector forced acceptance of these margins. The raw material was shipped from Brazil, where the mines were located. The material travelled by sea and arrived at ENGINECOMP three months later. This delay forced careful stock management and production planning. Every time a new order arrived, ENGINECOMP’s capability to produce the required product was carefully assessed. Following this assessment, the manufacture of the required product was planned to fit the delivery dates agreed upon with the client, and production orders were drafted and delivered to the plant. The production processes were designed in accordance with international norms and the best practices of the
sector. These processes were regularly subjected to quality controls, in both external and internal audits. Management of materials acquisition for the plant machinery was also an important task, since the plant could never stop or reduce its production from what was planned. Three employees of the Logistics Department were assigned to negotiate continually with actual or potential suppliers. These employees were responsible for keeping the materials at adequate stock levels. Whenever ordered products were finished, they were packaged and stacked in the warehouse for delivery. The finished goods were then delivered to their clients by ENGINECOMP’s own trucks, by train, or by airplane. Pedro decided that the complexity of all these activities required an integrated production management system. He convinced the Brazilian administration to buy a well-known and highly utilized off-the-shelf system, MaPPAR, to manage all the production processes and associated resources. MaPPAR was configured and deployed under Pedro’s tight control, with the assistance of the directors of several of ENGINECOMP’s other departments, namely, Engineering, Quality Control, Finance, Logistics, and Warehousing. Pedro told his friend, Sérgio, about all the difficulties and successes of the first two years. Pedro was really proud of his work and of his good decisions. ENGINECOMP was considered one of the best suppliers of its products in all of Europe and was supplying the most successful automobile manufacturers. However, MaPPAR was never used to its full potential. In fact, resistance to MaPPAR’s use seemed to grow steadily since its deployment. In parallel, Pedro’s authority was being continually questioned despite his many past good decisions, his hard work, and his undeniable contributions to ENGINECOMP’s success. During 1998, a large fraction of ENGINECOMP’s shares were sold to a German company. Now, the Brazilian company owned less than 40% of ENGINECOMP’s shares and was thus losing control over what happened at ENGINECOMP. By the end of 1998, the only relics left of the Brazilian company’s administration of ENGINECOMP were framed posters in the Brazilian Portuguese language encouraging employees’ creativity and participation in decision making. The new administrator, Fritz, distributed his control and support evenly over the directors of all departments that reported to him, including the IS, Engineering, Quality Control, Finance, Logistics, and Warehousing departments. That is, Fritz treated Pedro for what Pedro really was, the director of the IS Department, one of many departments. Pedro resented this treatment, and he believed there was a close link between his loss of power and the growing opposition to the use of MaPPAR. He wanted to counter this opposition, but was having “difficulty reasoning unemotionally about the current situation and past events”. Then, Sérgio told Pedro about a researcher who might be interested and able to help Pedro analyze the situation. The researcher was Isabel, who entered the scene in early March 1999.
SETTING THE STAGE
When Pedro first talked with Isabel, he told her about an important movement within ENGINECOMP against the use of MaPPAR to support production management. MaPPAR
was very versatile and included modules that could support engineering tasks, order processing, production planning, assessment of production capabilities, production management, product shipment, management of stocks, accounts payable to suppliers, accounts receivable by clients, and finances and accounts management. While MaPPAR was a very complete and powerful system, it was very poorly used. Employees ignored or resisted using much of MaPPAR’s functionality, preferring to develop their own small systems and databases to manage only the information relevant to their daily tasks, and used the central MaPPAR database only as a source of data to feed those small systems and databases. Moreover, the employees of the plant were refusing to input timely data about the tasks they performed. They preferred to defer their inputting until the end of the week or the month, so they would not lose time during the day. Sometimes, one of the employees was freed by his colleagues from his normal duties in order to input his and his colleagues’ data. The problem with this practice was that it became virtually impossible to track down the state of the orders in production. Pedro and Isabel agreed that Isabel would do an on-site study of ENGINECOMP at work, in search of the fundamental reasons, technical, social, or cultural, for the opposition to MaPPAR and for the proliferation of small systems and databases. Isabel would study the work of two departments in ENGINECOMP: the Finance Department and the Logistics Department. They were the most influential departments of the company, since they performed activities essential to the business. The Finance Department did financial management, and the Logistics Department did customer service and production planning. Moreover, the employees of these two departments constituted a majority of MaPPAR’s users.
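The pattern just described, in which the central system is used only as a data source for locally built spreadsheets and databases, can be made concrete with a small sketch. The following Python fragment is illustrative only: the table and column names are hypothetical, and sqlite3 merely stands in for whatever relational store actually backed MaPPAR.

```python
# A minimal sketch of the "query the central database, rework the data in a
# spreadsheet" pattern described above. Table and column names are hypothetical.

import csv
import sqlite3


def export_open_orders(db_path: str, out_path: str) -> int:
    """Pull only the columns a department cares about and dump them to a CSV
    file that can be opened and reworked in a spreadsheet. Returns the row count."""
    query = """
        SELECT o.order_no, o.client, o.due_date, o.qty_ordered, o.qty_finished
        FROM production_orders AS o
        WHERE o.status = 'OPEN'
        ORDER BY o.due_date
    """
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(query).fetchall()

    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["order_no", "client", "due_date", "qty_ordered", "qty_finished"])
        writer.writerows(rows)
    return len(rows)
```

Note that the data flows only one way, out of the central system; this is precisely the disconnection between data entry and reporting that the case goes on to describe.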
CASE DESCRIPTION
Isabel spent five months at ENGINECOMP, observing and interviewing all the employees of the Finance and Logistics departments. Almost every day, Isabel spent several hours joining or observing employees performing their tasks with or without support from MaPPAR. She interviewed Carlos and Manuel, the directors of the departments. She also interviewed several middle managers responsible for key activities, including Fernando, Carlos’s closest collaborator in the Finance Department, and Roberto, Eduardo, and António, the managers of the Customer Service, Production Planning, and Purchasing divisions of the Logistics Department. Isabel also observed and interviewed Pedro and his collaborators in the IS Department. She also talked a few times with the German leader, Fritz, and was present at events such as meetings and training programs. To learn more about MaPPAR, Isabel consulted the available manuals and technical documentation. She used a demonstration version of MaPPAR to test some of its functionality on her own. Next is a department-by-department summary of Isabel’s observations.
Finance Department
The Finance Department was responsible for all financial and accounting tasks of the company. Carlos, the director of the Finance Department, had sole directorial responsibility for the department. However, he delegated supervision of the accounting
tasks to Fernando, a trusted employee with an accounting degree. Fernando had access to key information and knowledge for performing these accounting tasks. Fernando’s access and knowledge, charisma, and core skills made him a privileged ally of Carlos, the director. As mentioned, the coordination and control of the Finance Department activities were responsibilities of Carlos, the director, who kept and centralized all decision making. Carlos was a Brazilian sent to ENGINECOMP by the Brazilian company’s administration. Fritz, the new German leader, was not very comfortable with Carlos’s complete control of the Finance Department. In the Finance Department, informal communication about work tasks was discouraged. Carlos’s rule was that all communication must be well documented. The work in the department was organized into well-defined tasks connected by clearly defined processes, all of which determined the precise responsibilities of each employee. Employees only occasionally received professional training. The heavy workload left little time for such training. Moreover, Carlos believed that each employee must perform simple and repetitive tasks that can be learned by doing. The tasks were distributed among eight employees, each with limited autonomy. Carlos had his trusted assistant Fernando, with the accounting degree, supervise the daily routine. Fernando was seen as a bright and ambitious young man. He was expected more to comply with rules and procedures than to make autonomous decisions. Fernando worked hard to serve Carlos’s interests, taking advantage of what he had learned in courses leading to his degree to further those interests. Fernando’s actions and information were especially useful to Carlos, who as a Brazilian, was not as familiar with Portuguese accounting regulations as a Finance Department director should be. It is I who signs the accounts, the financial reports, and the balance sheets. It is I who signs the fiscal statements. I am the officer responsible for the company’s accounting. My accounting degree and my understanding of Portuguese law allow me to give invaluable assistance to the director. (Fernando) With the help of Fernando, Carlos asserted the strong leadership that Carlos believed was the guarantor of efficiency and motivation. Compliance with Carlos’s leadership was the main criterion by which Carlos recruited employees for the Finance Department. Interviewed Finance Department employees mentioned some competition for career advancement. Finance Department employees were rewarded for accurate completion of assigned tasks. Failure to comply with rules and procedures was punished. There was a predominant belief among the employees that autonomous and creative employees were dangerous in a finance department. This belief was supported by past events during which fraud was perpetrated. This belief was one of the most important sources of the motivation to use MaPPAR, which was seen as reinforcing established practices: I am responsible for one area. So, I work in a well-defined set of tasks and my colleagues have clearly defined tasks as well. There is no confusion. The system requires that we are specialists in our tasks ... hmm ..., and it assures that our director has access to what we do, so we are not blamed for the mistakes [errors and fraud] of others. (An Employee of the Finance Department)
There was little variation in the daily routine of the Finance Department. All needed information was in MaPPAR’s database, and that information was provided by the other departments of ENGINECOMP. Moreover, a significant part of the Finance Department’s work was the production of regular reports and other more timely, but irregular information for the other departments. The other departments of ENGINECOMP exerted the most influence on the Finance Department and served as a buffer between the Finance Department and the company’s clients and suppliers. However, the Finance Department did have some interaction with banks and other similar entities outside the company. Thus, as can be expected, the Finance Department’s workload was very stable, with almost no surprises. The Finance Department employees made heavy use of spreadsheets to produce the reports the department was required to produce for the other departments. The reasons offered for use of spreadsheets instead of MaPPAR included: The [MaPPAR] system reports are not adequate for our management needs. They present too much data, in a lot of rows, many of which are not useful. It is better to search for the relevant data using query tools and then to export the data to our spreadsheets where we can treat the data in the way that suits our objectives. (Fernando) Although the use of spreadsheets was sanctioned for the production of only specific reports, the spreadsheets were being programmed to also perform some functions that the system provided. There is no need to use the options provided in the menus of the [MaPPAR] system because we have other tools that do the same. (Another Employee of the Finance Department) Some Finance Department employees regretted the lack of training to use MaPPAR. At the time MaPPAR was installed, MaPPAR’s implementers did train the Finance Department employees that were hired specifically to operate MaPPAR. However, these employees saw the training as too brief and too compressed; they simply could not absorb all the complex information about MaPPAR’s functionality in such a short time. Hence, from the beginning, even the MaPPAR operating employees did not understand MaPPAR’s full capabilities. It was a one-shot training. I would not even call it training; it was more like a conversation with the technicians. (Still Another Employee of the Finance Department) After that initial training, employees that had used MaPPAR for a longer time were responsible for in-house MaPPAR training. Often these trainer employees guided the trainee employees through only the functionality actually being used. The trainers usually advised trainees to avoid other MaPPAR functionality, saying that it would be too complex, inadequate, or better implemented in the programmed spreadsheet. All that I know about the system is the result of my efforts to use it. But we have too many tasks to perform. There is no time left to explore the system, which has too many
complexities. And I say to my colleagues that they should not waste too much time experimenting if they are not sure of the usefulness of a menu’s options. (Fernando) The Finance Department employees saw this in-house training as a burden. Moreover, Carlos did not encourage his employees to experiment with MaPPAR themselves. The Finance Department’s history included some catastrophic mistakes, and the department rumor mill still propagated stories about the stupid mistake that Employee A or Employee B had made. To many Finance Department employees, the spreadsheet appeared much easier and less constraining than did MaPPAR. All requests for required new or enhanced features for MaPPAR were sent to the IS Department for implementation. However, many times the IS Department’s response was to deny a request because of the high costs of the new or enhanced feature. The IS Department did not want to have to make the same changes to every new version of MaPPAR that would come out in the future. When a required feature was refused by Pedro, the director of the IS Department, the requesters were able to easily find implementers among their colleagues, who had accepted the informal responsibility of programming the spreadsheets and databases. In general, Finance Department employees acknowledged the importance of MaPPAR for their department, since MaPPAR’s integrated modules automatically fed all the accounts. The institutional authority of the IS Department to define MaPPAR’s IT support was well accepted.
Logistics Department
The work of the Logistics Department was distributed among three divisions: Purchasing, Customer Service, and Production Planning. António, the manager of the Purchasing Division, Roberto, the manager of the Customer Service Division, and Eduardo, the manager of the Production Planning Division, each reported directly to Manuel, the director of the Logistics Department. Each of the three managers coordinated and controlled the tasks for which he was responsible. The three managers and one director participated jointly in defining the Logistics Department’s goals and objectives and in decision making. Professional training was being delivered to Logistics Department employees on a regular basis. However, each of the three managers believed that his skills were not being adequately used and rewarded. In particular, the Customer Service Division manager, Roberto, felt especially frustrated with this situation. Roberto was seen as a very dynamic man who learned his job quickly and tried to exceed what was expected from him. However, Roberto felt that MaPPAR was a significant barrier to his own career advancement. MaPPAR demanded too much specialization in specific tasks and made it difficult for one person to have a broad perspective of the business processes. Roberto believed that this broad perspective was central to providing effective customer service and to making good decisions. Here [at ENGINECOMP], we are evaluated by our ability to solve problems. To be able to solve problems quickly, we need to know what is going on around here. But, the
system is so unnecessarily complex, and it limits our ability to access relevant information. (Roberto) In fact, it was Roberto who took the initiative of programming the spreadsheets and databases that were being used by his colleagues throughout ENGINECOMP. He and two other employees in the Customer Service Division became the informal IS team of ENGINECOMP, always ready to implement new features adapted to the specific needs of their colleagues. Roberto’s informal, can-do attitude, his enthusiasm in searching for solutions, and his courage to directly face senior managers provided him with the admiration of his colleagues, inside and outside the Logistics Department. Roberto proudly said: At this stage of my career, I am independent of the director of the department. I report directly to the administration [i.e., Fritz]. There was a widespread belief among all ENGINECOMP’s employees that the Logistics Department created the company’s positive image among its customers and the general public. This belief was a source of negotiating power for the Logistics Department and especially for its emerging leader, Roberto. Especially during the last three months, we ... When I joined the department, we contacted one client directly, and the others were contacted through our office in Hamburg. In fact, it was not a real department but an expedition unit. With the recent closing of the office in Hamburg, all the logistics service is provided by Portugal. Of course, this gave us much more importance inside the company. (Roberto) Roberto was the manager of one of the three divisions of the Logistics Department officially directed by Manuel. Manuel and the managers of the three divisions believed that the four of them should cooperate to achieve the department’s goals. However, frequent internal conflicts emerged from the fights over the specific interests of the department’s divisions. These conflicts emerged from the need to fight for scarce resources such as budget, human resources, and technology. The conflicts also arose from the conflicting influences on work practices exerted by António, the Purchasing Division manager, and Roberto. We’re working differently now. We have three autonomous divisions. We [managers of the divisions] are responsible for ensuring that our colleagues have what they need to perform their tasks. Hmmm ... Who coordinates our activities? Well, that is a difficult question. The department’s director, I guess ... [laughter]. Well, I talk directly with the administration [Fritz]. (Roberto) The Logistics Department office space was designed as an open space into which any ENGINECOMP employee could enter to seek a problem solution or to demand a service. The Purchasing Division and the Production Planning Division managers often complained about deleterious effects and the pressure caused by the constant interruptions that the open space invited:
There is always pressure from the exterior of the department. This is a drawback of the client-supplier policy that was adopted internally, this pressure to be continuously available in open space. We are constantly being interrupted! It is not possible to focus our attention on a single subject for long and to follow logical and correct reasoning. (António) Adding to this pressure were the variable and often unexpected requests and problems posed by suppliers, clients, and ENGINECOMP’s production plant. For example, as mentioned, the quantities ordered by a customer could be changed within an agreed upon percentage after the order was placed and production had started. Logistics Department employees saw flexibility of work practices and procedures, autonomy of decision, and informal communication channels and work relations as key factors to reduce the negative impacts of these sudden changes. Moreover, they saw MaPPAR as reducing their flexibility of action and making their work less interesting. Furthermore, MaPPAR was forcing them to comply with rules and procedures that reduced their ability to fulfill the needs of ENGINECOMP’s plant and customers. Some of these Logistics Department employees referred to MaPPAR as a “sacred cow” that could not be questioned and that made them “slaves”, requiring most of their time to input data without “receiving anything in return”. These employees complained also of the poor quality of the MaPPAR reports. They had become highly suspicious of the data stored in the central database. Because people won’t be wasting their time to explore this and that. Why should they? When I want a report and this [MaPPAR] does not give me what I want ...! If only they [IS Department] made changes to the system! Meanwhile, someone [from the IS Department] decides not to make them. It is frustrating. (Eduardo) As in the Finance Department, the reports needed to support Logistics Department decisions were created in spreadsheets, using data obtained by querying the central MaPPAR database. This practice fostered a disconnection between the inputting of data and the production of reports to support decision making. The Logistics Department employees saw MaPPAR as too complex and too general to effectively support the details of the department’s activities. Logistics Department employees agreed with Finance Department employees about the lack of MaPPAR use training and the burden caused by having to train new employees in the use of MaPPAR. I find the system too confusing. It is a tool, since it is a standard in a lot of business areas. I think it does not really help the specific tasks of a specific company in a small country like ours. (António) Also, the Logistics Department employees decried the IS Department’s lack of support and understanding. In response to this complaint, the Logistics Department developed what Roberto believed was a very effective strategy: I have been doing [developing any needed functionality] by myself. When they [the IS Department] do not want to make the [required] changes to the system, I develop the
feature using the spreadsheets and databases. Nowadays, I do not even ask; I do by myself and help others with the programming. (Roberto) And he added: The lack of support from the IS Department is creating disinterest in the system. (Roberto) The removal of MaPPAR was seen as an important step to gain more control over the performed work. Employees wanted to be involved in (1) defining the work practices, (2) decision making, (3) monitoring the system usage, and (4) defining what were good and bad usages. This plant is six years old now. We have learned a lot. We want to be heard! We need a new system, but this time, we want to be involved! (Roberto)
The IS Department
As mentioned, Pedro was the director of the IS Department. He kept close at hand the responsibility to define the operational and management best practices and to ensure that the information systems were effectively supporting those practices. As also mentioned, Pedro was very proud of his professional advancement and his contributions to the success of ENGINECOMP. He considered himself responsible for the formulation of creative solutions in both work practices and information systems. He considered that implementing these solutions was a task to be performed in collaboration with the affected departments. Pedro did not consider himself to be an IS technical expert. His closest collaborator was the person who knew the systems in use in detail and did all the programming, parameterization, and user support. This collaborator, José, was a timid man with strong technical skills; he was very competent, but was hardly considered a leader of opinion and action. He [José] is very competent. I trust all technical tasks to him. (Pedro) José works hard. He always treats us right. The only thing is ... Well, he is very silent [smiling]. (Roberto)
José was well accepted by employees from the other departments. These other employees understood the difficulty José had in providing immediate support to all MaPPAR users. José’s colleagues from the other departments found José easier to approach than Pedro, and they preferred to talk with José first when a problem arose or when a change to MaPPAR was needed. However, these employees also knew that José’s actions were constrained by IS Department policies and his manager Pedro. At one point, the managers of departments other than the IS Department were informally discussing the role of the IS Department. A clear line was emerging between those that supported Pedro’s interventionist attitude and those that supported José’s helpful and cooperative attitude. Pedro was aware of this discussion but showed no resentment towards José.
I know that they [the other employees] would like to get rid of me [laughter]. I defend the company interests too much. Of course, some other people around here understand that I have dedicated my life to this company. This happens everywhere and is tough. I do not believe José could handle all this. (Pedro) Pedro saw José as a very good programmer who would never be able to carry out the negotiation and political battles inherent in the IS director’s job. Moreover, Pedro knew that José was oblivious to the discussion. Pedro understood this discussion as an expression of the growing resistance to his own actions. What Pedro really resented was that the German administration and Fritz, in particular, were listening to these dissenting voices more and more. The Brazilian administration understood. They hired me for my competence, and they saw what I did. The new administrator does not know. (Pedro)
CHALLENGES FACING THE ORGANIZATION
By the end of Isabel’s study, the company was undergoing important changes in its middle management structure. Carlos resigned when a new directorial opportunity was offered to him back home in Brazil. Roberto was becoming a key player in the growing importance of the Logistics Department within ENGINECOMP and in the good image this department projected to the company’s stakeholders. Manuel, the Logistics Department director, was losing control over departmental decisions and strategy definition. The German parent company was using a different ERP system, SAP R/3, and was now weighing the cost-benefit tradeoffs between upgrading ENGINECOMP’s MaPPAR to interface with the parent company’s system and deploying SAP R/3 at ENGINECOMP. Roberto believed that SAP R/3 should be deployed at ENGINECOMP. Roberto also believed that ENGINECOMP’s six-year use of some ERP system, albeit different from SAP R/3, gave ENGINECOMP’s middle managers the technological knowledge to actively participate in the parameterization of SAP R/3. The Finance Department did not have any preference, and its director was concerned only with the costs of the new system. Pedro was against the deployment of SAP R/3 because of the high costs of process reengineering and user training. Some were suspicious of the data stored in the central database. The reluctance to use MaPPAR and the lack of time and will to explore MaPPAR’s full functionality resulted in data being entered late or not at all, with the usual deleterious effects. For example, it was not always possible to track a client’s order when the client requested information about his or her order. There were also problems identifying what products and raw material were in stock. The growing trend to use spreadsheet programming to get around the problems caused by MaPPAR or to implement functionality perceived as not available in MaPPAR was also reducing the data’s quality. Often, results of outside data processing were not fed back into MaPPAR.
Important MaPPAR functions, such as requirements planning and capacity planning, were never utilized and were instead programmed outside MaPPAR, resulting in unnecessary maintenance costs, lack of control over planning and its results, and a severe risk of unreliable planning. Pedro’s efforts to document (1) organizational processes and resources, (2) the decision to deploy MaPPAR, (3) the MaPPAR process, (4) MaPPAR’s functionalities and upgrades, and (5) the IT structure of ENGINECOMP supported his conviction that there was no need to incur the high costs of switching to a different system, especially when the European economy, including its automotive sector, was slowing down. Pedro realized that prevailing with this view would require gaining allies within ENGINECOMP and gaining the explicit support of Fritz, a near impossibility. Pedro just did not know what else could be done to get people to see the importance of abandoning the small databases and in-house programming in favor of a fuller understanding and use of MaPPAR.
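To give a sense of what the unused planning modules were duplicating, the sketch below shows, in simplified form, the netting calculation at the heart of requirements planning: gross requirements for each period are set against stock on hand and scheduled receipts, and the shortfall becomes the quantity to plan. This is a hypothetical illustration, not MaPPAR’s actual logic, and all figures and period labels are invented.

```python
# A minimal sketch of the netting logic that a requirements-planning module
# performs, of the kind the case says was re-created in spreadsheets outside
# MaPPAR. All figures are hypothetical.

def net_requirements(gross: list[int], on_hand: int, scheduled_receipts: list[int]) -> list[int]:
    """For each period, net the gross requirement against projected stock and
    return the quantity that must be planned for production (or purchase)."""
    planned = []
    stock = on_hand
    for need, receipt in zip(gross, scheduled_receipts):
        stock += receipt
        shortfall = max(0, need - stock)
        planned.append(shortfall)
        stock = max(0, stock - need)  # stock left after meeting the period's need
    return planned


# Example: weekly gross requirements for one engine component, 400 units on hand
# and one scheduled receipt of 300 units in week 2.
print(net_requirements([250, 300, 350, 300], on_hand=400, scheduled_receipts=[0, 300, 0, 0]))
# -> [0, 0, 200, 300]
```

When this kind of logic lives only in departmental spreadsheets, the planning results never feed back into the central system, which is exactly the maintenance and reliability risk the paragraph above describes.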
REFERENCES
Bennett, J., Stepina, L. P., & Boyle, R. J. (2003). Turnover of information technology workers: Examining empirically the influence of attitudes, job characteristics and external markets. Journal of Management Information Systems, 19(3), 231-250.
Bergman, M. B., King, J. L., & Lyytinen, K. (2002). Large-scale requirements analysis revisited: The need for understanding the political ecology of requirements engineering. Requirements Engineering Journal, 7(3), 152-171.
Davison, R. (2002). Cultural complications of ERP: Valuable lessons learned from implementation experiences in parts of the world with different cultural heritages. Communications of the ACM, 45(7), 107-111.
Davison, R., & Martinsons, M. G. (2002). Empowerment or enslavement? A case of process-based organisational change in Hong Kong. Information Technology & People, 15(1), 42-59.
Dhillon, G. (2004). Dimensions of power and IS implementation. Information & Management, 41(5), 635-644.
Goguen, J. A. (1994). Requirements engineering as the reconciliation of technical and social issues. In J. A. Goguen & M. Jirotka (Eds.), Requirements engineering: Social and technical issues (pp. 165-199). London: Academic Press.
Krumbholz, M., Galliers, J., Coulianos, N., & Maiden, N. A. M. (2000). Implementing enterprise resource planning packages in different corporate and national cultures. Journal of Information Technology, 15, 267-279.
Krumbholz, M., & Maiden, N. A. M. (2001). The implementing of ERP packages in different organisational and national cultures. Information Systems Journal, 26(3), 185-204.
Lyytinen, K., Mathiassen, L., & Ropponen, J. (1998). Attention shaping and software risk: A categorical analysis of four classical risk management approaches. Information Systems Research, 9(3), 233-255.
Norman, D. A. (2002). Emotion and design: Attractive things work better. Interactions, 9(4), 36-42.
Parker, S., & Wall, T. (1998). Job and work design: Organizing work to promote well-being and effectiveness. Thousand Oaks, CA: Sage Publications.
Piccoli, G., & Ives, B. (2003). Trust and the unintended effects of behavior control in virtual teams. MIS Quarterly, 27(3), 365-395.
Ramos, I. M. P. (2000). Aplicações das Tecnologias de Informação que suportam as dimensões estrutural, social, política e simbólica do trabalho. Doctoral dissertation, Departamento de Informática, Universidade do Minho, Guimarães, Portugal.
Snizek, W. E. (1995). Virtual offices: Some neglected considerations. Communications of the ACM, 38, 15-17.
Thatcher, J. B., & Perrewé, P. L. (2002). An empirical examination of individual traits as antecedents to computer anxiety and computer self-efficacy. MIS Quarterly, 26(4), 381-396.
Appendix I

Figure 1. Organizational chart until 1999 (Administration: Brazilian leader Rafael, 1993-1998; German leader Fritz, 1998 onwards)

Administration
• IS Department: Director Pedro; closest collaborator José; 4 other employees
• Finance Department: Director Carlos; closest collaborator Fernando; 8 other employees
• Logistics Department: Director Manuel
  • Customer Service Division: Manager Roberto; 4 other employees
  • Production Planning Division: Manager Eduardo; 1 other employee
  • Purchasing Division: Manager António; 2 other employees
• Other Departments
Figure 2. Organizational chart after 1999

Administration: Fritz
• IS Department: Director José; 4 other employees
• Finance Department: Director Klaus; closest collaborator Fernando; 8 other employees
• Logistics Department: Director Roberto
  • Customer Service Division: Manager Roberto; 4 other employees
  • Production Planning Division: Manager Eduardo; 1 other employee
  • Purchasing Division: Manager António; 2 other employees
• Other Departments
Appendix II. The Automotive Sector in Portugal
The automotive industry in Portugal generates, per year, more than 6.6 billion Euros, of which 4.1 billion are in automobile components. It currently employs more than 45,000 workers. Investment in the automotive component industry continues to attract a large number of investors and is strongly supported by both Portuguese Government and European Union funds. The main areas of automotive production in Portugal include electronics, die castings, plastic parts, seats, and climate control systems. Manufacturers, including Volkswagen, Mitsubishi, Opel, Toyota, and Citroen, assemble more than 240,000 cars per year in Portugal. Portugal and Spain together make up the third largest car producing region in Europe. More than 80% of the vehicles produced in Portugal are exported to other European countries. Portugal’s automotive component industry, comprising 160 companies, focuses on engines, engine components, moulds, tools, and other small parts.

Number of companies: 160
Directly employed staff: 37,500
Turnover (billion Euro): 4.112
Exports (billion Euro): 2.642

Source: AFIA (2002) (http://www.afia-afia.pt/)

Components Industry Evolution (figures in million Euro)

Year | National Market | Exports | Turnover
1986 | 200   | 224   | 424
1990 | 329   | 798   | 1,127
1994 | 434   | 1,786 | 2,220
1998 | 1,352 | 2,319 | 3,671
2003 | 1,460 | 2,834 | 4,294

Source: AFIA (2002) (http://www.afia-afia.pt/)
ENGINECOMP: Organizational Units, Business Vision & Mission, International Norms Adopted
The company’s headquarters are in Brazil. The company has plants in Brazil, Portugal, and Argentina. It has commercial offices in Germany, the United States, Uruguay, and Ireland.
ENGINECOMP
Vision: To be acknowledged worldwide as a competitive, high technology manufacturer that respects the environment.
Mission: To be the principal producer of the XYZ car engine component for the European market, aiming at complete client satisfaction; to achieve a high return on its invested capital.
International Norms Adopted: QS 9000 / ISO 9001, VDA 6.1, BS 7750

Number of employees: 800
Turnover (million Euro per year): 30
Exports (million Euro per year): 20
Isabel Ramos earned her PhD in information technologies and systems from the University of Minho (2001). She is currently an assistant professor in the Information Systems Department, University of Minho, Portugal. She leads a research group in knowledge management. She also conducts research in requirements engineering. She is an associate editor of the new International Journal of Technology and Human Interaction. Other research and teaching interests include organizational theory, sociology of knowledge, history of science, and research methodology. She has authored several scientific papers presented at international conferences and workshops, and published in conference and workshop proceedings and journals.

Daniel M. Berry earned his PhD in computer science from Brown University (1974). He was on the Faculty of the Computer Science Department at the University of California, Los Angeles (1972-1987). He was with the Computer Science Faculty at the Technion, Israel (1987-1999). From 1990 to 1994, he worked for half of each year at the Software Engineering Institute at Carnegie Mellon University (USA), where he was part of a group that built CMU’s Master of Software Engineering program. During the 1998-1999 academic year, he visited the Computer Systems Group at the University of Waterloo in Waterloo, Ontario, Canada. In 1999, Dr. Berry moved to the School of Computer Science at the University of Waterloo. His current research interests are software engineering in general, and requirements engineering and electronic publishing. He has supervised 21 PhDs, numerous master’s students, and has received a Noted Instructor Award in Computer Science at the Technion. He has consulted extensively in industry. He has served as associate editor for two journals and has been a referee for numerous journals. He has chaired and participated on program committees for many conferences and workshops. He has published extensively in refereed journals and contributed to many refereed conferences, symposia, and books.

This case was previously published in the Journal of Cases on Information Technology, 7(3), pp. 117, © 2005.
Chapter XIV
Cross-Cultural Implementation of Information System

Wai K. Law, University of Guam, Guam
Karri Perez, University of Guam, Guam
EXECUTIVE SUMMARY
GHI, an international service conglomerate, recently acquired a new subsidiary in an Asian country. A new information system was planned to facilitate the re-branding of the subsidiary. The project was outsourced to an application service provider through a consultant. A functional manager from another subsidiary in the country was assigned to assist in the development of specifications. The customized information system passed numerous benchmarking tests, and was ready for implementation. At that point, it was discovered that the native users at the rural location of the new subsidiary could not comprehend any of the user interfaces programmed in the English language. A depressed local management team, with a depleted technology budget, must reinvent all operating procedures dependent on the new information system.
ORGANIZATIONAL BACKGROUND
GHI was an international service conglomerate operating globally in more than 500 locations with established clienteles in high-value-added markets. As a member of a highly competitive industry, the business found success in carefully nurtured brand images of quality and consistent services. GHI has been a strong leader in its industry and earned a respectable ranking among the Fortune 1,000 companies. In recent years, GHI has grown its annual revenue substantially beyond the billion-dollar mark. As part Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
of an ongoing strategy, GHI aggressively sought opportunities to improve performance of undervalued business operations, through the introduction of better and more efficient management techniques and practices, including the injection of capital through ownership and contracts. GHI valued its workforce of more than 100,000 as a key asset in retaining loyalty from corporate clients as well as individual customers. The corporate values of GHI emphasized a passion for excellence; promoted cultural diversity; encouraged innovation and change; and expressed a full commitment to serving the communities where GHI maintained business interests. Recent GHI business strategy included a goal to improve operating efficiency through the increased use of technology. A small corporate IT team coordinated numerous projects to deploy technology throughout its global locations. GHI relied heavily on technology providers outside of the company to construct information systems (Best, 1997), with developmental projects to integrate systems from the various vendors. A challenge was to centrally control the flow of information through a uniform technology platform, a practice with significant benefits as GHI attempted to provide consistent services to its customers, who were rapidly entering global market locations. GHI insisted on the implementation of corporate systems to collect and analyze transaction data and customer demographics. This centralized reporting requirement allowed GHI to detect trends, enabling the corporate office to adjust its global strategies according to dynamic economic conditions and to identify potential problems that required corporate attention. For example, capital budgets could be planned around common capital improvement needs. At the same time, the company realized cost reductions through purchasing economies in business services, supplies, capital goods, telecommunication services, and technology. A senior IT official emphasized the benefits of removing technology decisions from the local level, eliminating the duplication of effort in acquiring and deploying packaged software. Standardization also reduced the challenges of data communication through an assortment of technology platforms (Aggarwal, 1993). GHI also pursued a strategic goal to expand e-business applications to improve sales and services. In 2002, a Web-based Customer Relationship Management (CRM) system was deployed to formalize the delivery of customer services. In the past, individual locations devised their own systems to manage customer service problems, often as scribbled notes that trickled informally through the corporate communication channels. The new system provided a tool to spot problem areas and a way to expedite problem resolution while customers were being served. The managers gained real-time visibility into service activities, while the corporation could use the tool to promote and enforce uniform customer satisfaction levels across its many locations. The system was also instrumental in employee training and in empowering front-line service personnel to mitigate service deficiencies. CRM enabled GHI to provide fast, convenient, dependable, and consistent service to its customers. The systems were tested in locations in North America with satisfactory results. CRM was also employed to enhance sales and marketing (O'Brien, 2002). GHI anticipated growth in demand for its quality services, especially from its loyal customers.
The integration of its regional proprietary customer databases represented an effort to enhance the consistent delivery of services to its customers throughout the world. Centrally established information standards facilitated the sharing of data and promoted uniform practices of data analysis and data-supported decision making. The
new information systems provided a personal touch, targeting the needs of the customers at the right time, with the right offer. At the same time, GHI hoped to increase sales per customer through selling additional products and services to existing customers, improving revenue, and creating additional marketing opportunities. The elaborate data analysis and environmental scanning identified new business opportunities and helped to maximize the utilization of the highly perishable service capacities. The information system was also a valuable tool for inducing desirable behavioral change, especially in newly established subsidiaries affected by organizational changes (Parker, 1996). GHI anticipated the need for the powerful global information network to maintain corporate standards in its expanding network of service sites (Korac-Kakabadse & Kouzmin, 1999). The disciplined use of data in decision making also opened opportunities for GHI to market its management services.
SETTING THE STAGE
The weakened North American and European economies, combined with political and economic instabilities in South America, prompted strategic adjustment with increasing attention to emerging markets. The recent economic turmoil in the Asia Pacific region presented numerous attractive investment opportunities. Businesses severely affected by the economic downturn began to seek external assistance to improve business value. In 2001, GHI acquired interest in a major business in an Asian country as its subsidiary. There were over 1,500 workers employed by the organization at the time of the acquisition. The new subsidiary had historically focused on the domestic market, with established clienteles. The subsidiary had been highly regarded by business analysts and the local community. The GHI management planned to convert the new acquisition into one of its leading subsidiaries in Asia. As a gesture of cultural sensitivity, the local management team and the entire workforce were retained in employment as part of the acquisition agreement to smooth the transition. A transitional management team was sent, including trainers experienced in cultural diversity, differences in language, customs, rituals, and ceremonies (Bjerke, 1999). The GHI Asian regional office was tasked with re-branding the new subsidiary to attract a more diversified customer base. The regional management was confident about the assignment, with successful experience operating other subsidiaries in the country. A one-year transitional period was planned for the re-branding process, which included extensive training of the workforce and indoctrination in the new corporate values. A well-designed information system could assist the reshaping of the work behavior of the workforce (Davenport, 1997), and the new location had to comply with the corporate expectations of central data reporting, a uniform software platform, and data sharing with other locations. An information system was already in use at the new subsidiary. The information system was developed locally through an in-house IS staff with contracted services from local technical vendors. The local information system was providing adequate support for client and human resource information, and for retail transaction support. The local system lacked features for customer service support, central reporting, and data sharing, but it handled transactions for a substantial retail business, which was a new business experience for the corporation.
The management had to decide between developing the information system locally and bringing in a brand new information system. In consideration of a tight budget, a short conversion deadline, and the central preference for a standardized system, the regional management decided to implement a new turn-key system and leave the existing system in place until the delivery of the new information system. The logical choice was to bring in a version of the information system already implemented in the other locations (Gwynne, 2001). However, the simple importation of technical solutions designed under different cultural contexts could be a fruitless effort (Al-Abdul-Gader, 1999). Precautions were taken by hiring a consultant to monitor the project and assigning an experienced user of the corporate system to serve as the user liaison officer (Oz, 1994).
CASE DESCRIPTION
With successful information systems at other subsidiaries in the country, the regional management expected a routine system development process, especially with opportunities to learn from the system usage experience of many subsidiaries in other Asian cities. An IT consultant was hired to manage the system development project and bid out the project to a contractor. The fact that the consultant had recently completed an information system project for GHI could add weight to the selection process. A functional manager from a major subsidiary in the same country was relocated to the project site to serve as the user liaison, which allowed benchmarking against a successfully implemented information system (Parker, 1996). The transferred employee was supposed to contribute first-hand experience as a user and trainer on a similar information system successfully implemented in a major international city. A 12-month target was set for the completion of the new information system. Evidently, the local management and IS staff were not consulted concerning the new information system (Sahraoui, 2003). It was an acceptable practice since the local culture discouraged the questioning of decisions by superior officers (Gannon, 2001). Although the local technical vendors were contacted, the consultant selected an application service provider (ASP) from another country. The consultant did not realize that the failure to involve the local technical vendors was a cultural blunder that would greatly diminish the chance of gaining their assistance at a later stage (Gannon, 2001). It was agreed that, as a condition for a low bid on the project, the ASP would be allowed to develop the new system off-site and would be relieved of the responsibility for local language needs, including a user manual in the local language. The consultant promised to handle the translation work for the user manual. The consultant was to provide system specifications, including user interface requirements. The consultant followed specifications of similar corporate systems implemented in other global locations, and relied on the user liaison for specifications on appropriate local adjustments. It turned out that both the consultant and the ASP had limited experience working in the Asian country. The consultant stayed at the project site, but seemed to interact mainly with the user liaison. Since very few of the employees at the new subsidiary understood English, the consultant relied on the user liaison to review prototypes throughout the various stages of the system development process. Besides the language and cultural barriers, the tight project schedule gave the consultant more reason to limit personal contact with the locals. As it turned out, the local people had a long tradition
of demanding the establishment of a good personal relationship prior to a cooperative working relationship. Figure 1 illustrates the special arrangement under which the ASP was allowed to work exclusively with the consultant and the user liaison, due to the cultural and language barrier. The ASP was from a nation less recognized for technology achievement. It would have been difficult for the locals to accept the technical merits of the ASP, and the language barrier would have created immense problems during system analysis.

Figure 1. Special arrangement for system development at the new subsidiary

The ASP
was relieved to be able to omit on-site interviews with the local end-users. As a result, the ASP never visited the new subsidiary until the delivery of the new information system. The savings in travel expenses were also substantial due to the rural setting of the project site. Project cost was the primary concern for GHI, and the people and the location of programming the new information system were not considered important. The coding of the information system proceeded smoothly and on time. As the information system delivery date approached, the ASP contacted the management at the subsidiary to schedule a training workshop for the employees of the organization. The service contract stipulated that the ASP was to provide a system manual, and to provide initial training for system usage at the time of system delivery. It was at that point that the ASP realized that the local end-users could not understand the user interface programmed in English, the international language (Wallraff, 2000) chosen for the new information system. To make it worse, no one at the subsidiary could comprehend the system manual written in English (Regan & O'Connor, 1994). The ASP was greatly puzzled since the user liaison had approved the new system. When confronted, the user liaison explained to the ASP, in fluent English, that English was the primary language used at his subsidiary since a majority of their customers at the branch spoke English. As a result, their employees had to possess strong English language skills. He assumed that English would also be the primary language after the re-branding process by GHI, which was well established in the western world. He thought his temporary assignment to the project was to facilitate the implementation of a similar information system. The newly coded system would have worked fine at the subsidiary he came from. The consultant was surprised too, since English was the corporate language and the standard language used in the corporate information system. The consultant had relied solely on the input of the user liaison to finalize specifications, and did not bother to verify local user environments. He knew that the corporation was bringing another team to train the local employees, and assumed language training would be part of the re-branding process. He saw his role as certifying the quality of services from the ASP, and the user liaison was supposed to represent the needs of the user (Oz, 1994). To the best knowledge of the consultant, English had been the standard language used in the information systems of GHI. There was no indication that GHI would deviate from the western management protocols in this new subsidiary (Ein-Dor, Segev, & Orgad, 1993). The language and cultural issues they tried very hard to avoid finally surfaced as a huge issue (Bischoff & Alexander, 1997; Gannon, 2001). The ASP reminded the consultant of the promise to obtain a local translation of the user manual. The consultant turned to the local training manager for assistance with two copies of the English manual. The consultant was shocked when the request to translate the system manual into the native language was declined. The consultant was told that it was impossible to find anyone locally to translate the highly technical terms in the manual, especially on short notice. The local technical vendors were not interested in the translation work, possibly due to feelings of humiliation for being excluded from the system development.
Other local language translators claimed to lack knowledge of the highly specialized technical terms in the manual. The effort to construct a user interface in the local language was also unsuccessful. The consultant was told that the new information system could not support the native language and culture (Kersten, Kersten, & Rakowski, 2002). Similarly, the request to add native translation to the user screen was rejected. Some claimed that a new system in the
native language would be needed. Other reasons included the conflicting layout formats between the native language and English, data representation and storage problems, translation problems for data in the native language for sharing with other locations, and service support information linked from other locations. There was little hope of soliciting local support to salvage the new information system. Unwilling to breach the contractual terms, the ASP went ahead with the system delivery and presentation of the new information system, in English. During the initial system training session, a native-speaking audience sat silently throughout the presentation of the new information system. No one could comprehend any of the PowerPoint slides prepared in English. The audience felt humiliated when they found out that the ASP was from a nation less known for technical achievement, while they had developed a fully functional information system locally. They were further humiliated when the ASP failed to demonstrate any knowledge of local customs and the role of the information system in providing services to the customers. Anger flared and panic set in (Gallois & Callan, 1997). The local management was upset at the hastily delivered training in a foreign language. The failure to seek consensus to resolve this difficult problem was unacceptable in the local culture. Handicapped by the language barrier, the ASP could not explain the internal coding of the system. There was little assistance that the consultant, the user liaison, or the in-house computer group could offer without immediate knowledge of the computer programs. The consultant was from a small nation and, having failed to build relationships with the locals, had done little to earn their trust and cooperation (Gannon, 2001). The consultant was not familiar with information processing in the native language. The user liaison would soon return to his prior post. The local IS staff had little experience with the corporate information system in the English language. The local management recognized that there was no money to fix the problem. The ASP insisted that a functional information system had been delivered according to the contractual terms. After spending close to a million dollars on the new information system, the local management and IS staff suddenly realized that they had inherited a customized information system that was completely inappropriate for the user environment. There was little hope that the ASP would provide any further assistance to make the information system usable. Most of the employees at the subsidiary had limited experience with the western culture, and it was doubtful that even a small fraction of them would ever learn English as a second language. On the other hand, it would be extremely difficult to replace the current workforce with English-speaking workers from outside the rural area. Most importantly, even if the corporate management intended to internationalize the operation of the subsidiary, a majority of the customers would be natives of countries with a poor command of the English language. To make matters worse, the corporate office expected customer and operation data from the local management through the new information system, and the usage of the old system was disallowed with the delivery of the new information system, especially with the many new operational practices.
Many locals wondered about the wisdom of hiring, from a remote land, an ASP that could not even "explain" how the system worked, when there were capable native-speaking contractors who must now be recruited to fix the information system!
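The data representation and storage objections raised by the local vendors are a recognizable internationalization problem. The following minimal sketch is illustrative only and not taken from the case: the sample string is hypothetical Thai text and the legacy Latin-1 assumption is ours. It shows how a system built around a single-byte, English-oriented encoding fails as soon as native-language data must be stored:

```python
# Illustrative sketch (not part of the original case): why native-language data
# breaks a system that assumes a single-byte Western encoding.
native_label = "ใบเสร็จรับเงิน"   # hypothetical native-language label, here Thai for "receipt"

# A legacy component that assumes Latin-1 (one byte per character) cannot even
# encode the label, let alone store, sort, or print it on an invoice.
try:
    native_label.encode("latin-1")
except UnicodeEncodeError as err:
    print("Latin-1 storage fails:", err)

# A Unicode encoding such as UTF-8 can represent the label, but each character
# may occupy several bytes, so columns and screen layouts sized for English
# text can overflow -- one concrete form of the "storage problems" cited above.
encoded = native_label.encode("utf-8")
print(len(native_label), "characters ->", len(encoded), "bytes in UTF-8")
```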
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Depressed, the local management realized that it would be difficult to request additional budget to correct the information system problem. The different layouts and orientations of the native language compared to the English language made it difficult to perform a simple language conversion of the user interfaces. Neither the ASP nor the IT consultant had the required language skills to construct a user interface in the native language. Besides, the ASP was not interested in building an information system or system components in the native language. The local technical vendors that created the previous information system were not eager to work on a user interface for the new system. The local technical staff, although keeping their jobs in the subsidiary, had little to contribute to the project. The user liaison had returned to his prior position after the delivery of the ill-fated information system. The local management was left to make the best use of the new information system. The immediate need was to devise methods for the workers to use the information system on a daily basis. Since all sale transactions had to be processed through the new information system, workers were provided with cue cards for the interface screens, with corresponding translations in the native language. The process was slow and tedious. It was more challenging when handling customer payments. It was culturally desirable to show a customer an itemized bill with a full explanation of all charges. This way, the customer could verify or question charges before making payment. The new information system printed all the invoices in English, which was not comprehensible to most workers and customers. Workers had to use dictionaries and politely explain all charges to each customer in the native language. The time-consuming activity tested the patience of everyone! The management did not return to using the old information system despite all the difficulties with the new system. In the meantime, GHI announced additional Phase II business expansions that were to be completed in the next 12 months. The new information system was expected to support these new business developments. One year later, many local managers had departed from the subsidiary for other career opportunities. These were unusual practices in the local culture, in which long-term loyalty to an organization would be expected and rewarded. Although few would comment on their career changes, most of these people had expressed frustration towards the thoughtless and unconcerned manner of the corporation since the delivery of the new information system (Korac-Kakabadse & Kouzmin, 1999). The cultural conflicts triggered by the handling of the new information system continued to manifest themselves and erode the organizational cohesiveness. The local management, realizing that the relationships with the local technical vendors had been damaged, was able to acquire a native language user interface, mainly to support the retail transactions. The new user interface utilized Web-based technology, including the use of ASP.NET (a simplified sketch of this kind of locale-resource lookup appears at the end of this section). A native IT solution provider from another city was hired to provide periodic technical support for the user interface. With the new user interface, the new information system seemed to be providing adequate support for the retail operations. Despite the intended design of the new system to support data sharing with other locations, the new system remained disconnected from the corporate system.
Thus, the information system had fallen short of its initial goals to support central data reporting,
a uniform software platform, and data sharing for supporting sales and service developments. This should not have been a surprise to the management of GHI since it has been estimated that only about 10% of the businesses implementing Web content management systems have successfully delivered content to their global users (Voelker, 2004). The local managers would not comment on how they handled customer satisfaction with the new information system, but they acknowledged that they had abandoned the plan to use the new system to support the new Phase II business developments that would be completed in the near future. On the other hand, the corporate office seemed to be satisfied with the implementation of the native language interface, and continued to retain the services of the IT consultant!
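The cue cards and the later ASP.NET front end both amount to the same remedy: screen and invoice text must be looked up per locale rather than hard-coded in English. The sketch below is a simplified illustration under our own assumptions; the keys, the placeholder translations, and the "xx" locale code are hypothetical, not the actual GHI or vendor implementation.

```python
# Simplified locale-resource lookup, illustrating the idea behind the
# native-language interface described above. All keys and translations are
# hypothetical placeholders.
UI_RESOURCES = {
    "en": {
        "invoice.title": "Itemized Invoice",
        "invoice.total": "Total due",
        "button.submit": "Submit payment",
    },
    "xx": {  # "xx" stands in for the unnamed native language of the case
        "invoice.title": "<native-language invoice title>",
        "invoice.total": "<native-language total label>",
        "button.submit": "<native-language submit label>",
    },
}

def label(key: str, locale: str = "en") -> str:
    """Return the screen text for `key`, falling back to English if missing."""
    return UI_RESOURCES.get(locale, {}).get(key, UI_RESOURCES["en"][key])

# The same screen rendered for head-office staff and for the rural workforce.
for locale in ("en", "xx"):
    print(locale, "->", label("invoice.title", locale))
```

Externalizing the strings this way is what allows a native-language interface to be layered on later without recoding the underlying transaction logic, which is essentially the route the Web-based retail front end took.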
REFERENCES
Aggarwal, A. (1993). An end user computing view: Present and future. Journal of End User Computing, 5(4), 29-33.
Al-Abdul-Gader, A. H. (1999). Managing computer based information systems in developing countries: A cultural perspective. Hershey, PA: Idea Group Publishing.
Best, J. D. (1997). The digital organization. New York: John Wiley & Sons.
Bischoff, J., & Alexander, T. (1997). Data warehouse: Practical advice from the expert. Upper Saddle River, NJ: Prentice Hall.
Bjerke, B. (1999). Business leadership and culture: National management styles in the global economy. Cheltenham, UK: Edward Elgar.
Davenport, T. (1997). Information ecology: Mastering the information and knowledge environment. New York: Oxford University Press.
Ein-Dor, P., Segev, E., & Orgad, M. (1993). The effect of national culture on IS: Implications for international information systems. Journal of Global Information Management, 1(1), 33-45.
Gallois, C., & Callan, V. J. (1997). Communication and culture: A guide for practice. Chichester, UK: John Wiley & Sons.
Gannon, M. J. (2001). Understanding global cultures: Metaphorical journeys through 23 nations. Thousand Oaks, CA: Sage Publications.
Gwynne, P. (2001). Information systems go global. MIT Sloan Management Review, 42(4), 14.
Kersten, G. E., Kersten, M. A., & Rakowski, W. M. (2002). Software and culture: Beyond the internationalization of the interface. Journal of Global Information Management, 10(4).
Korac-Kakabadse, N., & Kouzmin, A. (1999). Designing for cultural diversity in an IT and globalizing milieu: Some real leadership dilemmas for the new millennium. The Journal of Management Development, 18(3), 291.
O'Brien, J. A. (2002). Management information systems: Managing information technology in the e-business enterprise. New York: McGraw-Hill.
Oz, E. (1994). When professional standards are lax: The CONFIRM failure and its lesson. Communications of the ACM, 37(10), 29-37.
Parker, M. (1996). Strategic transformation and information technology: Paradigms for performing while transforming. NJ: Prentice Hall.
Regan, E. A., & O'Connor, B. N. (1994). End-user information systems: Perspectives for managers and information systems professionals. New York: Macmillan Publishing.
Sahraoui, S. (2003). Editorial preface: Towards an alternative approach to IT planning in nonwestern environments. Journal of Global Information Technology, 6(1), 1-7.
Voelker, M. (2004). Web content management: Think globally, act locally. Transform Magazine, 13(4), 16-21.
Wallraff, B. (2000). What global language? The Atlantic Monthly, 286(5), 52-66.
Wai K. Law is an associate professor at the School of Business and Public Administration, University of Guam. He received his PhD in strategy & policy and MS in computer science, both from Michigan State University. Dr. Law's research interests are in the areas of strategic information systems, public sector information management, information resources development, and information technology education. Karri Perez has a PhD in human and organizational development from the Fielding Graduate Institute. Her research interests include human resources management, intercultural communications, and workplace adaptation. Dr. Perez is vice president of human resources at a financial institution, and has also served as an adjunct professor in human resources at the University of Guam and the University of Phoenix.
This case was previously published in the Journal of Cases on Information Technology, 7(2), pp. 121-130, © 2005.
Chapter XV
DSS for Strategic Decision Making

Sherif Kamel, American University in Cairo, Egypt
EXECUTIVE SUMMARY
This chapter describes and analyzes the experience of the Egyptian government in spreading the awareness of information technology and its use in managing socio-economic development through building multiple information handling and decision support systems in messy, turbulent and changing environments. The successes over the past 10 years in developing, implementing and sustaining state-of-the-art decision support systems for central governmental decision making hold many lessons for the implementation of sophisticated systems under conditions of extreme difficulty. The experience offers insight into a variety of problems for designers, implementers, users and researchers of information and decision support systems. The chapter focuses on two main themes: the use of information in development planning and the use of decision support systems theory and applications in public administration. These themes reflect the government's attempts to optimize the use of information technology to boost socio-economic development, which witnessed the initiation, development and implementation of a supply-push strategy to improve Egypt's managerial, technological and administrative development. The chapter demonstrates how the nature of decision making at the Cabinet and the information needs related to such a strategic level necessitated the establishment of an information vehicle to respond to the decision-making requirements. Finally, the chapter provides some case analysis showing the implementation and institutionalization of large information and decision support systems in Egypt, their use in unconventional settings and their implications for the decision-making process in public administration.
BACKGROUND
The importance of information technology has been greatly emphasized in most developing countries (Goodman, 1991; Lind, 1991) where the government has played a vital role in its diffusion (Moussa & Schware, 1992). These governments, through their policies, laws and regulations, still exert the largest influence throughout various organizations and entities (Nidumolu & Goodman, 1993). Recently, the extensive benefits of information collection, analysis and dissemination, supported by computer-based technologies, have been sought to enable decision makers and development planners to accelerate their socio-economic development programs. Thus, many developing countries have been embarking on medium and large scale information technology and computerization projects. In practice, most of these projects have sought to introduce computer technologies to realize socio-economic development. However, these projects frequently concentrated more on large scale capital expenditures than on human capital investment such as training and human resource development (UNESCO, 1989), and therefore failed to achieve their goals, resulting in a generally negative conventional wisdom which defined information technology as inappropriate to developing countries. Consequently, developing countries, gaining from the experiences of the past, have been extensively investing in training, consultancy and the establishment of a strong and efficient technological infrastructure that could move them into a state of self-sufficiency and help build an information infrastructure that could help boost their socio-economic development efforts. However, to realize concrete benefits from the implementation of information technology, there was an ultimate need to apply the appropriate technology that fits the country's values, social conditions and cultural aspects, as well as the identification of information technology needs and the related policies and regulations that could provide the proper environment for its implementation. Realizing the enormous impact of information technology and its important role in socio-economic development, the government of Egypt has been striving to implement a nation-wide strategy to support the realization of its targeted objectives. Therefore, beginning in the mid-1980s, it adopted a supply-push strategy to improve Egypt's managerial and technological infrastructure. The objective was to introduce and diffuse information technology into all ministries, governorates, and government organizations, which necessitated the development of an infrastructure for informatics and decision support, a software service industry and a high-tech industrial base in the areas of electronics, computers and communications. Consequently, the government, late in 1985, established the Cabinet Information and Decision Support Center (IDSC) to support the Cabinet and top policymakers in key socio-economic issues through the formulation of information and decision support projects, which reached 600 in 1996.
SETTING THE STAGE
Decision support systems (DSS) imply the use of computers to assist managers in their decision processes in semi- and ill-structured tasks, support rather than replace managerial judgment, and improve the effectiveness of decision making rather than its efficiency (Keen & Scott Morton, 1978). Decision support systems were mainly developed and applied in profit-oriented organizations which are managed through market
constraints and trends. However, the IDSC experience suggests new areas of applications for decision support systems which are based on developmental objectives for socio-economic improvement, governed by country-wide laws and regulations and regarded as systems which ought to fit within developmental contexts, policy decision making and supporting management problem solving. While there are examples of successful decision support systems used for strategic decision making by top management in such decision contexts as mergers and acquisitions, plant location and capital expenditures, these systems tend to focus on limited and well-structured phases of specific decisions. However, when supporting the comprehensive strategic decision-making process over a longer span of time with competing and changing strategic and socio-economic development issues, multiple decisions and changing participants, much less progress has been made. A large part of the challenge comes from the messy, complex nature of the strategic decision-making process and the related issues that it brings to the design, development and implementation of decision support systems. This could be attributed to the nature of strategic decision making, which is usually murky, ill-structured and drawn out over a long period of time while requiring rapid response capabilities in crisis situations (El Sherif & El Sawy, 1988). It is usually a group rather than an individual effort involving cooperative problem solving, crisis management, consensus building and conflict resolution (Gray, 1988). It involves multiple stakeholders with different assumptions (Mason & Mitroff, 1981). The information used is mostly qualitative, verbal and poorly recorded (El Sherif & El Sawy, 1988), and its unlimitedness causes not only an information overload with multiple and conflicting interpretations but also the absence of relevant information (Zmud, 1986). Finally, the formation of strategic decisions is more like an evolving and emerging process where the supporting requirements are difficult to forecast (Mintzberg & Waters, 1985). There are also some challenges associated with the nature of the decision maker, such as difficulty in contacting him due to the value of his time, unwillingness to spend time learning, a preference to rely on personal experience and intuition rather than on information technology tools and techniques, and resistance to change. In the case of Egypt, strategic decision making at the Cabinet level provides an opportunity for the design and delivery of information and decision support systems that differ from other conventional settings. The inadequate reliability of the information infrastructure, coupled with the need for crisis response, led to prototyping the design and delivery processes, which was based on an issue-based rather than an organizational decision-based approach to fit the decision-making environment. There are many similarities that could be mapped between the Cabinet and organizational decision making, where the use of issues management is not alien to corporations (King, 1981) and was applied in the planning for various management information systems organizations (Dansker et al., 1987).
Table 1 provides a comparison of the conventional decision-based approach and the issue-based approach to decision support systems as identified by IDSC and which has been successfully implemented during the last decade in response to the need for supporting strategic decision making at the Cabinet level (El Sherif & El Sawy, 1988). The table is useful for information systems researchers and practitioners in determining the advantages and constraints of the issue-based approach to various organizational and decision-making environments. The approach’s life cycle is highly
Table 1. Conventional vs. issue-based decision support systems approach

Focus
Conventional: • decision maker • single decision • decision making • alternatives generation
Issue-Based: • issue • groups of interacting issues • attention focusing • agenda setting

Favored Domains
Conventional: • tactical & operational decisions • one-shot decisions • functional applications • departmental applications
Issue-Based: • strategic decisions • recurring strategic decisions • cross-functional applications • trans-organizational applications

Design & Delivery
Conventional: • promotes customization to individual decision maker • interaction between decisions not incorporated • prototyping design • design approach becomes the system
Issue-Based: • promotes consensus around group issue • integration and consensus drives process • prototyping design & delivery • delivery approach becomes the system

EIS Readiness
Conventional: • no tracking component • emphasizes convergent structuring • major transformation
Issue-Based: • incorporates tracking component • balances divergent exploration and convergent structuring • easy transition to EIS

Emerging Leveraging Technologies
Conventional: • expert systems • artificial intelligence
Issue-Based: • idea processing & associative aids • multimedia connectivity platforms • object oriented programming
iterative and consists of nested and intersecting process loops. It includes: the issue requirements definition loop, involving cycles between structuring the strategic issue and defining its requirements; the support services definition loop, involving cycles between defining information support services and decision support services; the prototyping design and delivery loop, where the design prototype iterations are nested in a delivery process that is also prototyped, and which iterates with the support services definition loop; and the institutionalization loop, which includes both adoption and diffusion in addition to the establishment of an issue-tracking system that iterates with all other loops.
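To make the nesting of these loops concrete, the following schematic sketch expresses them as plain code. It is our own illustration, not IDSC's procedure: the class, the stakeholder feedback object and its methods, and the stopping rule are all assumptions introduced for the example.

```python
# Schematic sketch of the issue-based life cycle: requirements, support
# services, and prototypes are revisited in nested iterations, with an
# issue-tracking log cutting across all loops. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class StrategicIssue:
    name: str
    requirements: list = field(default_factory=list)      # issue requirements definition loop
    support_services: list = field(default_factory=list)  # support services definition loop
    prototypes: list = field(default_factory=list)        # prototyping design and delivery loop
    tracking_log: list = field(default_factory=list)      # issue-tracking system

def develop_support(issue: StrategicIssue, stakeholders, max_cycles: int = 5):
    """Iterate the nested loops until stakeholders accept a delivered prototype."""
    for cycle in range(max_cycles):
        issue.requirements = stakeholders.refine_requirements(issue)   # structure the issue
        issue.support_services = stakeholders.define_services(issue)   # information vs. decision support
        prototype = stakeholders.review_prototype(issue)               # design and delivery both prototyped
        issue.prototypes.append(prototype)
        issue.tracking_log.append((cycle, list(issue.requirements), prototype))
        if stakeholders.accepts(prototype):
            return prototype        # hand over to adoption and diffusion (institutionalization)
    return issue.prototypes[-1]     # institutionalize the best prototype produced so far
```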
PROJECT DESCRIPTION

Overview of the Project
The decision-making process at the Cabinet addresses a variety of socio-economic issues such as the balance of payments deficit, the high illiteracy rate, housing, health, public sector reform, administrative reform, debt management, privatization and unemployment. The decision-making process involves much debate, group discussions and studies development, and is subject to public accountability and media attention (El Sherif & El Sawy, 1988). Figure 1 shows a model of the Cabinet of Egypt decision-making process prior to the establishment of IDSC, including key participants and the information use within the context of the Cabinet decision making. The Cabinet decision-making process is usually seen in terms of its mission, objectives, and outcomes. However, extensive observations revealed that the decision-making process is better viewed by its stakeholders as a process of attention to sets of issues with varying and changing priorities. These issues circulate continuously and are liable to political manoeuvring and situation changes until they are managed over time. The issues are usually complex, ill-structured, interdependent and multi-sectoral, with strategic impacts at the national, regional and international levels. The decision-making environment at the Cabinet level could be characterized as data rich and information poor due to the lack of proper data analysis, the isolation of the information and decision support experts from the decision makers, the use of computer systems as ends rather than as tools supporting decision making, and the focus on technical issues rather than on decision outcomes (Figure 2).
Management and Organizational Concerns
IDSC's mission is to provide information and decision support services to the Cabinet for socio-economic development and to improve the country's managerial and technological infrastructure through enhancing the decision-making process. To realize its mission, IDSC objectives included: developing information and decision support systems for the Cabinet and top policymakers in Egypt, supporting the establishment of decision support systems/centers in different ministries, making more efficient and effective use of the available information resources, and initiating, encouraging and supporting informatics projects to accelerate the managerial and technological development of various ministries, sectors and governorates. IDSC interacts in three main directions for data accessibility and information dissemination. The first direction represents the Cabinet base, where information and decision support systems are developed to support the strategic policy and decision-making processes. The second direction represents the national nodes, where IDSC links the Cabinet with data sources in Egypt within different ministries, public sector organizations, academic institutions and research centers. The third direction represents the international nodes, where IDSC accesses major databases worldwide through state-of-the-art computing, information and communications facilities. IDSC's scope of activities extends to four levels: the Cabinet, sectoral, national and international. At the Cabinet level, it provides information and decision support, crisis management support, data modelling and analysis, multi-sectoral information handling and databases development.
Figure 1. The Cabinet decision-making process before IDSC (diagram: the President, Parliament, ministers and ministerial committees, the Prime Minister, the Minister of Cabinet Affairs and Cabinet Affairs, legislation, policy and directives, the agenda of Cabinet issues covering new issues, old issues, policies, programs, priorities and ad-hoc issues, ministers' information requests, and sources of information such as government agencies and universities)
At the sectoral level, it provides technical and managerial assistance in the establishment and development of decision support centers/systems, advisory and consultancy services, and sectoral information systems development. At the national level, it provides assistance in policy formulation and drafting, legislative reform support and technological infrastructure development. Finally, at the international level, IDSC acts as a window for technology transfer to Egypt, establishes decision support systems models for developing countries and formulates cooperation links and communication channels with international informatics organizations.

Figure 2. Supporting strategic decision making through information and decision support systems (diagram linking the Cabinet level, with its Cabinet committees, Cabinet Affairs, mission statements and policy directives, and economic reform program and objectives, to the IDSC management level, where the strategic issue set is elicited and IS/DSS projects are defined, and to the IDSC builder and implementer level, which carries out the design, delivery and institutionalization of IS/DSS)

The organizational structure was developed to fit IDSC's environment of operations in terms of the required human and technological infrastructure to accommodate such a dynamic environment. It comprises three hierarchical levels: the top management
and executive level and the experts level, consisting of two remote teams: the crisis management team and the priority assessment and quality control team, which are formed from IDSC's experts in addition to other consultants and researchers in different disciplines related to the organization's activities. These teams, on the one hand, support requests coming to IDSC and set priorities for their implementation and, on the other hand, monitor, follow up on and assess the performance of the different departments throughout the development and implementation of various activities. The third level represents IDSC's different departments, interacting in parallel to realize project design and implementation, data collection and verification, issue formulation and decision support, training of staff, and other administrative functions. They include: (a) decision support services, to respond to different information and decision support requests, including the identification of users' needs, issues formulation, the definition of information and decision support requirements and the provision of possible sets of solutions and recommendations to these issues; (b) information resource management, focusing on information systems design, development, installation and maintenance; (c) projects development, to plan, control and manage the various projects implemented; and (d) human resource development, to train the organization's staff as well as the staff of various partner organizations with which IDSC has joint projects.
Technology Concerns
IDSC evolved quickly from three people in 1985 to over 1,000 managerial, technical and administrative personnel in 1996. The staff is composed of highly trained and qualified professionals including over 150 PhDs, 280 master's degrees and 75 diplomas. Moreover, over 100 staff members are currently engaged in post-graduate studies. The staff's backgrounds vary across areas of interest in the social sciences and technological issues, which facilitated IDSC's mission in meeting its different objectives. IDSC's technical infrastructure witnessed a dramatic growth both in quantity and quality. Thus, from one site of 200m2 and 3 XT 8088 personal computers in 1985, the organization's technical resources now occupy 12 sites across the city of Cairo, with accessibility to a heterogeneous, well integrated, multi-platform environment that comprises mainframes, several high-end UNIX workstations, and a multitude of personal computers and Macintoshes, all connected using different connectivity protocols with seamless internetworking between the different sub-networks. Furthermore, the organization has become a leading government entity in introducing the Internet in Egypt, with its ability to provide several services and Internet Protocol addresses from its class B set. The profile of human resources comprises the traditional mix of titles in most informatics organizations such as programmers, database administrators, application developers, systems analysts, decision support systems builders, network specialists, end users, consultants, trainers, technical group leaders and project managers. However, based on the organization's experience in designing and delivering a large number of informatics projects in a rapidly changing environment, such a mix of titles was not useful in fulfilling the requirements of its projects and activities. In that respect, IDSC characterized its human resources profile in terms of its organizational roles, where there was a mixture between technical and non-technical roles in addition to a hybrid representing selected individuals who had acquired a combination of technical and non-technical skills.
The role mix was multi-dimensional and helped capture both the organizational and cultural impacts caused by the introduction, use and diffusion of information technology. Therefore, IDSC started to characterize its human resources profile through more differentiated roles that were defined through an intersection of client, organizational, cultural and information systems role demands, which later on helped manage the career path development of IDSC staff and which was respectively translated into a remarkable improvement in organizational performance (El Sherif & El Sawy, 1990). Throughout the development phases, a set of human resource management strategies was implemented to assess the improvement of the staff's skills and knowledge and to leverage IDSC's capacities to formulate world-class information systems professionals. These strategies included: cross-fertilization through visiting expatriates, consultants and experts, tracking emerging technologies, and networking with leading centers of excellence; focusing on promoting and financially rewarding customer-responsive skill hybridization through the development of two-tiered teams, which led to continuous and smooth communication with various customers; intensively investing in the staff's growth through academic and professional development; fostering expertise infusion and diffusion through in-breeding, by carefully selecting the staff, and incubation, by encouraging the staff to spin off new projects and ventures; and, finally, formulating excitement over the information systems career as part of the organizational culture, which has resulted in creating a positive image of information systems careers across the country at large. The most direct and cumulative impact of the human resource management profile was reflected through three main indicators: the information systems turnover rate was 2% in a locally competitive environment; the education and human resource development program, apart from the staff members pursuing graduate studies, went up from 72 training hours in 1987 to 250 training hours in 1995; and, finally, there was a growth in the number of IDSC staff members who were being transformed from a technical or non-technical focus into a hybridized multi-dimensional focus, which helped improve their potential skills and knowledge to better respond to changing user demands.
CURRENT STATUS OF THE PROJECT
As of November 1985, IDSC started providing information and decision support services for the Cabinet, positioning itself as a facilitator, integrator and expediter of information from various sources to the Cabinet by providing access to different national and international information sources and databases through cutting-edge computer-based facilities (Figure 3). The design and delivery of the information and decision support systems capabilities was developed using a non-conventional process to be able to accommodate the strategic decision-making environment. It revolved around three main concepts: the need to improve the fit among the users in their decision-making context, the form of support provided and the technologies used, and the use of an iterative prototyping strategy for information and decision support systems design and delivery. While the decision making at the Cabinet revolves around issues, they are translated outside the Cabinet as programs and policies and are handed to IDSC in a mission-driven form such as "we need to build a decision support system to help formulate, develop and monitor the industrial sector strategic and tactical plans", or in a directive data-driven form such as "we want to establish an information base about all companies in the industrial sector in Egypt". At the IDSC management level, these requests are translated, through interactions with policymakers, into a set of articulated strategic issues around which information and decision support systems are defined.

Figure 3. The Cabinet decision-making process after IDSC (diagram: the President, Parliament, ministries and ministerial committees, the Prime Minister, the Minister of Cabinet Affairs and Cabinet Affairs, legislation, policy and directives, Cabinet issues, and IDSC drawing on sources of information such as government agencies, universities, sectoral information centers and international databases)
The design and delivery of such systems are carried out at the IDSC builder and implementer level, within the decision support services and information resources management departments, where the process goes through a series of iterative prototyping cycles. IDSC's experience in implementing decision support systems made it clear that managing institutionalization is as important as model building. Thus, it should be an explicit, complementary and integrated process that accompanies systems development and model building (El Sherif, 1990). Managing the design and delivery of information and decision support systems at IDSC follows two phases. First, the implementation phase, which includes: the identification of policy needs and the full mobilization of human and technical resources to be able to achieve effective response and support; the identification of decision areas and information requirements, which deals with the translation of the planned policies into specific issues of concern to the Cabinet; and the formulation of projects with specific goals and dedicated human and technical resources for each potential area of policy and/or decision support. The implementation phase consists of two parts: strategies and tactics (Shultz, Slevin, & Pinto, 1987). The value of education, local development, defining user involvement and obtaining the involvement of top management and decision-making levels is crucial for successful implementation (Keen, Bronsema, & Zuboff, 1982). Finally, the importance of the non-technical component in avoiding implementation failure must be recognized (Ginzberg, 1981). Therefore, IDSC's project teams were selected to provide fast response and focus on results and actions, and consisted of two-tiered teams comprising government civil servants and information technology professionals and technical staff, to deal with administrative as well as technology-oriented staff. These two-tiered teams represented one of the key success factors in bridging the application gap between systems builders and applications users. Second, the institutionalization phase, consisting of: adaptation, diffusion, adoption, monitoring and tracking, value assessment and evaluation of decision support systems. Adaptation deals with the various modifications needed to fit the contextual and cultural characteristics of the environment of application. Thus, IDSC designs and develops Arabised software tools and utilities to support and facilitate the use of various software, which represents the cultural interface. Diffusion deals with spreading the use of decision support systems at various organizational levels. Therefore, the IDSC approach is built on developing the information technology infrastructure across all organizational levels through the diffusion of personal computers, which represents the organizational interface. Adoption deals with the personalized use of information technology tools and techniques by decision makers and their support staff. Therefore, decision support systems are customized and adjusted to the users' needs, which represents the user interface. Monitoring and tracking deals with the parameters of critical issues, assumptions, priorities and information and decision requirements, in addition to the changes in the technology and their impacts on the decision-making process.
Value assessment deals with the extent to which decision support systems have improved strategic decision making, including the tangible and intangible benefits such as improving decision making at the Cabinet level and making better use of the available resources. Evaluation deals with the appraisal, analysis and validation of the value-added benefits of decision support systems for socio-economic development.
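The monitoring-and-tracking activity described above (keeping the critical parameters, assumptions and priorities of each strategic issue under review) can be pictured with a small, purely illustrative data-structure sketch. The field names, the change-flagging rule and the sample figures below are assumptions made for illustration only; the case does not describe IDSC’s actual tooling.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StrategicIssue:
    """Hypothetical record for one Cabinet-level issue tracked by a DSS team."""
    name: str
    priority: int                                   # e.g. 1 = highest Cabinet priority
    assumptions: List[str] = field(default_factory=list)
    parameters: Dict[str, float] = field(default_factory=dict)   # critical indicators

def parameters_needing_review(issue: StrategicIssue,
                              latest: Dict[str, float],
                              tolerance: float = 0.10) -> List[str]:
    """Flag critical parameters whose latest value has drifted more than
    `tolerance` (10% by default) from the baseline the issue was framed around."""
    flagged = []
    for key, baseline in issue.parameters.items():
        current = latest.get(key, baseline)
        if baseline != 0 and abs(current - baseline) / abs(baseline) > tolerance:
            flagged.append(key)
    return flagged

# Illustrative use with invented figures (not data from the case):
debt_issue = StrategicIssue(
    name="External debt rescheduling",
    priority=1,
    assumptions=["creditor terms remain negotiable"],
    parameters={"debt_service_ratio": 0.28, "average_interest_rate": 0.065},
)
print(parameters_needing_review(debt_issue,
                                {"debt_service_ratio": 0.33,
                                 "average_interest_rate": 0.066}))
# -> ['debt_service_ratio']
```

The only point of the sketch is that “monitoring and tracking” implies an explicit baseline against which changes in an issue’s critical parameters can be detected and routed back to the decision makers.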
SUCCESSES AND FAILURE
Developing countries represent a challenging domain for information and decision support systems. The characteristics of the country, the problems faced and the opportunities are among the challenges. Examples of these challenges include: the lack of informatics infrastructure, the still-limited use and availability of information, the lack of technical expertise, and the widening application gap between existing information and decision support systems and newer innovations. Moreover, the experience of IDSC in Egypt with regard to the development and implementation of large information and decision support systems projects helped identify new challenges. These challenges relate to: strategic decision making, decision support systems, the implementation of decision support systems, and their institutionalization process. In strategic decision making, the challenges related to the ill-structured nature of processes extending over long periods of time, the involvement of many stakeholders, the need for conflict resolution, consensus building and crisis management, the efficient and effective use of scarce resources, and the turbulent and dynamic environment in which the decision-making process occurs. In decision support systems, the challenges related to managing the development of multiple information and decision support systems, their institutionalization within their application contexts, the development of appropriate interfaces, the availability of tools and generators relevant to different industries, supporting rather than replacing managerial judgements, fast response, and prototyping the design and delivery phases. In implementation, the challenges related to the lack of user involvement, inadequacy of model evaluation, lack of problem definition, resistance to change, and the difficulty of diffusing new model-based systems. They also included untimely, unresponsive and inadequate information, non-responsiveness to user needs, lack of top management support, lack of vital continuous communication, poor documentation, and language problems (Gass, 1987). In institutionalization, the challenges included: overcoming resistance to change, adapting model-based systems to the context of work and formulating documented procedures, managing the process of change, and information technology diffusion and adoption and their impacts on the individual and the organization. Managing the implementation and institutionalization of decision support systems required facing all of these challenges which, at the strategic decision-making level aimed at realizing socio-economic development objectives, represent one of the most difficult and challenging contexts for implementation and institutionalization, due to the messy, ill-structured, dynamic and turbulent environments involved. IDSC projects are grouped along four dimensions which encompass its activities. These dimensions are decision support systems for strategic issues, building sectoral decision support centers, management and technological development, and information infrastructure development. Following are selected decision support cases that were implemented by IDSC aiming at socio-economic development, structuring the decision-making process and supporting the technological development of the country. In decision support systems for strategic issues, recent economic rebuilding efforts required Egypt to accumulate a foreign debt of about US$33 billion covered in 5,000 loans.
These loans needed to be monitored for debt services, payments, terms re-negotiations,
interest rates, management of payments schedules and transactions. The magnitude of the debt burden made the debt reform program one of the top priority issues at the Cabinet level. Hence, IDSC initiated, developed and implemented the debt management project, aiming at the rationalization of debt utilization, reduction and rescheduling. The project was developed to provide a management tool to support and facilitate the registration, monitoring, control and analysis of Egypt’s debts. Over a period of 18 months a comprehensive national database was developed by IDSC technical staff and was located in the Central Bank of Egypt. The database included the government loans, in addition to a transaction-processing system for debt management that was built with decision support capabilities to test the implications of different debt management scenarios. Throughout the development phase, a number of problems arose which caused delay and frustration. These problems were related to both technical and language issues. The impacts of the project through the implementation of the decision support system included the success of the rescheduling negotiations with 14 countries, which were smoothly managed through the provision of solidly grounded information support made available to the negotiators. Moreover, loans have been viewed ever since as part of a comprehensive, integrated and dynamic portfolio rather than being managed on an isolated case-by-case basis, which benefited the economy and its debt rescheduling status. In building sectoral decision support centers, the increasing cost of, and subsidy on, electricity were continuously adding to the balance-of-payments burden and to the strain on the economy. Thus, IDSC developed the ministry of electricity decision support system to assess the impact of tariff changes on different income groups, to provide statistical data on power and energy generation including the distribution and consumption of electricity, and to aid in pricing and managing the loans of the electricity sector. A joint team was formed from IDSC and the ministry of electricity. The ministry staff collected data from different sources while the IDSC staff focused on issue structuring, systems and human resources development and, more importantly, on managing the process of developing and delivering the decision support system. During the implementation process, a strategic issue emerged that related to drought in the sources of the Nile River. It caused a dramatic drop in the hydro-electric power generated by the Aswan dam and necessitated the provision of millions of U.S. dollars to build three power generating stations. Thus, the ministry of water resources had to take part and contribute to the project, since it became a stakeholder in the decision support system design process. From that stage, the team included a third group from the ministry of water resources to cover the issues that related water resources to electricity. The impacts of the ministry of electricity decision support system came out mainly in the implementation phase, which included the ministry of water resources as a stakeholder in energy-generation issues, the issuance of a new tariff after assessing the possible alternatives generated by the decision support model, and the evaluation of their impacts on different income groups. The case showed that implementation and design processes are inseparable and evolutionary throughout all phases.
In management and technological development, Egypt formulated a comprehensive plan to improve its development and growth in the world market place using the latest emerging information technologies. The use of information technology aimed at supporting the country’s socio-economic development programs and boosting its economy through the development of a high-tech industry as well as through increasing its
software development and exportation in terms of technical expertise and software applications, through the optimum utilization of its resources, mainly expertise, low-cost labor and a flourishing business and investment climate. The IDSC’s Pyramid Technology Valley program is one such effort to improve technological infrastructure. The program was launched through an international conference held in Cairo in 1989. It aims at creating a suitable environment for Egypt’s high-technology industries, with an emphasis on developing the electronics and software industries, attracting new foreign investment, assisting in the creation of new opportunities for small business enterprises, and exporting high-technology products to various countries. The expected impacts of the program showed that a developed information technology industry in Egypt could achieve an annual production value of about US$2.5 billion, of which US$1.5 billion would arise from export revenues. Moreover, the program was expected to create around 15,000 jobs annually for a period of five to seven years, which could have a direct effect on Egypt’s gross national product (GNP). In information infrastructure development, realizing the enormous impact of information on decision making and its vital role in socio-economic development, IDSC adopted a strategy of restructuring public administration at the governorates level through the use of information and decision support tools and techniques. IDSC developed a comprehensive information base at the governorates level through the establishment, in each governorate, of a Governorate Information and Decision Support Center (GIDSC) to introduce and diffuse information technology, and to restructure the role of the governorates in socio-economic development planning. The project, since its inception, has aimed at rationalizing the decision-making process of the governors through the use of state-of-the-art information technology.
EPILOGUE AND LESSONS LEARNED
Based on IDSC’s efforts in the design and delivery of 600 informatics projects in Egypt targeting socio-economic development, the following is a set of lessons learned that could be generalized to the implementation of decision support systems in similar environments and that represent potential areas for future research:
• Structuring of issues is an integral part of the design and implementation of decision support systems dealing with socio-economic development.
• Providing decision support systems for development planning is often coupled with both urgency and criticality of the issue. Therefore, decision support systems design should allow for crisis management in order to respond to crisis requests, which entails the preparation of crisis teams with managerial and technical support capable of operating in such situations.
• Providing decision support systems requires much time and effort in building and integrating databases from multiple data sources and sectors.
• Developing a decision support system for one socio-economic issue might affect other issues, which should be taken into consideration during the design phase to save time and effort and avoid duplication of activities.
• An effective decision support system depends on the availability and accessibility of timely, relevant and accurate information.
• Successful implementation of decision support systems is a necessary but not sufficient condition for successful institutionalization; both processes should be well integrated.
• Prototyping decision support systems should be reflected during the design, development and delivery phases.
• Providing decision support systems requires textual and document-based information source capabilities, which should be reflected in the organizational design.
• Recurring decisions related to certain issues need to be monitored through a management system for tracking the changes in the critical parameters of these issues.
• Successful design and delivery of decision support systems is based upon top management support during implementation and organizational support during institutionalization.
• Evaluation and assessment of decision support systems is a vital process that should accompany all phases of implementation and institutionalization, in order to realize on-line response to changes occurring in the environment.
• Continuous multi-level training of human resources leads to the adaptation, diffusion and adoption of decision support systems within various organizations.
In summary, the experience of this new form of information-based organization in Egypt led to the improvement of the decision-making process at the Cabinet level and supported the socio-economic development programs. The new opportunities arising from the use of decision support systems in such an environment have increasingly convinced the top policy-making level of the advantages of using various information technology tools and techniques for development purposes.
REFERENCES
Berman, B. J. (1992). The state, computers and African development: The information non-revolution. In Lewis, & Samoff (Eds.), Microcomputers in African development. Westview Press, Inc.
Dansker, B., Hansen, J. S., Loftin, R., & Veldweisch, M. (1987). Issues management in the information planning process. Management Information Systems Quarterly, 11(2).
El-Sherif, H. (1990). Building crisis management strategic support systems for the Egyptian Cabinet. Maastricht School of Management Research Papers, 2(2).
El-Sherif, H. (1990). Managing institutionalization of strategic decision support for the Egyptian Cabinet. Interfaces, 20(1).
El-Sherif, H., & El-Sawy, O. (1988). Issue-based decision support systems for the Cabinet of Egypt. Management Information Systems Quarterly, 12(4).
El-Sawy, O. (1985). Personal information systems for strategic scanning in turbulent environments: Can the CEO go on-line? Management Information Systems Quarterly, 9(1).
Gass, S. (1987). Operations research-supporting decisions around the world. Operational research ’87. Amsterdam: Elsevier Science Publishers.
Ginzberg, M. J. (1981). A prescriptive model of system implementation. Systems, Objectives, Solutions, 1(1).
Goodman, S. E. (1991). Computing in a less developed country. Communications of the ACM, 34(12).
Gray, P. (1987). Using group decision support systems. Decision Support Systems, 3(3).
Gray, P. (1988). Using technology for strategic group decision making (Working Paper). Claremont Graduate School.
Hirschheim, R. A., & Klein, H. K. (1985). Fundamental issues of decision support systems: A consequentialist perspective. Decision Support Systems, 1(1).
Housel, T. J., El-Sawy, O., & Donovan, P. F. (1986). Information systems for crisis management: lessons from Southern California Edison. Management Information Systems Quarterly, 10(4).
Keen, P. G. W. (1975). Computer-based decision aids: The evaluation problem. Sloan Management Review, 16(3).
Keen, P. G. W. (1980). Information systems and organizational change (Working Paper). International Center for Information Technology.
Keen, P. G. W., Bronsema, G. S., & Zuboff, S. (1982). Implementing common systems: One organization’s experience. Systems, Objectives, Solutions, 2(3).
Keen, P. G. W., & Scott Morton, M. S. (1978). Decision support systems: An organizational perspective. Reading: Addison-Wesley.
King, W. R. (1981). Strategic issues management. Strategic planning and management handbook. New York: Van Nostrand Reinhold Co.
Lind, P. (1991). Computerization in developing countries: Models and reality. London: Routledge.
Mason, R. O., & Mitroff, I. I. (1981). Challenging strategic planning assumptions. New York: John Wiley & Sons.
Moussa, A., & Schware, R. (1992). Informatics in Africa: Lessons from World Bank experience. World Development, 20(12).
Nidumolu, S. R., & Goodman, S. E. (1993). Computing in India: An Asian elephant learning to dance. Communications of the ACM, 36(4).
Nidumolu, S. R., Goodman, S. E., Vogel, D. R., & Danowitz, A. K. (1996). Information technology for local administration support: The Governorates Project in Egypt. Management Information Systems Quarterly, 20(2).
Scott Morton, M. S. (1971). Management decision systems: Computer-based support for decision making (Working Paper). Boston: Harvard University, Graduate School of Business Administration.
Schultz, R. L., Slevin, D. P., & Pinto, J. K. (1987). Strategy and tactics in a process model of project implementation. Interfaces, 17(3).
Sprague, R. H., Jr. (1980). A framework for the development of decision support systems. Management Information Systems Quarterly, 4(4).
Sprague, R. H., Jr., & Watson, H. J. (Eds.). (1986). Decision support systems. Englewood Cliffs: Prentice-Hall.
United Nations Educational, Scientific and Cultural Organization. (1989). World communications report. Paris: United Nations Press.
Zmud, R. W. (1986). Supporting senior executives through decision support technologies: A review and directions for future research. In E. R. McLean, & H. G. Sol (Eds.), Decision support systems: A decade in perspective. Amsterdam: Elsevier Science Publishers.
Appendix. Questions for Discussion
1. Examine the organizational implications of the use of decision support systems at the strategic national level and its effects on the decision makers and the decision-making process.
2. Identify what changes need to be made at the organizational and decision-making levels to optimally benefit from the output and deliverables of decision support systems.
3. Compare the conventional and the issue-based approaches to the design and delivery of decision support systems and their impacts on the decision-making process in the corporate versus public administration domain.
4. Examine the value-added benefits of a decision support system within the context of socio-economic development, based on the decision support cases described in this chapter.
5. State the future research areas and directions for the implementation of decision support systems in public administration in light of the research findings and lessons learned from the Egyptian experience.
Sherif Kamel is an assistant professor in the Management Department, School of Business, Economics and Mass Communication, the American University in Cairo. He holds a BA in business administration and an MBA from the American University in Cairo and a PhD from the London School of Economics and Political Science. His research focuses on management information systems, decision support systems, organizational restructuring, crisis management and information technology transfer into developing countries. He has published a number of papers and has conducted several information systems management consulting engagements and workshops related to information technology and management theories and applications and their implications in developing countries.
This case was previously published in J. Liebowitz & M. Khosrow-Pour (Eds.), Cases on Information Technology Management in Modern Organizations, pp. 168-182, © 1997.
Chapter XVI
Emergency:
Implementing an Ambulance Despatch System
Darren Dalcher, Middlesex University, UK
EXECUTIVE SUMMARY
This case study charts the story of the problematic implementation of a computerised despatch system for the Metropolitan Ambulance Service (MAS) in Melbourne, Australia. The system itself is now operational; however, the legal and political implications are still subject to the deliberations of inquiry boards, and police investigations. The value of the case study is in highlighting some of the pitfalls and implications of failing to consider the financial pressures and resource constraints that define the (medical) despatch environment. MAS attempted to procure a state-of-the-art emergency despatch and communication system for one of the most complex ambulance systems in the world. However a desire to cut operating costs and improve efficiency, a stormy relationship with stakeholders and users, and the tendency to ignore the trade unions severely hampered the development effort. The imposition of a fixed deadline for direct switchover in such an uncertain environment became a major constraint for the whole project. The case further highlights the politics of procurement, the danger of developing a system without adequate consultation, the risk from outsourcing and privatising vital public services, and the inability of sophisticated technology to overcome human and organisational issues.
ORGANISATION BACKGROUND
The first plans for an ambulance service in Melbourne, the capital of the South East Australian state of Victoria, were made in 1883. By 1887 sufficient funds had been
accrued to purchase six Ashford litters, which were placed at the police station. In 1889, they were replaced by the first horse-drawn ambulance. This was the humble beginning of the Metropolitan Ambulance Service (MAS). The current-day MAS is responsible for providing emergency medical transport, pre-hospital care, and non-emergency stretcher and clinic car transport services for around 3.5 million people throughout the Melbourne metropolitan and Mornington Peninsula regions, an area of almost 10,000 square kilometres. The Service is the largest ambulance service within the state, with 62 emergency response locations, 763 staff (excluding non-emergency contractors), 56 emergency ambulance teams, and 218 vehicles. It is also responsible for providing air ambulance services throughout the state. The Service is an integral component of the local health care system and, consequently, a significant infrastructure is in place to enable a rapid emergency response and delivery of pre-hospital care to the community. Note that many, but by no means all residents subscribe to the annual Ambulance Membership Scheme, which entitles them to utilise the full services offered by the MAS for free. In principle, non-subscribers are expected to pay for such services. The objectives of the Service as outlined in the Ambulance Services Act of 1986 are as follows:
• “to respond rapidly to requests for help in a medical emergency;
• to provide specialised medical skills to maintain life and to reduce injuries in emergency situations and while moving people requiring those skills;
• to provide specialised transport facilities to move people requiring emergency medical treatment;
• to provide services for which specialised medical or transport skills are necessary; and
• to foster public education in first aid.”
SETTING THE SCENE
In common with most ambulance services around the globe, emergency operations represent the primary function of MAS. In an average day, MAS ambulances attend more than 600 medical emergencies and are also involved in transporting around 400 patients. Not surprisingly, there is a public expectation that the Service will provide a timely, appropriate, and professional response to all calls for emergency assistance. To this end, the Service employs a range of resources to ensure that the best possible response is provided to each case based upon an assessment of the urgency and clinical condition of the patient involved in each emergency event. Emergency ambulances are despatched according to the information received from callers. Each call is assessed and given a priority code. Code 1 is a time-critical emergency, so that the despatched ambulance proceeds with lights and sirens. Code 2 is an acute, non-critical case and the ambulance proceeds without lights and sirens. Code 3 is a non-acute or routine case. The Metropolitan Ambulance Service had undergone a turbulent period since the late 1980s, with sustained criticism over poor ambulance response times, highlighted by a number of events receiving extensive press coverage. Increased competition from the private sector further eroded trust in the MAS. The Service’s financial performance had been poor, with deficits recorded every year throughout most of the 1990s. Figure 1 illustrates the operating results of the Service since 1991-92 (note that the surplus in 1996-97 can be attributed mainly to additional government funding of $8 million to the Service).
Figure 1. Operating results, 1991-92 to 1996-97 ($ million): deficits in each year from 1991-92 to 1995-96, with a surplus of $0.6 million in 1996-97. Source: Auditor General’s report (1997).
The intense public scrutiny required local government to consistently increase its annual contributions, but did not lead to any direct improvements (see the Auditor General’s report). The financial position of the Service remained a concern for the Government. The MAS had an annual budget of over $A60 million. However, the relatively precarious liquidity position, due to low cash flows, had contributed to difficulties in meeting creditor commitments. The Auditor General considered that the Service was at a disadvantage in maximising its revenue opportunities primarily due to:
• The failure of the Department of Human Services to allow the Service to recoup the full cost of ambulance transport provided to non-subscribers to its Ambulance Membership Scheme and to persons insured by private health insurance schemes;
• Declining revenue from the Ambulance Membership Scheme due to the inability of the Service to compete on equal terms with private sector health insurers providing ambulance coverage; and
• The provision of free ambulance transport to pensioners and health care card holders without full recoup from government of the cost to the Service of providing such services.
Until relatively recent times, inadequate attention has been given to the clinical expertise and ongoing competencies of ambulance officers. Although the Service had a Medical Standards Committee, this Committee had not been in a position to effectively monitor the provision of clinical services in the field. Industrial relations between MAS management and the ambulance unions have always been poor. The last 15 years had been punctuated by strikes, mistrust, and tension. The press coverage of ambulance inadequacies in a number of high-profile cases, and the willingness of individual crew members and union officials to provide the press with commentary and quotes further exacerbated an already strained relationship. The story begins in 1992 amid the climate described above. In terms of technology, the MAS was utilising a variety of systems which were not integrated. However, 1992 became a monumental year for the MAS due to political changes: the election win by the more aggressive Conservative Government in 1992 represented a shift to a campaign against “wasteful financial policies” of their predecessors. A 1992 review of the Metropolitan Ambulance Service proposed significant improvements, the main goal of which was to generate cost savings in response to the concerns of the Government over the level of contributions to the service. The review suggested structural and management re-focusing alongside improved technological systems to strengthen the despatch of ambulances. The Government replaced the MAS committee of management with a CEO with no experience in emergency services whose brief was to cut costs and break the union. (This is despite a number of coronial inquiries resulting from the high-profile cases that recommended increased staffing.) The new CEO, John Farmer, intended to change the way the MAS operated through the utilisation of computer technology in two areas: management of emergency calls and management of finances. In 1993, the contract for the emergency system, totalling $A32 million was awarded to X-consultants by the Victoria State Government. Awarding the contract was a prolonged and traumatic period. During the negotiation process, the consultant acting for the Government was also employed by X-consultants. Two consulting groups were hired in the lead-up to the signing of the contract with X-consultants. Following the submission of bids, major changes to the proposal evaluation criteria, which favoured the bid put forward by X-consultants, were added at the behest of senior ambulance service personnel. By March 1994, the opposition started calling for an inquiry into the tendering process, the appointment of senior personnel, and the high cost of consultant fees. The media responded with significant coverage of delays in ambulance attendance and potentially related deaths.
CASE DESCRIPTION
The proposed system was meant to represent the state of the art in emergency despatch and communication. The planned components were:
• A despatch system for the automatic despatch of the nearest or the most appropriate vehicle (a simplified sketch of this idea follows the list);
• A satellite-based vehicle location system, supported by a computerised mapping system;
• Mobile data terminals to replace voice radio communication.
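The first component listed above depends directly on the satellite vehicle-location feed. The following is a minimal sketch of the underlying idea only, assuming straight-line (great-circle) distance and a simple availability flag; the production system relied on road-network mapping and MAS despatch protocols that are not reproduced here, and all names and coordinates below are invented for illustration.

```python
import math
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Vehicle:
    call_sign: str
    lat: float
    lon: float
    available: bool

def great_circle_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Haversine distance in kilometres between two coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearest_available(vehicles: Iterable[Vehicle],
                      incident_lat: float, incident_lon: float) -> Optional[Vehicle]:
    """Pick the closest available vehicle, or None if no unit is free."""
    candidates = [v for v in vehicles if v.available]
    if not candidates:
        return None
    return min(candidates,
               key=lambda v: great_circle_km(v.lat, v.lon, incident_lat, incident_lon))

# Illustrative only: two made-up units and one incident around Melbourne.
fleet = [Vehicle("MAS-101", -37.81, 144.96, available=True),
         Vehicle("MAS-102", -37.90, 145.10, available=False)]
print(nearest_available(fleet, -37.83, 144.98).call_sign)  # -> MAS-101
```

Even this toy version makes plain why a bogged-down or inaccurate vehicle-location feed, of the kind reported after switchover, corrupts every downstream despatch decision that depends on it.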
Moreover, the system appears to be the first recorded attempt to computerise and completely privatise an emergency despatch system. Implementation was scheduled to proceed in a Big Bang manner, with a switchover to the full system scheduled for 24 August 1995. The fixed deadline imposed by the clients, without negotiation, became a major constraint on the project and proved to be a difficult hurdle for X-consultants. What needs to be borne in mind is that the MAS represents one of the most complex ambulance systems in the world, covering 3.5 million people over 10,000 square kilometres. By May 1995, it became clear that X-consultants was unable to meet contract deadlines, while the media uncovered evidence of frequent system shutdowns. An independent consultant’s report earlier in 1995 identified more than 50 faults labelled “critical” or “high priority”. While this was happening, the contract for providing all emergency services (fire, police, ambulance) state-wide was awarded to X-consultants. MAS switchover took place as originally scheduled, with a number of faults identified in the consultant’s report still outstanding. Key problems included:
• The volume of information handled by the automatic vehicle location system was causing it to bog down and report incorrect locations.
• Allocation of a vehicle to a job could take up to two minutes, during which the console was “locked up”.
• Responses taking longer than 16 minutes were required to generate exception reports, but the system’s measurements of its own response times were not accurate.
• The report generator, responsible for producing exception reports, contained critical faults.
• While no statistical tests had been applied, it was observed that even on simple tasks, the automatic route recommendation facility often recommended routes heading in the opposite direction from an accident scene.
Within a few days of switchover, MAS officers began complaining to X-consultants and a heated row developed between the two organisations. MAS internal documents recorded the fact that the system often “teetered on the edge of disaster”. MAS claimed that X-consultants had promised to deliver things that they never had a chance of delivering. In a memo to MAS, the X-consultants’ CEO apologized for several issues, including service standards, persistent problems, and despatch errors. The failure to meet the (already significantly relaxed) performance criteria blossomed into a fully fledged legal dispute, which eventually saw MAS taking legal advice on the possibility of terminating the contract. Later investigation revealed that politics played a major part in the complex relationship between the state Government (who had cut costs by using X-consultants), X-consultants, the system’s suppliers (who agreed to an unrealistic timeframe in which to introduce the system and who were about to provide similar systems for the fire and police services as part of a deal), the ambulance union (who would have liked their
members to do the despatching, rather than non-paramedic X-consultants’ civilians), and MAS management (caught between the other three). Within two weeks of switchover, MAS was complaining to the Health Minister. Government advisers suggested the ministers should claim that performance was improving. Public pressure led the Victorian Auditor General to approve an extensive performance audit of MAS. The Labour opposition released an X-consultants log in October that showed despatchers repeatedly complaining that ambulances couldn’t be picked up by the system, and that it was locking up. In December, the opposition called for an inquiry into the use of public money in the X-consultants deal. The Government was eventually forced to withhold contract payments until contracted requirements were met by X-consultants. In February 1996 the head of the MAS told a ministerial steering committee that he “had evidence that the services provided to the MAS…are progressively getting worse” and “the service was sub-standard and worse than what was previously provided at the old Communications Centre”. The contract with X-consultants cost the Ambulance Service $38.7 million from when it commenced in 1994 to June 1996, well above the original estimate for the full seven-year period. An incident in August 1996 attracted particular attention. An emergency call to an address on South Street reported that a man was unconscious and thought to be suffering from a drug overdose. The despatch system failed to recognise the street name, and “corrected” the address from South Street to Sadie Street. The operator failed to notice the error, so that the wrong address was passed to the ambulance crew. The ambulance eventually reached the right address, but too late, and the patient died. It is significant that the system was still known to be confused between the two streets 10 months later. MAS ambulance performance standards for serious cases stipulate that an ambulance should be despatched within 150 seconds in 80% of cases. The actual figure for November 1996 was 78.7%. The introduction of a new question-and-answer routine (AMPDS) for call takers in December 1996 resulted in this figure plummeting to 34.2%. Early the following year the figure crept up to 36.7%. A commission of inquiry later discovered that calls were often re-started to boost performance time statistics. The non-emergency despatch system, also designed by X-consultants, to deal with non-urgent patient transfer was also showing troubling signs. The standard dictates that under no circumstances can the system be allowed to take more than four hours to despatch a non-emergency ambulance. A 1996 ambulance memo documented an alarming pattern of lengthy delays, putting the service on the verge of collapse, with many cases exceeding nine hours. A Royal commission subsequently received testimonies that non-emergency calls occasionally got priority over emergency calls, on the basis of monthly performance levels against standards. In January 1997, the head of the MAS complained to the Department of Justice that the MAS were asked to lower their required performance standards rather than expect X-consultants to improve the performance of the computerised system. MAS, he said, “was of the view that X-consultants do not really understand or have come to grips with operating in an emergency services organisation environment, and are unlikely to do so unless there is a fundamental shift in approach”. 
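The despatch benchmark quoted above (an ambulance despatched within 150 seconds in 80% of serious cases) is a simple within-threshold percentage over despatch records, which is how figures such as 78.7% or 34.2% would be derived. The sketch below is illustrative only: the record format and the sample numbers are assumptions, not data from the case. It also shows, in stylised form, why adding artificially fast entries to the log (the case mentions re-started calls here, and later describes test calls placed to be answered immediately) mechanically inflates such a statistic.

```python
def pct_within(despatch_seconds, threshold=150.0):
    """Percentage of recorded despatch times at or under the threshold."""
    times = list(despatch_seconds)
    if not times:
        return 0.0
    return 100.0 * sum(t <= threshold for t in times) / len(times)

genuine = [95, 120, 160, 140, 200, 130, 155, 110, 125, 145]   # invented sample
print(round(pct_within(genuine), 1))                           # -> 70.0

padded = genuine + [1, 1, 1, 1, 1]   # five near-instant "test" entries
print(round(pct_within(padded), 1))                            # -> 80.0
```

The measure is only as honest as the underlying log: shortening or multiplying the fastest entries raises the headline figure without a single real ambulance arriving sooner.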
A month later, he complained that it was “no longer acceptable for MAS to rely on X-consultants to eventually ‘get it right’”. In July, he was complaining to the head of the Bureau of Emergency Services
Telecommunications about six months of extreme pressure from X-consultants over the reduction of standards. When the use of imported “trouble-shooters” failed to alleviate the problem, MAS and X-consultants embarked on a protracted series of negotiations, culminating in agreement to lower the benchmark performance standards in return for reduced payment. In March 1997 the Public Bodies Review Committee and the Department of Finance advised against the Government’s decision to enter into a contract with X-consultants. The Auditor General’s special emergency report on the Metropolitan Ambulance Service contractual and outsourcing practices was published in April 1997; that was several months earlier than originally scheduled because of the “discovery of extremely serious matters identified during the course of a performance audit”. It enumerated serious deficiencies and highly dubious practices, including huge cost blow-outs, entangled relationships between managers in the service and private companies, a biased tender, and inadequate supervision of contractual decisions by MAS management (who were said to have been “derelict” in their responsibility). Major flaws identified included:
• Reliance on an unknown consultant without a formal contract and with no attempt to establish past experience or association;
• Acceptance of the needs analysis submitted by the consultants in charge of the tendering process, despite serious reservations about the quality of the analysis;
• Serious functional and technical flaws and shortfalls in the original tender document, which was obviously written around the X-consultants system;
• The system specification document, which was hurriedly developed, was known to contain major shortcomings, yet was fully utilised by the service;
• The absence of documentary evidence to substantiate how the 34 registrations of interest were short-listed to four potential suppliers;
• The inability of the service to produce the evaluation criteria used in selecting X-consultants and the informal approach used to eliminate the remaining bidders;
• Failure to adequately satisfy a key condition set by the Government for the new call and despatch system to be able to integrate into the state-wide emergency system, irrespective of the eventual supplier;
• Failure to achieve the projected savings of $20 million;
• Retrospective approvals freely granted by management for payment;
• Easy acceptance by MAS management that all was well.
The audit also uncovered a technical memorandum that had been sent to the MAS CEO by the Service’s manager of information systems, prior to the awarding of the contract, expressing the depth of his concerns and reservations about the proposed system and the selected contractors. It called for:
• Withdrawal of the specification, as it did not cover MAS requirements, and a delay of four weeks to enable a team of specialist staff with expertise in communications, information systems, technical services, ambulance procedures, and despatch systems to review and redraft it;
• Review and redrafting of the schedule, and especially of project milestones and phases, to obtain a realistic schedule with “practical implementable stages”;
• Review of the short-listed suppliers;
• Appointment of a project manager, ideally with skills in computer-aided despatch and related technical areas.
Minutes from subsequent meetings reveal that these points were discussed and were shared by all key personnel with the exception of the chairman. These concerns were brushed aside and the tender process was allowed to continue unchanged. In the Australian of 2 April 1997, Rachel Hawes reported that: The Victorian Government has ordered a police investigation into the State’s ambulance service, including the controversial X-consultants communications system, after the State Auditor General found irregularities in contractual and tendering arrangements. The State Minister for Health, Rob Knowles, revealed last night that a draft report by the Auditor-General, Ches Baragwanath, into the Metropolitan Ambulance Service had raised matters of ‘serious concern’ to the Government. Mr. Knowles said the report had been sent last week to chief police commissioner, Neil Comrie, who would oversee an investigation into Mr. Baragwanath’s findings. Following the publication of the Auditor General’s report, the systems support manager resigned. The former MAS boss was heavily criticised for failing to take a more proactive stance. Ambulance officials who evaluated the X-consultants’ bid called for a full-scale Royal Commission, claiming the tender was “rigged” and the process a sham; later evidence appeared to confirm both claims. An investigation shortly afterwards by The Sunday Age newspaper unearthed further details, including allegations of corruption and interference in the tendering process:
• • • •
The head of the consulting group who assisted in developing and overseeing the tender ended up as CEO of X-consultants, the winners. (It was also subsequently revealed that the former MAS CEO had connections with the consultancy firm.) Members of the MAS tender evaluation team each ranked all the bids as part of their final evaluation. The complete report ranked Fujitsu first, and X-consultants last, with the lowest score. Two days later, the team was redirected by a consultant representing MAS management to change the evaluation criteria and rewrite their final report. As the X-consultants bid was particularly weak on the system’s architecture, reliability, back-up, and security aspects, team members were asked to remove architecture and reliability from the list, and change back-up, security ratings, and several other items that did not favour X-consultants. Members of the evaluation team were directed not to inspect the operating sites of X-consultants’ competitors. When serious problems became apparent during testing, team members were instructed to abandon testing. To avoid further embarrassing crashes, the team was only allowed to ask questions, but was denied access to the equipment. Warnings by the head of the MAS ambulance communications department about the safety risks in the X-consultants system were ignored. Additional concern over lack of medical knowledge among the X-consultants staff (including operators) was identified as a major safety risk.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Implementing an Ambulance Emergency Despatch System
•
255
“The timeframes were deliberately short so the due diligence couldn’t be done properly. They got a system that will never work, that is flawed in its basic design”.
While the opposition suggested that gross incompetence by the Government had cost lives, the ambulance employees union branded their three-year campaign to declare the system incompetent or corrupt, a success. Internal Health Department documents released under the Freedom of Information laws (in March 1998) revealed that an assessment by an independent accounting firm conducted in April 1996 uncovered additional financial concerns. The consulting group overseeing the tender was hired on a $A45,000 contract, but was ultimately paid $A1.4 million, without providing appropriate documentation of costs. In addition to the head of this consulting group (who joined X-consultants as the new CEO), the report also identified a connection between the former MAS chief executive officer and the consultancy firm. Further revelations continued to emerge. Following the signing of the contract, senior MAS managers started travelling to the U.S. to get “better acquainted” with X-consultants’ operations (this was meant to happen as part of the bid evaluation stage, but was pointless subsequent to the signing of contracts). Middle management training for the new system was provided in the form of threeday, line-in courses, described by participants as “mini Camp Wacos”. Participants who expressed opinions critical of new management policies were subjected to emotional and personal abuse, which left them exhausted and traumatized. As explained earlier, the new CEO, John Farmer, was pursuing a two-pronged policy designed to improve despatching, and to tighten management and reduce costs in other areas. As part of the latter, the MAS Regional Training Unit was closed. Ambulance officers were coerced into improving their knowledge and skills in their own time. Government ministers denied any knowledge of problems or the possibility of impropriety. The Minister for Health claimed that she “acted under the knowledge available to her”. The opposition managed to uncover minutes of meetings between Government ministers and ambulance officials. The revelations refuelled a hot political debate about the ambulance issue and the degree of ministerial involvement in the wake of earlier denials. The Minister for Health, who initially asserted that no documents existed, conceded that they were “inadvertently lost” before reaching her, and hence her lack of knowledge about the affair. In response to the Auditor General’s findings, the Government initiated a police and a Queens Counsel inquiry into the contracts. The Government’s solicitor managed to block the release of sensitive documents on the grounds that they “would be reasonably likely to prejudice a Victorian Police investigation of a possible breach of law”. In May 1998, a police report compiled by the Major Fraud Squad was submitted to the Office of Public Prosecutions. The report identified possible criminal offences (including collusive tendering, conspiracy to defraud, obtaining financial advantage by deception, false accounting, misfeasance in public office, and perverting the course of justice) in the MAS procurement and contracting process. The homes of the former chief of the MAS, his deputy, and the former Government consultant were raided by Police following the disappearance of documents containing warnings about the X-consultants deal. The volume of missing documents had seriously hampered police investigations. (The Minister for Police subsequently resigned his position.) The Director of Public ProsecuCopyright © 2006, Idea Group Inc. 
Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
256 Dalcher
tions subsequently decided that there was insufficient remaining evidence to bring criminal charges. An independent auditor was appointed to investigate claims that the poor response times were boosted by phantom emergency calls to show response improvement in the achievement of performance targets. X-consultants staff were systematically ordered to place emergency calls that were answered immediately, thereby reducing average delay times and improving performance. X-consultants alleged that the purpose of the calls was simply to “test the system”. This is despite the fact that the [Automatic Call Distributor] system would only be effectively tested by a large number of calls rather than a single identifiable call every half hour. Independent reports confirmed that X-consultants “artificially manipulated” response times through fake calls, and the commission of inquiry subsequently received evidence of a systematic and regular effort directed from “above” leading to hundreds of calls on operational lines. (The Royal Commission reported that measurement of performance figures and response times was unreliable and ignored a portion of the waiting time. The registered improvement resulted in the release of $A371,000 withheld initially for failure to meet targets.) The opposition’s health spokesperson, John Thwaites, complained in Parliament that the Government’s management of the ambulance service had been characterised by bungling, corruption, and cover-ups. He added that “the Government had got into bed with shysters and crooks and been played for a sucker, and it had been robbed blind by contractors and consultants”. In August 1998, giving evidence before a Victorian Civil and Administrative Appeals Tribunal, a former member of the X-consultants bidding team said that the company’s bid was bogus and relied on political party connections. He elaborated that the despatch system had no chance of working because of a major software problem; that it was not compatible with the automatic location and communications software, thus forcing ambulances to use the existing two-channel communication system; that the team ignored these obvious faults; and that the team knew in advance that they would win the bid, and were supplied with “excellent intelligence” about competing bids. “The ambulance service was never informed of the problem and was also misled about the cost of the contract”. The witness had lost his job after expressing his concerns to his superior, who was planning to become the future minister for police and emergency services. In December 1998, the tribunal ordered the state to release all documentation pertaining to the MAS despatch system, including a secret report about X-consultants’ role. The ruling was based on the Freedom of Information act and stipulated that the release would not prejudice future litigation against companies involved in the ambulance contract. The state Government lodged an appeal against the tribunal order in an attempt to keep secret the X-consultants’ report. Public airing of the report has thus been further delayed, pending the result of the appeal. State lawyers subsequently argued that the key document should be exempt from release on the grounds that they “would be reasonably likely to prejudice a Victoria Police investigation of a possible breach of law”. The Attorney General blocked the police from releasing controversial documents. 
Police officers were also directed to consult with Government agencies before releasing any documents or commenting further. Some of the agencies were implicated in the documents. A two-year police investigation found a “prima facie case of misconduct in public office”. The documents suggest that a state minister misled Parliament and that GovernCopyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Implementing an Ambulance Emergency Despatch System
257
ment officials hampered police investigations. In addition they allege that the state Government illegally covered up improprieties by suppressing key documents on the eve of the 1996 election. The Government has also tried to argue that the documents are now irrelevant, as the ministers concerned no longer held public office. In an additional bid to block the release of information, the state financed private legal action by ministerial advisers and officials implicated in the documents. A Supreme Court judge accused state lawyers of “committing one of the worst abuses of the court process”. The MAS computerisation has resulted in a less efficient ambulance service plagued with technical hitches, loss of experienced staff, and low morale. The performance of the emergency despatch system has caused much public concern, as faults in the system have been implicated in a series of misadventures in which people have died while waiting for delayed or misdirected ambulances. To date, there have been at least 10 coroner’s court inquests into deaths involving serious delays. In a recent case, the deputy coroner, Mr. Iain West, explained that “it was impossible to say whether (the patient) would have survived even if an ambulance arrived promptly…but delays in sending an ambulance were disturbing and unsatisfactory”. He proposed that the MAS should conduct a detailed examination of call taking, communication, and despatch systems. Rod Morris, the Victorian secretary of the Ambulance Employees Association (AEA), is on record saying: “The worst aspect of the X-consultants affair is not the cost, but the fact that the Government lost control of essential services, and as a result people died unnecessarily”. While in opposition, the Australian Labour Party had indicated that the MAS contract is likely to be “legitimately cancelled” without penalty, should they win the next election. Following recent developments they also promised to hold a Royal Commission, as the “very bad smell” around the affair was growing each day. Following a win by the Labour party, a wide-ranging Royal Commission started an investigation into the affair. The commission appointed the former head of the Major Fraud Squad as its chief investigator. The Royal Commission is focusing on improprieties and corruption, including the political links that secured the contract and the illegal release of details of competing bids. The investigation into all aspects of the case had to take longer than anticipated, resulting in four extensions, as the commission collected thousands of documents, with the resulting costs rocketing from $A5 million to $A17.3 million despite constant, and often controversial, reduction in the key terms of reference. In a new twist, it appears likely that criminal charges (false accounting under the crimes act against one employee, over the test calls) may follow the inquiry. However, the original charges against the procurement process have been dropped. It thus appears that it is the cover-up rather than the original sin that is likely to get punished. In a further development, X-consultants announced that they would not seek a renewal to the MAS contract when it expired at the end of 2002, as it no longer represented their core business. The Victorian Government was considering re-nationalising the emergency despatch service! (As of September 2002, the state had taken over emergency despatch operations, promising a better and more accountable service.) 
The Royal Commission into the Victorian Ambulance Service found that the public has experienced significant and unnecessary delays in having ambulance emergency calls answered by X-consultants. The inquiry heard evidence that 000 calls had been either unanswered by X-consultants or left to ring for minutes. Director of X-consultants operations told the inquiry that callers to the ambulance service had waited online for Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
258 Dalcher
minutes and the company had incorrectly recorded the time it took to answer calls for ambulances. While 90% of emergency calls are required by contract to be answered within five seconds, some have taken minutes. However, many of the longer calls have been recorded as taking only five seconds. Moreover, Royal Commissioner Lex Lasry found the company acted illegally and improperly in its handling of the calls. At least one senior X-consultants employee admitted to giving false evidence earlier on in an attempt to defend their record. Several public servants were identified as having engaged in improper behaviour, including the head of the MAS who was immediately relieved from his duties, pending an investigation by the Director of Public Prosecutions. Guy Mack from the Human Services Department will be disciplined, after a finding that he acted improperly by using his position to prevent the release of documents. The Royal Commission has also made significant observations about the cost of the rush to outsource key public services to the private sector. Regardless of the irregularities, and the lack of clearly defined lines of accountability and responsibility, the commissioner found that “a purely commercial contractual model for the outsourcing of emergency telecommunications is inappropriate, as it is not able to sufficiently safeguard the public interest”. The commission concluded that risks from outsourcing included loss of control and accountability, a lack of clarity about who should investigate serious improprieties, and uncertainty about how services would be affected by cost savings, undercutting, and financial misjudgments. “It may be that the best that can be expected of a private contractor to Government are those standards of business practice that an ethical firm is likely to apply in all of its contractual dealings. Even in a well-organised structure, the objectives of the private service provider are not defined by the public interest”. The Royal Commission was satisfied that X-consultants functions would be taken over by a state-run organisation when its contract expired in 2002. Yet they did not believe the problems would be resolved this way. Further recommendations assert that watchdog provisions of the legislation covering freedom of information, whistleblowers, the Auditor General, and the ombudsman should be extended to private companies providing services to Government. The Government has agreed to adopt the commission’s recommendations. Meanwhile, on the strength of the Melbourne system, the same company was awarded the Police Computer System in New Zealand, triggering safety concerns from the unions there.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANISATION

Processes Currently Involved in the Delivery of Emergency Ambulance Services
As a result of the Royal Commission, the entire system is likely to be re-nationalised in the very near future. The process that now involves private despatchers will return
to the state, requiring the involvement of and consultation with the relevant unions. The following paragraphs describe the current despatch process. A call for emergency ambulance assistance activates a series of events within the Service which often culminates in the handover of a patient to hospital emergency staff for the provision of in-hospital care. Calltaking, despatching, and communications functions are undertaken by X-consultants on behalf of the Service. The involvement of X-consultants forms only one part of the process involved in the delivery of emergency ambulance services to the community. Telstra is responsible for directing the emergency calls through to X-consultants. In addition, X-consultants is reliant upon external suppliers for the paging system, automatic vehicle location system, call line identification system, mapping system, and the Advanced Medical Priority Despatch System. Figure 2 depicts the various processes involved in the delivery of emergency ambulance services. As indicated in Figure 2, the emergency response process is activated by a person dialling Telstra (000) or the X-consultants calltakers direct on 11440. The call to Telstra, based on which emergency response service is required (police, fire, ambulance, or a combination), is directed to X-consultants communications centre. Once the location and priority of emergency events are determined, they are passed on to despatchers who identify and despatch the nearest or the most appropriate vehicles in accordance with the Service’s despatch protocols. Ambulance crews are alerted to a case using a communications facility known as a selcall. Details of the event to be attended are provided by radio and pagers (the latter serving as a back-up communication device in the despatch process). Upon ambulance arrival, the patient is accessed, evaluated in terms of their clinical condition, provided with clinical services, where necessary, and transported, if required, to the nearest appropriate hospital. Figure 3 outlines the activities performed by despatchers. Ambulance Employees Australia has claimed that a new system was not actually necessary at the time of its introduction. In December 1994 the Minister for Health received a report on ambulance services that showed a despatch time of 135 seconds was being achieved in the ambulance service and there was expectation that the despatch time in 90% of cases would be reduced to 60 seconds over the following year. According to the AEA it is currently common for despatch times to be in the order of 10 minutes. It is important to remember that the contract was initiated as a development contract, which was subsequently upgraded to ask the consultants to deal with despatch details thereby handling the complete process. The change of contract meant that this became the world’s first fully privatised emergency call system. Critics maintain that the privatisation was a combination of a political need to privatise and a desire to reduce the power of the ambulance union, following numerous outbursts that accompanied high-visibility cases. Whatever the reason, the process was conducted with little consultation with the unions and with blatant disregard for the Government’s outsourcing guidelines. The irregularities have been addressed by the Royal Commission and are likely to be dealt with by the criminal system. 
The repercussions in terms of the need to deal with the unions may become an issue again as the system is re-nationalised, and input from operators and members becomes essential to the efficient operation of the system.
Figure 2. Processes involved in the delivery of emergency ambulance services

• Patient is in need of ambulance assistance: ‘000’ or ‘11440’ is called (request for service).
• Call is answered at Telstra.
• Call received by calltaker at Intergraph, who records details of the event.
• Call passed on to dispatcher, who selects the nearest response vehicle.
• Unit advised of event to be responded to; unit acknowledges dispatch event and responds.
• Unit arrives at scene, accesses and begins to treat patient; patient is loaded.
• Arrival at hospital; patient unloaded/transferred, documentation completed.
• Vehicle/team cleared and available for next event; vehicle/team at station and available for next event; patient leaves hospital.

The original chart also marks the intervals measured across these steps: activation time, calltaking, dispatch, reflex time, and response time.

Source: Chart developed by Victorian Auditor General’s Office
Figure 3. Activities of despatchers (from the calltaking function)

• Event received from calltaker and added to the pending event list.
• Dispatcher selects the highest priority event on the list.
• Identify the closest available unit, broadening the search window if necessary; if none can be found, select “No nearby unit”.
• Dispatch the closest available unit: selcall the unit; if there is no response in 60 seconds, selcall, page and voicecall on the communication channel; if there is no response within 90 seconds, advise the Duty Team Manager.
• If the closest available unit does not respond, identify the next closest available unit and repeat.
• Once a unit responds, give the unit the event details, obtain acknowledgment of the event, and change the unit’s status to acknowledged; the event then passes to event management.

Source: Chart developed by Victorian Auditor General’s Office from the Service’s Standard Operating Procedures
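For readers who find it easier to follow the despatch rules as executable logic, the sketch below restates the escalation steps from Figure 3 in Python: selcall the closest available unit, repeat the contact by selcall, page and voicecall if there is no response within 60 seconds, advise the Duty Team Manager at 90 seconds, and move on to the next closest unit if the first never answers. It is a simplified illustration only; the unit names, contact functions, and the cut-off for moving to the next unit are hypothetical and do not come from the Service's or X-consultants' software.

```python
import time

# Hypothetical stand-ins for the real selcall, pager, and voice radio channels;
# here they only print what would have been attempted.
def selcall(unit): print(f"Selcall to {unit}")
def page(unit): print(f"Page to {unit}")
def voicecall(unit): print(f"Voice call to {unit} on the communication channel")
def advise_duty_team_manager(event): print(f"Duty Team Manager advised about {event}")

def dispatch(event, units_closest_first, has_responded, give_up_after=120.0):
    """Try to dispatch `event` to the closest available unit, escalating
    contact attempts at 60 and 90 seconds as described in Figure 3.
    `has_responded(unit)` reports whether the unit has acknowledged."""
    for unit in units_closest_first:
        selcall(unit)
        start = time.monotonic()
        done_60 = done_90 = False
        while time.monotonic() - start < give_up_after:   # illustrative cut-off
            if has_responded(unit):
                print(f"{unit} acknowledged {event}; status set to acknowledged,"
                      " event passed to event management")
                return unit
            elapsed = time.monotonic() - start
            if elapsed >= 60 and not done_60:              # no response in 60 seconds
                selcall(unit); page(unit); voicecall(unit)
                done_60 = True
            if elapsed >= 90 and not done_90:              # no response within 90 seconds
                advise_duty_team_manager(event)
                done_90 = True
            time.sleep(0.5)
        # No response from this unit: try the next closest available unit.
    print(f"No nearby unit available for {event}")
    return None

if __name__ == "__main__":
    # Example run: the first (closest) unit acknowledges immediately.
    dispatch("event 4711", ["unit A", "unit B"], has_responded=lambda u: u == "unit A")
```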
Many Other Major Challenges Still Remain
The computer system was designed to transfer information quickly to mobile computer terminals on board emergency vehicles, thus eliminating the need for radio directions, which can be misinterpreted. This equipment has never been delivered. The adoption of this technology would assist in reducing the time taken to despatch ambulances, provide increased knowledge to despatchers, and provide better management information. The reliability of the system has also been called into question, especially under extreme conditions. When several emergencies occur simultaneously, they can result in severe bottlenecks. For instance, during the 1998-99 bushfires, the system became so overloaded that operators were forced to resort to whiteboards to keep track of events. The difficulties with the MAS system and compatibility problems had delayed its integration into the state-wide emergency system, also developed by X-consultants, which was meant to encompass all emergency services. The larger calltaking and despatch system is not operating to its full potential. X-consultants have been arguing that performance standards should be assessed against revised standards. Some of the emergency agencies have been forced to incur additional costs, such as the continued employment of communications staff, previously made redundant, arising from the delays experienced in commissioning the system. Some are involved in re-negotiations with X-consultants for the recovery of such costs. The Auditor General has suggested that the appointment of X-consultants, which submitted the lowest bid, to the multiagency contract relied on a highly deficient evaluation process (e.g., the cost-benefit analysis of outsourcing did not occur until six months of the selection process had already elapsed, and short-listed candidates were not informed of the contents of the MAS contract with X-consultants and the difficulties likely to be expected in integrating that system into the multi-agency system) and suffered from poor quality documentation. In an attempt to improve ambulance response times, a new scheme was launched in May 1998. Trained fire-fighters would deal with ambulance emergencies, as the fire brigade’s response times are faster. If the fire-fighters arrive first, they give the victim emergency treatment until ambulance officers appear. What began as a pilot program in south-eastern Melbourne has now been expanded to the entire metropolitan region of Melbourne and is being monitored by researchers in Monash University’s Department of Epidemiology and Preventive Medicine. The First Responder Programme is the first in Australia and entails a big change in the job description, the training, and the equipment of fire-fighters. For example, the fire-fighters have now been trained to carry out first aid on patients with suspected cardiac arrests. They have also been equipped with and trained to use oxygen therapy and automatic defibrillators, which deliver a shock to the heart. This change also shows the potential impact of decisions related to the adoption of technology on work practices and stakeholders that are not connected to the technology itself; when certain practices or projects fail, alternatives may impact on other agencies and persons outside the original boundaries of the system.
ACKNOWLEDGMENTS
The author wishes to thank the numerous contributors and correspondents who volunteered information in various formats. Please note that while the events are real, the names of people and companies involved have been changed to protect their identity.
REFERENCES
Documents of the Royal Commission. (2002). Metropolitan Ambulance Service Royal Commission. Hawes, R. (1997, April 2). The Australian. The Age. (n.d.). The source of dozens of articles about the MAS case. Retrieved from http://theage.com.au/ The Risks Digest. (n.d.). Retrieved from http://catless.ncl.ac.uk/Risks/ Victorian Auditor General’s Report. (1997). Metropolitan Ambulance Service: Fulfilling a vital community need.
Darren Dalcher is a professor of software project management at Middlesex University (UK) where he leads the Software Forensics Centre, a specialised unit that focuses on systems failures, software pathology and project failures. He gained his PhD in Software Engineering from King’s College, University of London. In 1992, he founded and has continued as Chairman of the Forensics Working Group of the IEEE Technical Committee on the Engineering of Computer-Based Systems, an international group of academic and industrial participants formed to share information and develop expertise in failure and recovery. Professor Dalcher is active in a number of international committees and steering groups. He is heavily involved in organising international conferences, and has delivered numerous keynote addresses and tutorials. He is the editor-in-chief of Software Process Improvement and Practice.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 440-456, © 2004.
Chapter XVII
Seaboard Stock Exchange’s Emerging E-Commerce Initiative
Linda V. Knight, DePaul University, USA
Theresa A. Steinbach, DePaul University, USA
Diane M. Graf, Northern Illinois University
EXECUTIVE SUMMARY
While Seaboard Stock Exchange remains one of the top stock exchanges in the United States, its relative position in the world is slipping. E-commerce is threatening the organization by accelerating the rate of disintermediation and the entrance of new competitors into Seaboard’s market. Against this backdrop, Seaboard’s e-commerce initiative has emerged. Tension between control and experimentation surfaces as the association attempts to incorporate emerging technology while maintaining its traditional way of doing business. The organization struggles to merge new technology with existing IT strategy while internal entrepreneurs strive to shape a Web development methodology and define an appropriate role for standards and controls in an emerging technology environment.
BACKGROUND
Seaboard Stock Exchange is recognized worldwide as among the leading exchanges of its type in the United States in the year 2000. The exchange was founded in the mid-19th century by a group of eight businessmen meeting informally in a garden outside of
Rock Island, New Hampshire, to buy and sell local stocks. For decades, Seaboard remained, in the words of one manager, “a sleepy little regional exchange”, until the mid-1970s when it made a strategic decision to become a national exchange. Since that time, Seaboard has expanded to approximately 500 traders handling a wide variety of stock and bond products. The exchange continues its heritage of face-to-face trading in an open arena to this day. Traders meet on an open floor and use hand signals to convey the quantity of a particular stock or bond that they would like to buy or sell. Bids to buy and offers to sell are made by open outcry. When the highest bid meets the lowest offer, the two traders each write down the trade on cards that “runners” carry off the floor to be keyed into a computer system. Although Seaboard remains one of the top stock exchanges in the United States, its relative position in the world is slipping. As Figure 1 shows, trading volume, which had been relatively steady from the earliest days, took a major swing upward in the 1970s when Seaboard decided to “go national”. This dramatic growth continued until 1998. However, volume has been dropping for the last two years. While the worldwide total stock trading market has been growing, Seaboard’s relative market share has declined, as has the price of a seat on the Seaboard Stock Exchange. Declines in Seaboard’s trading volume mean that its member/traders, who charge customers for each trade they execute, are earning less. This situation has triggered a drop in the price of a seat on the exchange. Declines in the price of a Seaboard Stock Exchange seat mean that when members decide to retire or cash out, they reap far fewer dollars than previously (Arvedlund, 2000). The price of a Seaboard seat has dropped from a record high of $905,000 in the late 1980s to $322,000 in late 2000. Seaboard’s membership is concerned about these declines, and has recently moved to trim staff in order to cut costs. These cost-cutting measures can be expected to increase cash flow for the organization when it comes to paying its bills to support the trading floor; however, they will not put dollars in the pockets of its owners, simply because of the way the exchange is structured. As Figure 2 shows, the exchange’s 500 members are active owners of the not-for-profit association. The members own their own seats, have trading rights on the exchange, and manage the organization through the system of committees shown in Figure 3. Members also elect the Board of Directors and President. Further, since they are present every trading day, they play an active role in the day-to-day operations of the organization. The exchange employs a staff of about 600 employees, one-third of whom are in the Information Technology Department. The primary responsibility of all staff is to support the trading floor and keep it running smoothly. According to Vance Fernandez, Vice President of Information Technology, “Working at Seaboard means having five hundred bosses. Staff at all levels, including the President, will drop everything if a member calls with a request”. For example, when a member decided that he wanted a statistical analysis of recent energy stock prices, Fernandez immediately reassigned two top people from his most critical project to work on the member’s special request. Members earn their incomes primarily from commissions on trades or occasionally from making wise trades for their private accounts.
Since Seaboard is a not-for-profit association of its members, any exchange income beyond that needed to meet costs is banked for future expenses, rather than being paid out as dividends or profits to the member/owners. Profits that had been saved in prior years are now being depleted as
Figure 1. Seaboard Stock Exchange trading volume, 1925-2000 (annual volume plotted on a scale of 0 to 350,000,000; note that years prior to 1995 are in five-year increments)
Seaboard’s annual income drops below that needed to maintain its fixed costs. Seaboard’s current economic dilemma, and the increasing domestic and foreign competition, are causing its members to reconsider their organization’s structure. The members have organized a committee to study the possibility of becoming a publicly held for-profit corporation (Wall Street and Technology, 1999).
SETTING THE STAGE

Strategic Use of Technology at Seaboard
Seaboard is among the top floor-based exchanges in the United States in terms of technology. Seaboard’s trading floor was considered state of the art when it was totally rebuilt in 1995 at a cost of approximately $36 million. The floor is supported by an extensive network of fault-recovery mainframe computers, an ATM network capable of carrying voice and data, and a massive telephone communication system. This advanced technology supports the traditional open outcry trading system by bringing orders to the floor, and facilitating reporting of trades. Since 1996, Seaboard has offered electronic trading (Morgan & Perkins, 2000). However, traders, who are also the exchange’s owners, have not supported extending electronic trading beyond evening hours and nontraditional products that do not compete with the traditional trading floor (Osterland, 1998). Industry analysts who support Internet-based electronic trading note that it provides “unprecedented connectivity” at minimal cost (Wall Street and Technology, 2000). It is estimated within the industry that an electronic trade costs less than half as much as a traditional trade to execute. Proponents of traditional trading floors, on the other hand, praise their ability to provide unbiased price discovery. Compared to electronic trading, traditional trading floors, it is argued, provide buyers and sellers with the ability to have their trader or broker
Figure 2. Seaboard Stock Exchange organizational structure

• Chairman and Board of Directors; approximately five hundred owner/members rule the exchange through committees.
• President
  • V.P., I.T., Vance Fernandez
    • Application Development Director
    • Operations Director, Roger Fields
      • Manager of Web Development, Karen Greene
  • V.P. Marketing
    • Marketing Manager, Todd Lawson
  • V.P., Corporate Communications, Paula Reese
  • Other Vice Presidents and other departments: Legal, Financials, Trading Floor Operations, Physical Plant
Figure 3. Seaboard Stock Exchange governing committees

• Executive Committee
• Finance & Audit Committee
• Floor Procedures Committee
• Human Resources Committee
• Marketing Committee
• Membership & Admission Committee
• Nominating & Elections Committee
• Product Development Committee
• Strategic Planning Committee
• Technology & Automation Committee
use his or her judgment and market insight to make the trade at the best possible price. Not all industry experts agree (Tsang, 1999; Mosser & Codding, 2000). Countries without a stock trading tradition, particularly in Europe and Asia, have opened exchanges that are totally electronic from the first trading day. Thus a small developing country like Singapore with a brand new stock exchange sometimes has a more modern wireless technology infrastructure than Seaboard or its traditional United States counterparts. Such foreign exchanges are not direct competitors because they are trading different stocks. However, all exchanges compete for the same investor dollars. Additionally, some competing United States exchanges have begun moving into other parts of the world through partnerships (Wagley, 2000). A further threat to existing exchanges comes from electronic communications networks (ECNs) (Latimore, 1999). ECNs, using Internet technologies, connect buyers directly to sellers without any intermediaries. In December 1998, the Securities and Exchange Commission (SEC) passed Regulation ATS, which grants Alternative Trading Systems, such as NexTrade™, the ability to become full-fledged securities exchanges (McAndrews & Stefanadis, 2000). By cutting prices, ECNs captured 30% of all NASDAQ trading volume in just three years, and are now targeting other stock market segments (Carroll, Lux, & Schack, 2000). Seaboard’s members generally view electronic trading, ECNs and new high-tech foreign exchanges as encumbered by an inability to leverage the trading insights that Seaboard members employ to get their customers the best prices.
Information Technology Department at Seaboard
The Information Technology Department at Seaboard is divided into two departments, Application Development and Operations. The Operations area includes the data center, network support, and microcomputer support teams. The Application Development area, responsible for all new system development and maintenance programming, is divided into four teams: floor support, order processing, regulatory systems and financial systems. Application Development makes regular use of consultants from top-name consulting firms, particularly for the early stages of new system development, and when employing new technologies for the first time. The area has defined a strict methodology for how new development is to be approached, with ample upfront planning and an emphasis on approvals at strategic points in the development process. This methodology, the Planned System Development Process, or PSDP, is depicted in Figure 4. Programmers joke about the PSDP, but grudgingly agree that it does a good job of insuring that new projects do not incur unexpected member criticism late in the development process.
CASE DESCRIPTION

Early Web Development Efforts
Seaboard’s first experience with Web development came in 1993, when a small group of microcomputer installation technicians became interested in the Internet. As one member of the group explained, “We didn’t have any experience building systems of any kind, but we had extra time available to experiment”. Most of the group’s first site
Figure 4. Planned System Development Process (PSDP)

• System Planning: Work Order Request Form; Dept. Manager Signature; Dept. VP Signature; Computer Requirements Committee Authorization; Preliminary Investigation; Feasibility Report
• System Analysis: Data Requirements; Business Processes; Interface Requirements; Communications Requirements; Systems Requirement Report; Dept. VP Signature; Dept. Manager Signature
• System Design: Database Schema; Application Schema; Interface Schema; System Architecture; System Design Specification; Dept. VP Signature; Dept. Manager Signature
• System Implementation: Documentation Review; Coding Review; Quality Assurance Review; Test Plan & Testing; File Conversion Plan & Conversion; Production Changeover; Dept. VP Signature; Dept. Manager Signature
• System Support: Work Order Request; Dept. Manager Signature; Dept. VP Signature; Computer Requirements Committee Authorization
consisted of profiles and personal Web pages of the PC specialists themselves, coupled with a loose collection of pages about the mission, services and policies of the microcomputer support group. During this same time period, one of the exchange’s members, Sam Butler, used free software to construct a public Bulletin Board System. Butler’s BBS made historical financial data on the most heavily traded stocks available
Figure 5. Business press coverage of e-commerce: number of ABI/Inform abstracts in Business Week, Forbes, and Fortune mentioning Internet, WWW, e-commerce or e-business, by year, 1990-2000 (near zero in the early 1990s, rising into the hundreds per publication by the late 1990s)
to the general public 24/7. Looking back on that time from the year 2000, Butler said, “I was once a small trader myself, and I appreciate the position of the little guy. If we can help him out with some free data, why not? Of course, I thought making this data available would be good for Seaboard’s image, too”. Roger Fields, the IT Operations Director, recalled later that this time period “gave people a chance to try out the technology”. Interest in the World Wide Web was fueled by the development of the Mosaic™ browser in 1993. As Figure 5 shows, by 1994 the business press as a whole was discovering the Internet. Seaboard’s president picked up on the trend, issuing a directive to create a Web presence for the organization by the end of 1994. The exchange president stated privately at the time “Although there is no member support for using the Internet for electronic trading, the technology still might be important someday in this business”. He then authorized the funding necessary for two microcomputer specialists to officially spend 5-10% of their time on this project. The two developers, anxious to get a site working as soon as possible, concentrated their efforts upon “begging” various departments for content, and then translating that content into HTML as quickly as possible. In the year 2000, when Fields looked back on the site, he described it as “a basic form of brochureware, with clickable links for the organization’s mission, history and goals”. This site, shown in Figure 6, combined with a somewhat expanded version of the original intranet and BBS, represented the organization’s second-generation e-commerce effort. By mid-1995, two microcomputer technicians were officially given the title of Web Developer, and allowed to spend 50% of their time on Web development, with the other 50% going to their traditional microcomputer support duties. Fields brought in an outside design team to improve the “look and feel” of the site, which up until that time had no graphic artists associated with it. He recalled that “These three consultants were not very technical — they used a WYSIWYG for editing. Their strength was on the creative end”. Both the two Web developers and the external consultants now began to build relationships with departments elsewhere in the organization, particularly marketing and corporate communications. As Fields explains it now, “When the Web developers first sought
Figure 6. Second generation Seaboard consumer Web site
relationships with these departments, they were just looking for data for the Web site. As the relationships developed, they realized that these other departments could help us establish a vision for the site. At that point, the Web developers consciously began to seek input on the purpose and goals of the site. That’s also when the first efforts were made to profile potential users of the site by discussing with other departments who was likely to visit the site and why”.
Expansion of the E-Commerce Initiative
In early 1996, Sam Butler became interested in expanding Internet use at Seaboard. Butler had owned a seat at the exchange for 27 years. He was also an entrepreneur, having started several highly successful small financial service firms during that same timeframe. In each case, once Butler had built up his business, he “sold the company and moved on to new challenges”. Butler describes himself as “someone who likes to read about technology, and experiment with new ideas”. In the 1980s, when Seaboard’s statistical analysis was partially hand-generated and partially produced by the mainframe in the form of monumental stacks of computer printouts, Butler began doing his own number-crunching through PC spreadsheet software. When Mosaic and GUI interfaces became prominent in 1995, Butler decided to replace the BBS that he had created in 1993, providing Seaboard’s members with a simpler GUI interface to his online data. Using an $88,000 budget that he had been given by the exchange president, Butler hired one technician, Chester Bromwell, to work by his side. Butler then spent the remainder of his seed money on some small hardware purchases. For most of his hardware needs, Butler refurbished
old equipment being discarded by other exchange departments. Most of his software, including the Apache Web server, was shareware or freeware that he downloaded from the Internet. Since there was no available office space, Butler and Bromwell set up shop in a hallway on the path to the restrooms. Butler correctly reasoned that this location would give him contact with most IT employees at least once a day. Butler’s project generated widespread interest among IT personnel. It was not unusual to find programmers and networking experts standing or even sitting on the floor in the hall, chatting with Butler and Bromwell about Internet technologies and the future of the Internet at Seaboard. Fields recalls appreciating the hallway’s synergistic generation of ideas and encouraging participation by his staff, “as long as their regular duties were attended to”. Fields’ boss, Vance Fernandez, has a similar recollection. “It was a time of great excitement in the industry, and Sam brought that excitement to our area. There still was no real member interest in the Internet or its potential for electronic trading, outside of Sam of course, so I couldn’t officially support such development, but I was willing to do what I could unofficially. As a whole, the IT line staff were as curious as Sam about the new technology, and freely gave their time and expertise to Sam’s project. There was great excitement and positive energy in Sam’s hallway”. The one exception was Butler’s disagreement with the Director of Application Development concerning the Planned System Development Process. In Butler’s words, “The PSDP was and still is totally unworkable. No wonder those people in IT can never seem to get anything implemented. I didn’t have time for that nonsense, so frankly I just didn’t do it. We didn’t need plans or approvals. We needed action”. In mid-1996, Butler and Bromwell implemented StockScene.com. The site was designed to provide members with time-delayed quotes, individualized trading reports and a chat room. In addition, it included content from ten outside industry sources. Revenue was generated from advertising, coupled with online sales of current exchange information to the members and the public. Butler did not see security as a major issue since many of his users were members. He selected Netscape’s Credit Card System™ for payment because it was easy to use and readily available. As Internet technology advanced, so did Butler’s ambition for StockScene. In late 1996, he built “SS (Seaboard Stock Exchange) Internet Radio” with Internet-only access. As StockScene.com grew in importance, other departments at Seaboard began to take notice. By late 1996, the Legal Department began setting policies governing the sale of real-time exchange data, and it also set up an approval process for advertisements on any exchange site. In Butler’s words, “Legal was killing the advertising on the site. By the time they approved an ad, the client had lost interest in placing it on our site”. About the same time, the Marketing Department became concerned about the wisdom of providing free data to members when similar data was being sold to third parties who then repackaged and marketed it, sometimes to the same members. In Butler’s view, “I had built a great system for member use, and now these staff departments were nitpicking it. It became increasingly difficult to get anything through the approvals. When I couldn’t innovate anymore, I lost interest and turned my site over to the IT Department”. 
At the start of 1997, Fields’ Web developers took over the maintenance and further development of Butler’s project. While Butler had been developing his Web site throughout 1996, Fields’ Web development initiative had also moved forward. In early 1996, Karen Greene transferred into the microcomputer support group from a Seaboard user department where she had
automated some traditionally manual functions using Macintosh computers. While Greene’s job was to maintain approximately 250-300 Macintoshes for Seaboard, she found that “things pretty much ran themselves and I had a lot of free time”. Greene used the free time to teach herself about Internet tools and technologies. “The other microcomputer technician and I were just experimenting without any official approval, although Roger (Fields) did know what we were doing”. Greene describes the site developed at that time as “primarily cat and baby pictures, but we were learning the technology”. Greene’s first Web page was a list of all the departments at Seaboard, which she produced using WordPerfect. Although she was aware of Seaboard’s earlier Web development efforts, Greene sees those primarily as either consumer-oriented or member-oriented, and views her first page as “the beginning of Seaboard’s intranet, the first Seaboard site really designed for regular employee use”. As Greene’s internal site grew, it began to attract the attention of others elsewhere in the organization. Paula Reese, Vice President of Corporate Communications, and Greene began communicating about the potential of Internet technologies. Greene told Reese that she did not think that either the intranet that Greene had developed or the earlier consumer-oriented site looked sufficiently professional. Reese readily agreed with Greene’s evaluation. Reese’s vision was to create a brand image for Seaboard on the Internet. According to Reese, “I wanted to reach out to those who don’t understand the stock market. In my mind, visual appeal and ease of navigation were as important to less informed users as informational content”. Together, Reese and Greene began to fine-tune the public site, SeaboardStocks.com, for both content and presentation. Reese was not content with improving the look of the consumer site. In her words, “My vision was to create a one-stop-shop for market information. I wanted Seaboard to be the first place consumers thought of when they wanted market information. So, I started to build relationships with search engines, journals, and newsletters, with the goal of enticing them to link to Seaboard on their sites”. This approach caught the attention of Seaboard’s other senior-level management. Largely because of Reese’s efforts, in late 1996 Greene was promoted to Manager of Web Development, a newly created full-time position. She reported directly to Fields, and was expected to develop an annual strategic plan for the site, prepare a budget and supervise the other Web developer. At first, Greene’s activities still centered on begging other departments for content, but over the next several years, this situation changed. As Figure 7 shows, Internet usage was steadily rising, and Seaboard employees also increased use of their intranet during this time. By 1998, Greene obtained permission to hire an experienced graphic artist and Web designer as the second member of her team.
Increasing Formalization
In 1998, interested employees formed a Web Initiative Committee (WIC). The committee, composed entirely of volunteer staff members, was considered unofficial and, since it was not composed of members, was not part of the governing structure of the exchange. The size of the committee varied from 10 to 15 staff members, depending on who opted to attend meetings. The group’s focus was to discuss new items of interest in the industry and how Web technology might be employed in those new areas, as well as how new Internet technologies might impact the industry and the exchange’s Web presence. The group began to formulate strategy and set policies as
Figure 7. Internet host growth: Internet Domain Survey host count, January 1991 to January 2001 (old domain survey, adjusted count, and new domain survey; source: Internet Software Consortium, http://www.isc.org/)
Figure 8. Web system development process

• Pre-Production: Work Order Request Form; Site Assessment; Rough Schedule; Request for Client Design Specs; Creative Strategy Formulation; Present Creative
• Creative Production: Technical Guidelines: Creative Production; Flow Chart; Creative Sign-Off; Requestor Approves Creative
• Technical/Final Steps: Technical Guidelines: Hosting; Coding Report; Quality Assurance Report; Legal Review & Approval; Publish Site
• Post-Production: Maintenance Analysis Report; Final Site Review
seemed appropriate based upon the results of their discussions. As Greene recalls, “The only area we really steered clear of was electronic trading, which is really up to the members here. Other than that, we examined all aspects of the business and what the Internet could do for us”. Slowly, Sam Butler’s “shoot from the hip” development methodology was modified. Over the period from 1996 to 1998, a new development methodology emerged for use within Greene’s area. Small projects were still done on request, with no real documentation or approval process, but larger ones began to follow a new procedure, as outlined in Figure 8. Fields later saw the new procedure as a version of the PSDP, “modified for Internet time”. By 1997, three sites, SSManifest.com (the staff intranet), StockScene.com (a members-only site) and SeaboardStocks.com (the public site) were being maintained separately, each with its own “look and feel”. The three sites generated a combined total
of more than one million hits per day, and according to Greene, “Content management was becoming unwieldy. Much of our subject matter was identical on all three sites, yet changes were cumbersome because they had to be done separately for each of the sites”. Throughout the second half of 1998, Reese and Fields worked, with the support of Fernandez, to gain a line item in the 1999 budget for content management and workflow software for Greene’s group (Knorr, 2000). Ownership of the project to convert Seaboard’s Web sites to content management software ultimately fell to the marketing area, through a series of events described next.
An E-Commerce Business Plan
In 1999, two different approaches were taken to Seaboard’s e-commerce business plan, one through the IT Department, and one originating with the Board of Directors and led by the Marketing Department. In the IT area, Seaboard’s network infrastructure was being outsourced to a leading provider of Internet infrastructures. As part of the agreement for network services, this firm bundled in e-commerce consulting services. Fields was highly instrumental in negotiating this contract, and anxious to work closely with the well-regarded consultants. A subgroup of the Web Initiative Committee, led by Fields and Greene, began an in-depth study of the organization’s e-commerce opportunities. They began examining all aspects of Seaboard’s business, not just the competitive marketplace, but also the internal value chain, for opportunities to use Internet technologies for long-range competitive advantage (Rayport & Sviokla, 1995). As possible projects were identified, they were rated in terms of potential value and ease of installation, and prioritized. As of mid-2000, this study was continuing. Meanwhile, by mid-1999, senior-level management was becoming increasingly interested in the impact of technology. As Vance Fernandez describes it, “The Board’s primary technology concern was, as it had been since the 1960s, the issue of support for the trading floor”. Nonetheless, the Board of Directors was aware of the impact of electronic trading on other exchanges. They issued an official statement that, “Although competitors’ electronic trading has reduced trading volume for some other exchanges in different market segments, we do not believe that electronic trading by other exchanges will negatively affect Seaboard’s volume in the foreseeable future”. The board also addressed Seaboard’s growing Internet presence by forming a committee of members to investigate the wisdom of making data available free of charge on the Internet. As one board member explained, “How can we expect to continue to sell this data to third-party repackagers when we are giving it away free to their customers?” Several members, who had not shown any interest in the Internet previously, questioned the Board of Directors in mid-1999 about future plans. Referring to articles in the popular press (see Figure 5), they wanted to know what Seaboard’s Internet business plan was. In response to this query, the Board determined that an “e-commerce business plan” for the exchange should be developed. Both budget and staff resources were provided for this effort, which was tied to Reese and Fields’ initiative to gain approval for content management software. At this point, Seaboard’s Marketing Department became very interested in ownership of the project. In the words of Todd Lawson, a manager in the Marketing Department, “We in marketing saw the potential of the site, not for just simple corporate communications, but for marketing. Besides, we knew the federal regulations that the site needed to meet and corporate communications didn’t.
We were the right ones for the job”. Lawson was placed in charge of the emerging project to align the three sites and bring them together under a single content management software umbrella. Lawson took the initiative in forming a second subgroup of the Web Initiative Committee, the Web Marketing Group (WMG). This group comprised mainly of marketing, marketing research and customer service staff. In Lawson’s words, “We decided that if we were going to move forward, then we needed a smaller group made up only of those of us who dealt regularly with customers”. The WMG, led by Lawson, developed a project plan for the content management installation and conversion efforts. This plan was constructed without reference to any development methodology, marking the end of the Web System Development Plan’s use at Seaboard. The IT Department, including specifically Greene’s Web development area, was not involved during these initial planning stages, and corporate communications played just a minor role, since neither area was included in the WMG. However, because Greene and Reese were both involved in the WIC, they did receive periodic progress reports. In January 2000, the process of aligning the consumer, member and employee Seaboard sites began with a market research study. The Web Marketing Group held focus groups with members of the exchange, and determined that the content of the three Seaboard sites was good, but that the Web pages were arranged poorly, non-intuitive and difficult to navigate. Based on this, the WMG determined that the next step was to interview design firms. Three design firms were chosen by the WMG to present their concepts. The competing firms’ presentation sites were judged by the WMG on creativity, navigation, organization and cost. According to Lawson, “Of the three we brought in, one was too heavy on the creative side, another had no process in place. The firm we chose had the best combination of price, experience and structure”. One of the outcomes of the Marketing Department initiative was a determination that there should be cohesion between the three sites and they should all be accessible through a single gateway. This was consistent with the earlier conclusions of Reese and Fields, when they gained approval for content management software. The Web structure shown in Figure 9 was designed by the chosen Internet design firm. This firm provided the artwork and the coding for the top two layers, before delivering the site to Greene for lower level development and implementation. In July 2000, contact with the design firm ceased, and Greene’s IT area took over the production side of the project, while marketing’s Lawson retained ownership of the initiative. Greene warned Lawson that the changes in the site’s “look and feel” were dramatic, and it seemed to her that some users would feel disoriented and would complain. Lawson however was concerned primarily with long-range improvements to the site. When the new combined Web site was installed, there was considerable negative reaction from members, internal staff users and external users. Most of the complaints centered on change, along with the fact that old bookmarks no longer worked in the new navigation scheme, and the search function was not fully available yet. Over time, however, users adjusted to the new navigation and the search feature was fully implemented. Now maintenance and enhancements are determined by the WMG. 
Lawson and Greene keep track of outstanding maintenance projects through an electronic spreadsheet and communicate as needed through e-mail.
Figure 9. Newly designed Web structure, SeaboardStocks.com

• Welcome & Disclaimer → Home Page
• News: Headlines; Press Releases
• Market Information: Ticker Quotes; StockScene
• About Us: Tour the Floor; Members; History; FAQ; SSManifest
• Events Calendar: Conferences & Workshops; Seminars & Educational
• Library: Online Publications; Regulations; Links of Interest
• Share Your Views: Chat Central
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Seaboard’s e-commerce innovators view the future differently. Karen Greene stresses the progress that has been made, “Look how far we have come. We have a huge Web presence, and we are making maximum internal use of Internet technologies. We have content management software, a growing Web development group with its own budget and at least recognition that we can use a different development methodology for the Web, although we don’t really have one in place right now. That is a lot of progress for a conservative organization in just seven years”. Overall, Todd Lawson agrees that much progress has been made, “The WMG has done an excellent job with the Seaboard Web presence thus far. We just need to keep developing its marketing potential, while insuring that we do not compete with our traditional product base”. Roger Fields views an e-business plan, like the one his WIC subgroup is working on, as central to the future. In Fields’ words, “We have a great opportunity here, with these consultants, to really look at the entire business from the standpoint of leveraging technology. I’d like to see this e-business plan completed and used. That is my hope for the future”. Vance Fernandez, however, thinks the emphasis on e-commerce is misplaced, “All the e-commerce development we have done is minor in impact compared to what Internet-based electronic trading could do”. As for Sam Butler, he is planning his retirement, “I want to enjoy my grandchildren. I am too old to play the role of innovator here anymore. But I worry — who will lead the technology initiatives at Seaboard in the future?” Paula Reese, no longer directly involved with the Seaboard Web site, continues to watch the Internet activity at both Seaboard and its competitors’ sites. In Reese’s words, “I believe that Seaboard’s Web site is now impressive. With added graphics, it provides
users with a virtual tour of the exchange floor, timely financial information and useful financial links. At long last, the financial news industry is raving about Seaboard.com. That is very rewarding to me personally. However, when I look back over the years, I see a lot of things I’d like to change. I can’t help but believe that we could have gone faster, farther. When I consider what some of Seaboard’s competitors are doing on the Internet, I see missed opportunities. We should be expanding into retail services: e-quotes, e-payments, even e-trading. I just don’t see that happening here anytime soon”. Reese has begun talking with executive placement firms.
REFERENCES
Arvedlund, E. E. (2000). New bottom line. Barron’s, 80(12), 26-27. Carroll, M., Lux, H., & Schack, J. (2000). Trading meets the millennium. Institutional Investor, 34(1), 36-53. Knorr, E. (2000). Content management crossfire. CIO Magazine. Retrieved August 7, 2001, from http://www.cio.com/archive/120100_et_pundit.html Latimore, D. (1999). Of markets and mania. Financial Executive, 15(6), 24-27. McAndrews, J., & Stefanadis, C. (2000). The emergence of electronic communications networks in the U.S. equity markets. Current Issues in Economics & Finance, 6(12), 1-5. Morgan, C., & Perkins, S. (2000). Electronic exchanges emerge transforming the equities business. Wall Street and Technology, 18(10), 72-74. Mosser, M., & Codding, J. (2000, June). Gentlemen, start your exchanges. Futures, 29(6), 76-80. Osterland, A. (1998, September 7). Electronic trading: A hue and cry in the pits. Business Week, 3594, 82. Rayport, J. F., & Sviokla, J. J. (1995). Exploiting the virtual value chain. Harvard Business Review, 73(6), 75-85. Tsang, R. (1999). Open outcry and electronic trading in futures exchanges. Bank of Canada Review, 150(3), 21-39. Wagley, J. (2000). The battle for overseas listings: The NYSE/Nasdaq duel for listings in Europe and Asia is getting interesting…and more competitive. Investment Dealers’ Digest, 16(21), 16-21. Wall Street and Technology. (1999). The great auction-issuing share and enticing IT talent. 17(11), 30-36. Retrieved August 7, 2001. Wall Street and Technology. (2000). eCommerce in the U.S. fixed income markets. Retrieved August 7, 2001, from http://www.wallstreetandtech.com /story/electronic Trading/WST20000817S0001
FURTHER READING
Beath, C. M. (1991). Supporting the information technology champion. MIS Quarterly, 15(3), 355-372. Beer, M., & Nohria, N. (2000). Cracking the code of change. Harvard Business Review, 78(3), 133-141.
Bhide, A. (1994). How entrepreneurs craft strategies that work. Harvard Business Review, 72(2), 150-161. Cash, J. I., Jr. (1994). A call to disorder. InformationWeek, 498, 80. Cash, J. I., Jr. (1994). The art of the possible. InformationWeek, 504, 112. Christensen, C. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business School Press. Christensen, C., & Overdorf, M. (2000). Meeting the challenge of disruptive change. Harvard Business Review, 78(2), 66-76. Clair, C. (2000). Several exchanges becoming for-profit ventures. Pensions and Investments, 28(24), 61. Evans, P., & Wurster, T. (1999). Getting real about virtual commerce. Harvard Business Review, 77(6), 84-94. Gallaugher, J. (1999). Challenging the new conventional wisdom of net commerce strategies. Communications of the ACM, 42(7), 27-29. Ghosh, S. (1998, March/April). Making business sense of the Internet. Harvard Business Review, 76(2), 126-135. Henderson, J. C., & Venkatraman, N. (1999). Strategic alignment: Leveraging information technology for transforming organizations. IBM Systems Journal, 38(2-3), 472-484. Rahman, S. M., & Raisinghani, M. (2000). Electronic commerce: Opportunity and challenges. Hershey, PA: Idea Group Publishing. Roberts, T. L., Gibson, M. L., & Fields, K. T. (1999, July-September). System development methodology implementation: Perceived aspects of importance. Information Resources Management Journal, 12(3), 27-38. Scharl, A. (2000). Evolutionary Web development. London: Springer. Sherrell, L. B., & Chen, L. (2001, April). The W Life Cycle Model and associated methodology for corporate Web site development. Communications of the Association for Information Systems, 5(7). TradeNet. (n.d.). Retrieved August 7, 2001, from http://www.tradeweb.com/ abouttradeweb/Introduction.htm Venkatraman, N. (2000, Spring). Five steps to a dot.com strategy: How to find footing on the Web. Sloan Management Review, 41(3), 15-28. Wall Street and Technology. (2000). Evolving exchanges. 18(11), 14-20.
Appendix A. History of the World Wide Web

• 1989: Tim Berners-Lee and colleagues at CERN propose the World Wide Web.
• 1990: First commercial provider of dial-up service, The World.
• 1991: First version of HTML.
• 1993: First widely-adopted graphical browser, Mosaic.
• 1994: New York Times estimates there are 20 million Internet users. The World Wide Web Consortium (W3C), an international group of industry and academic representatives, forms to help insure commonality.
• 1999: eGlobal Report estimates there are 130.6 million Internet users.
Linda V. Knight is associate dean of DePaul University’s School of Computer Science, Telecommunications and Information Systems. She is also associate director of CTI’s Institute for E-Commerce. She conducts research and teaches in the area of e-commerce business strategy, development and implementation, and lectures on the topic of ecommerce curricula. An entrepreneur and IT consultant, she has held industry positions in IT management and quality assurance management. In addition to a PhD in computer science from DePaul University, Dr. Knight holds a BA in mathematics and an MBA, both from Dominican University. Theresa A. Steinbach is an instructor at DePaul University’s School of Computer Science, Telecommunications and Information Systems. She conducts research and teaches in the area of traditional and e-commerce systems analysis and design. As owner of an IT consulting firm, she has provided turnkey solutions for small and medium size enterprises in the financial services, municipal government, and health care industries. Ms. Steinbach is currently completing her PhD in computer science from DePaul University. She holds a BA in mathematics, an MBA in quantitative economics, and an MS in information systems from DePaul University. Diane M. Graf is a visiting assistant professor at Northern Illinois University’s College of Business, Operations Management and Information Systems Department teaching both undergraduate and graduate courses in IT management and strategy. Her past administrative positions in both education and business support her teaching and research interests in information technology and group collaboration. In addition to an EdD in management information systems from Northern Illinois University, Dr. Graf holds a BS and MS in business education from Northern Michigan University.
This case was previously published in the Annals of Cases on Information Technology, Volume 4/2002, pp. 376-389, © 2002.
Chapter XVIII
Ford Mondeo:
A Model T World Car?1

Michael J. Mol, Erasmus University Rotterdam, The Netherlands
EXECUTIVE SUMMARY
This case weighs the advantages and disadvantages of going global. Ford presented its 1993 Mondeo model, sold as the Mystique and Contour in North America, as a "world car". It tried to build a single model for all markets globally to optimize scale of production. This required strong involvement from suppliers and heavy use of new information technology. The case discusses the difficulties that needed to be overcome as well as the gains that Ford expected from the project. New technology allowed Ford to overcome most of the difficulties it had faced in earlier attempts to produce a world car. IT was flanked by major organizational changes within Ford. Globalization did not spell obvious success, though. While Ford may in the end have succeeded in building an almost global car, it did not necessarily build a car that was competitive in various markets. The Mondeo project resulted in an overhaul of the entire organization under the header of Ford 2000. This program put a heavy emphasis on globalization, although it perhaps focused too little on international cooperation and too much on centralization. In terms of Ford's own history, the Mondeo experience may not be called a new Model T, but it does represent an important step in Ford's transformation into a global firm.
BACKGROUND
An important stream of work in the area of international management (Prahalad & Doz, 1987; Bartlett & Ghoshal, 1989) is concerned with the location paradox: should an internationalizing firm be responsive to local circumstances or go for global integration? On the one hand global integration presents interesting business perspectives, because firms can offer a single product worldwide and use a very uniform way of organizing and producing based on standardized technology. On the other hand there are usually diverse demands being made on multinational corporations (MNCs) by their local
customers, host governments or other parties. Managing the location paradox always requires balancing between the local and global perspectives. In the most basic terms, the advantages of being global are that firms obtain advantages of scale. Imagine if there were really only one global market for a firm, for example if customers demanded precisely the same car everywhere. A firm could build one huge factory from which it could supply the entire world, one marketing center, one R&D unit and so on. The costs per unit of production would be minimal. In reality we, of course, do not have such markets, but there are certain products that benefit from being produced by international firms. Coca-Cola is a global brand and benefits from global advertising. However, the taste of the beverage varies regionally. Given that most products are not global, surely there are advantages to being local as well. These are best summarized as "being in touch" with the environment. Firms that operate locally can react more quickly and more effectively to customers, deal with local partners and governments, and so on. An haute cuisine restaurant usually serves a local customer base and operates locally. Table 1 provides an overview of the advantages of being local and those of being global, as they were conceived by Prahalad and Doz (1987) in their work on the integration-responsiveness grid. All tables can be found in the Appendix.

Because local and global are two countervailing forces, there will always be a tension between the two. Even for fairly global companies, there is a need to act locally (consider what actions Coca-Cola needed to take when people in Belgium got sick due to drinking it), and no local company can completely ignore the forces of internationalization. However, the consequences of this tension for management policy may not be stable over time. Depending on the extent to which firms can unite the global and the local, they are more or less successful in becoming a "transnational" firm (Bartlett & Ghoshal, 1989). Transnational firms are able to manage the local and the global simultaneously and are thus believed to be able to achieve superior performance. In this light it is interesting to investigate further the consequences of introducing new information technology to a multinational firm. Information technology is thought to be one of the key drivers of globalization. Is IT indeed the stepping stone towards becoming a more global firm? Or, alternatively, does IT simply allow a firm to manage the tensions between the global and the local better, without changing the balance between the two?

This framework will be applied to the case of the Ford Mondeo,2 a car introduced by Ford in 1993 as a "world car". Ford Motor Company barely needs any introduction. It is of course known as one of the world's premier manufacturers of automobiles. Its cars have been sold all over the world for many decades now. Table 2 describes some of Ford's key financial data. A world car is a single car that is sold in different parts of the world, although slight variations may be made to the model. The following three questions will guide the analysis and discussion of this case:
1. What were the advantages of going global with its Mondeo for Ford and what barriers did it face to do so?
Obviously Ford must have thought there were important advantages attached to producing the first-ever world car. These globalization advantages will be discussed in the case in order to get an idea of the strategic motives behind this decision. On the other hand the automobile industry has always faced local constraints, for example in terms
of traffic rules, which needed to be overcome. Therefore a delicate balance needs to be found and maintained between going global and operating locally. What kind of managerial challenge did Ford face here?
2. Was new IT the key enabler in establishing this global production and supply structure?
A world car poses new and possibly very different demands upon the organization and technology in use by Ford. Even if the parts going into a world car and the production technology are essentially the same as for an ordinary car, a new logistics and communication structure is required to produce the car. From an IT perspective it is especially interesting whether it was the new technology that helped Ford to produce globally, or other factors. It has often been suggested that IT is one of the key drivers of the process of globalization. Does the Mondeo case confirm this?
3. Has the Mondeo become the new "Model T"?
Ford attained much of its fame and present status from the highly successful Model T, a car produced at a very large scale at the beginning of the previous century. The Model T helped Ford to become by far the largest automobile assembler of the world at the time, until its demise in the late 1920s caused a severe disruption to the Ford Motor Company. The world car concept inherent to the Mondeo presented a new mass scale production innovation. Was the performance of the Mondeo good enough to call it Ford’s new Model T? Ford has always been one of the world’s largest and most international manufacturers of cars. It was founded in 1903 and first produced abroad in 1904 in Canada and expanded intercontinentally in 1911 to Manchester, England. Chandler (1964) gives a very detailed description of its early history. Ford differentiated itself from its competitors in 1908 through the unique manufacturing strategy implemented by its legendary founder, Henry Ford. Ford decided that economies of scale and a low-cost product would be the key to competitive advantage. Therefore Ford built only one model, the Model T, from 1908 onwards and attempted to do this in mass scale, low-cost production. The reason Henry Ford chose the Model T from his range of designs was that it was most suitable for mass production. The product was fully standardized. One of the innovations Ford introduced was the moving conveyor belt. Demand for the T-Ford grew rapidly, sparked by the low prices and economic growth in the United States. Ford expanded its number of assembly sites across the United States. In 1921 Ford’s Model T sold 845,000 units for a U.S. market share of 55%. Ford became a huge industrial corporation over the period, in part because it also integrated backward by acquiring coal mines, railways and steel mills. However, the Model T’s success in the end also proved to be its demise. Demand fell steeply after 1921 and in particular during 1926 and 1927 due to the lesser economic situation and increasing substitution by second-hand cars. Ironically the second-hand market was flooded by Ford’s own T model. Those consumers that bought new cars were no longer interested in the simple T-Ford model. With these lower volumes Ford was no longer able to maintain its low costs. This initiated a long rebuilding period for the Ford company, which saw its eternal rival General Motors evolve into the world’s Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
largest car manufacturer, which it would remain until the present day. GM’s Alfred Sloan introduced a number of managerial innovations like the divisional M-form (Chandler, 1964) that provided GM with the ability to produce multiple models and to reconfigure its organization more effectively.
SETTING THE STAGE
In more recent history Ford initiated a new model, which was also seen to be a breakthrough model. Some observers, though not Ford itself, have likened it to the T-Ford. When Ford Motor Company in 1992 publicly launched its plans to produce a world car, it was already its third attempt to do so. The idea behind a world car, sometimes also referred to as a global car, is that one design fits all. More in particular, the efforts by Ford have been aimed at building a car that can at least be mass-sold in both Europe and the United States, by far the largest markets for Ford. The very first attempt by the company to build one single platform that could be sold in different markets all over the globe without major modifications even dates back to 1960 (Kitchen, 1993). This was of course a time when the word globalization had not entered management vocabulary and most car producers were still mainly oriented towards their domestic markets. The project proved not very successful: some 60 days before production was to be started, the U.S. version was cancelled. The reason was that although the car was innovative, being a front-wheel drive economy car, it would also be more expensive to produce than existing larger models.

A second try came in 1981 when Ford tried to sell the same Escort model all over the world (Kitchen, 1993). This time a much larger effort was undertaken to design a single model for both markets. Although the Escort in itself proved to be a marketing success, it had little to do with a world car in the end: only two minor parts were identical in the European and North American versions. These two parts were the water pump seal and the Ford oval badge, by the way. This time the main reason was that two distinct development teams had operated simultaneously on both sides of the Atlantic. Each group posed its own idiosyncratic demands. The Ford organization was still not ready at the time, so it seemed.

Under what circumstances did the Ford Mondeo come onto the market? Ford was still a fairly large firm, which was present in all key markets. Especially in Europe and North America, it had established a broad presence and attained a lot of market share. Ford was even European market leader in 1984, but slipped back into fifth place around 1992, just before the introduction of the Mondeo. Table 3 gives some market share information for different markets in various years. More recently, after the introduction of the Mondeo, Ford has of course grown through acquisitions. In Europe, the purchase of Volvo in the late 1990s is the most obvious example. However, over the last two decades, Ford also started to invest on a larger scale in Asia. It did so mainly through agreements with Mazda of Japan and Kia of South Korea. In April 1996 Ford even obtained effective control over Mazda. One problem related to both Mazda and Kia, though, was that they were both relatively weak players within their national systems. Kia came close to bankruptcy in October 1997, after which the Korean government decided to nationalize the company. Mazda has been widely cited as a firm that lacks both scale and bargaining power to be an effective producer on its own. It stands only in fifth place in the ranking of automobile producers in Japan and came close to bankruptcy around 1980. Ford's key financial data
are contained in Table 2. They show that Ford Motor Company has grown substantially over the last 25 years, which is in large part due to the external acquisitions and the addition of rental (Hertz) and financial services.
CASE DESCRIPTION
After the 1960 and 1981 failures, Ford started its third attempt to build a world car in 1986. Using the experience of what went wrong in 1981, European and American engineers started designing a new car, under the code name CDW27. Outside suppliers were involved in the project from 1989 onwards to develop specific components and modules of the car in a joint engineering effort. Three different brand names finally emerged, the Ford Mondeo for the European market and the Ford Contour and Mercury Mystique for the North American market. Of these cars, 90% of the elements were identical, although this is hard to see from the outside where the cars appear to be different. However, certain differences remained. Seat belts and air bags had to be adapted to the local markets. Since U.S. drivers do not always wear seat belts, their cars were provided with larger air bags. European drivers had a smaller, 30-liter air bag. Ford admitted that it had to cope with different supplier processes, which made it tough to achieve the desired component commonality. Furthermore, local conditions and mandates forced a number of changes. Most of the problems arose when Ford had to reengineer the Mondeo for the North American market and encountered U.S. federal standards and market conditions. The stakes were high enough for Ford to make the success of this new car crucial. Some $6 billion were invested before it ever came into production, which is far more money than most competitors spend on a new model (the comparable Chrysler Neon cost only $1.3 billion to develop, for example). Because of the radically new concept, it is sometimes referred to as a “new Model T”, the car that brought Ford its original fame in the 1920s. In Europe, sales of the Mondeo started in 1993; the United States followed some 14 months later. The car was sold in some 76 countries all over the world, although most sales are obviously realized in the United States and Europe.
Motives
Why did Ford decide to try its luck a third time, despite the fact that nobody else in the car industry was building a world car? The answer provided by the company was a reference to its high degree of internationalization, not just in terms of sales, but also in the spread of production sites and R&D knowledge. This led Ford to the conviction that it would be beneficial to consider a global approach instead of a multi-regional or multi-domestic approach. Philip Benton, Ford’s President until December 1992, suggested, “A global company can concentrate its resources where they will be used most effectively”. So what advantages did this global structure provide the company with? Economies of scale were believed to be the first and most important reason behind the world car project. These economies were not only to be obtained in the production of the different brands, but also in their design and the sourcing of components and parts from third parties. Being able to purchase double the quantities that a normal car model Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
requires obviously gave Ford room for bargaining about prices. A second reason stems from the increased flexibility that Ford obtained. Both flexibility in purchasing and flexibility in production are thought to have grown. Ford can switch between locations (Europe and the U.S.) both for its own production as well as for sourcing components from suppliers. It would be easier to cover for delivery deficiencies on either side of the ocean too. Other reasons that were cited less often include achieving a global image vis-à-vis customers, creating new knowledge through a worldwide network and a reactive approach to the loss of market share in some markets. This last point raises an interesting question: did Ford decide to build a world car out of a position of weakness, or one of strength? Although Ford was still clearly the number two manufacturer of cars in the world (after its eternal rival General Motors), Toyota was starting to catch up, as were others. Furthermore, Ford had experienced some pretty bad losses, especially in 1991 when it lost almost $2.3 billion. So the reactive strategy argument seems to have some ground as well, as Ford’s position was gradually slipping. Ford felt that it needed to do something new that could again give it a competitive edge over key rivals. Since Ford still had plenty of financial and technical resources available when it embarked on the world car project, it could afford to invest in such a large project. And Ford had the advantages of a strong presence in both the North American and European markets. Ford was strong but getting weaker.
Internal Organization
The Mondeo/Contour/Mystique was built on a project basis, where both the European and North American organizations contributed to the final product. From the earlier adventures with the Escort model, Ford had learned that real integration would be important. When the Escort was designed, two different design teams from Europe and the U.S. were working on it simultaneously. As Benton put it, “When there were opportunities to deviate from the shared engineering plan, both teams made the most of them, protecting their own turf and defending their own ideas about what constituted the ‘right’ product”. Ford’s factories in Europe are concentrated mainly in Germany, the United Kingdom and Belgium. The Ford world car was assembled in three different plants, in Genk (Belgium), Kansas City (Missouri, United States) and Cuatitlan (Mexico). The European plant initially produced some 400,000 units annually and the two North American plants some 300,000 in all. So it may well be concluded that there was an even spread between the two continents. Some key components in the car were sourced internally. At the beginning of the 1990s, some 50% of components in the automobile industry were sourced internally, but this percentage decreased rapidly. One example of intra-firm sourcing for the world car was the transmission. The manual transmissions were produced in Halewood in the United Kingdom, and Cologne in Germany. The automatic transmissions came from a Ford plant in Batavia, Ohio. This points to a form of regional specialization in the sourcing network, since automatic transmissions are far more popular in the U.S. than in Europe with any new car model. Some 9% of the European Mondeo cars were equipped with automatic transmissions, a figure that was still 3% above Ford’s expectations, by the way.
Role of Outside Suppliers
Outside suppliers fulfill a key role in the project, since some $2.5 billion were spent annually by Ford on components and parts for the world car. Important issues arise on the nature of the sourcing network. First of all Ford tried to integrate the European and North American supply bases as much as possible. Albert Caspers, Ford of Europe’s chairman before the Ford 2000 program started in 1995, suggests: “The philosophy was to develop a part only once from one supplier in the world. This is the first project where we have done this”. One of the key strategies was to reduce the total number of suppliers severely. The Tempo and Topaz models that preceded the American version of the world car had over 700 different suppliers. Ford was able to reduce this number to 227, using a worldwide supply office and early sourcing. The suppliers that participated were chosen through a global search. Ford itself used the term global-capable suppliers to illustrate its requirements. The suppliers were either chosen on their past performance or on a surrogate part. Dick Fite, who was the CDW27 supply director at the time, says: “The basic management challenge was to bring the two regional supply bases in North America and Europe together to find the best of all worlds in terms of technology, quality, cost and logistic efficiency, so we could rationalize down to the fewest number of suppliers of best-of-class components on a worldwide scale”. One way of achieving this reduction that Ford used was the tiering of suppliers. At Ford in Basildon (UK), Alan Draper, exterior purchasing agent, said (back in 1993): “We have used tiering in areas like instrument panels for several years and are looking to extend the concept to other areas”. The suppliers were approached long in advance of actual production. Most of the contracts were agreed upon for a longer period of time. Many suppliers committed themselves to the project around 1989-1990. This allowed Ford enough time to discuss the car and its components extensively with the suppliers. Just-In-Time is a central element of the production of the world car, although the intercontinental suppliers could, of course, not deliver JIT. For the other supplies, there was a great perseverance in pressing suppliers to set up plants in the proximity of Genk, in the case of the European Mondeo. Ford itself did not hold any stock of components and parts in the plant as part of the JIT system. This is why many new sites were established within 30 km of Genk, delivering within the hour. They included Kautexwerke (gas tanks) and Lin Pac Ekco (interior front door trim panels), who both started production in Belgium, in the towns of Tessenderlo and Overpelt. A second group started production a little further away, such as Ryobi Aluminium Casting. The Japanese parent of this company was asked by Ford to produce transmission and clutch cases for the Mondeo. A new and successful plant was established in Carrickfergus, County Antrim, Northern Ireland. In 1994 it was heralded as the “best factory in Northern Ireland”. A third track that followed was by suppliers that were already located near Genk. Rehau, from Rehau in Germany, entered into a cooperative agreement with Arrow Molded Plastics of Circleville, Ohio. Together they developed interior scuff plates, which Rehau then produced for the Genk factory and Arrow for North American production. Finally, some European producers moved to North America to establish joint ventures there, as well as Americans coming over to Europe. 
The ever-present cost issue played an important role in the sourcing network of Ford. Economies of scale were an important reason to develop a world car. Ford estimated that through the higher volumes, it was able to reduce the cost of supplies by $150 a car.
Since some 700,000 cars were made annually, this saved the company up to $100 million a year. The following statement by Draper neatly illustrates the cost pressure that Ford puts on its suppliers: “We are asking our suppliers to absorb all future cost increases resulting from more expensive labor, materials and overhead”. Thus these buyer-supplier relationships were not just cooperative, but contained elements of conflict too. To what extent was this sourcing network international? It involved mainly suppliers that produce in North America and Europe, although some of these suppliers originated from Japan. Of the aforementioned $2.5 billion, $140 million involved exports from Europe to North America and $260 million exports from North America to Europe. The North American share in the components of the European Mondeo was somewhere around 15%. This figure used to be in the range of 1-2% for older models, so this was a really remarkable change. This project also revealed some clear differences between supplier processes in Ford Europe and Ford North America. This created serious problems in the project: achieving maximum component commonality and quality were made much harder. On the other hand it also allowed Ford to gain insight in the peculiarities of the two parts of its organization. These two different practices provided the firm with a possibility for learning.
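The supply-savings arithmetic quoted above is worth making explicit (a simple back-of-the-envelope check, not a figure taken from Ford):

$150 saved per car × roughly 700,000 cars per year ≈ $105 million per year,

which is of the same order as the "up to $100 million a year" the case reports.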
Information Technology
The Mondeo project posed two different kinds of demands on Ford’s information systems. First there was a need for IT to support or replace existing manual labor in the design and engineering area. This is simply a requirement in all modern production, particularly production of automobiles. Because of the increased complexity of cars, the ever-increasing technical demands and cost pressures, all car makers have introduced IT in these processes. Second, Ford was looking at ways to rapidly exchange data between different parts of the world and to support long-distance communication between its employees and with its suppliers. This was specific to the world car project because it put demands on international information exchange that were not there in a regular European or North American project. The global scale of production allowed Ford to reduce the number of times certain operations had to be performed. Two prime examples of the first kind of IT application mentioned above are structure calculations and design improvements. Ford invested in networked computers for problem solving in the body structure design. To calculate the optimal body structure, the finite element method is used nowadays. Basically the finite element method calculates what happens when pressure is put on small squares. Up to 70,000 small squares combine to form the body structure of the car. In order to make such calculations, Ford had to use a large and powerful computer. Therefore it bought a new Cray 4MP super-computer during the Mondeo project, which was located at Ford’s headquarters in Dearborn, Michigan. This computer served both the European and U.S. versions and ran for almost a year to complete all calculations. Obviously, this kind of application completely relies on computers like the Cray 4MP. The design of the car poses other problems. Fritz Mayhew, chief of North American design of Ford suggested: “An internationalism has taken over in designs and products, making it much more possible to do a global car”. In order to do that, Ford’s engineering people had to rely on standardized programs like Computer Aided Design and Computer Aided Manufacturing
(CAD/CAM). In 1991 an international engineering team was installed in the Genk plant to prepare for the production launch of the Mondeo. This team exchanged data and pictures with other Ford engineering centers globally. CAD/CAM was the key tool used to reduce development times. The second kind of IT application mentioned above does not deal with the technical capabilities of computers, but with the ability of IT to support communication processes over longer distances and to integrate geographically remote parts of the Ford organization and its suppliers. During the Mondeo project Ford installed real-time, multi-site, simultaneous engineering and information transfer as well as a global e-mail system. Many up-front investments in facilities were made by Ford to allow for supplier involvement in product development, supply and manufacturing. This included telecommunications and computer equipment. From the earlier adventures with the Escort model, Ford had learned that real integration would be important. To achieve such integration Ford relied more heavily than in the past on information technology, like a complex video conferencing system. Prior to the launch of Mondeo production, video conferencing was already used in communications between Ford’s technical centers in Dunton, UK and Metternich, Germany. Later a transatlantic link was established. The video conferencing rooms at Dunton are booked up to 16 hours a day. John Oldfield, head of the world car program, said about the transatlantic video link: “Without video conferencing, the amount of traveling involved and the time differences would make a project like CDW27 near impossible”. To make the global engineering project viable, a worldwide communication infrastructure was needed, particularly one that would allow for sufficient communication with external suppliers. However, not everything could be solved by long-distance communication. It was necessary for the project to physically move people. Oldfield, traveled back and forth across the ocean about once a month for six years. Throughout the project there were a minimum of 35 Americans working in the European organization, mostly engineers, purchasing people and finance people. At one point the engineering team consisted of some 800 people. Ford flew hundreds of technicians back and forth across the ocean. Just before production started in Genk, Ford temporarily airlifted some 150 engineers from England and Germany to big, trailer-like mobile offices outside the Genk plant (at an estimated cost of $4 million to $6 million). Their goal was troubleshooting and solving production problems. However, Ford believed it was getting more for its money than the three new models. This includes an improved global communications network. Alex Trotman suggested in 1994 that: “...our investment is in much more than hardware. We’ve been learning a new way of doing business for the long term. I have envisaged Ford with a global organization since the late 1960s. It’s a natural evolution. Now is the right time for such a change. The tools are there — computers and communications — and we have a strong balance sheet. If you make big changes when times are difficult, expediency often takes precedence”.
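To make the body-structure calculations described earlier in this section a little more concrete, the following sketch uses the generic textbook formulation of the finite element method; the case does not document Ford's actual software, so the notation is illustrative rather than a description of Ford's system. Dividing the body shell into small elements (the "small squares" mentioned above) turns the structural problem into one very large system of linear equations:

K u = f,   with K = Σ_e K_e,

where K is the global stiffness matrix assembled from the stiffness matrices K_e of the individual elements, u is the vector of unknown nodal displacements and f is the vector of applied loads (the "pressure" in the case description). With roughly 70,000 elements and several unknowns per node, the assembled system easily runs to hundreds of thousands of equations, which helps explain why Ford needed a supercomputer for the job and why the calculations for both versions of the car took the better part of a year.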
CASE ANALYSIS
1. What were the advantages of going global with its Mondeo for Ford and what barriers did it face to do so?
The advantages of going global were demonstrably there. Ford saved money by ordering larger supply quantities. Furthermore it could use the same internally produced parts, such as transmissions, for the three cars on both sides of the Atlantic. The case also shows that Ford struggled to find the balance between global integration and local activities. While the benefits of going global appeared obvious to the firm's managers, Ford was unable to avoid duplicating structures and adapting its cars to local demand. Local regulation was one reason for adapting the cars: North America and Europe obviously differ in some respects. Different consumer tastes also contributed to the adaptations. Europeans and North Americans sometimes tend to use their cars in different ways. For example, parking space is limited in most of the (older) European cities and streets can be rather narrow. North Americans often drive longer distances, thus preferring cruise control. Many Europeans prefer manual transmissions because they fit their driving style better than an automatic transmission. Thus some of the barriers to going global could not be overcome by Ford.
2. Was new IT the key enabler in establishing this global production and supply structure?
From the case description two arguments stand out. One is that Ford could not have made the transition required for the world car without new means of information and communication technology. The other is that these new technologies helped to overcome some of Ford's problems but failed to remove all of its concerns. It was still necessary to move around large numbers of people in order to deal with local production problems, for example. Ford seems to have done a good job in integrating some of the technical functions involved in the project, particularly engineering and design. It is also obvious that most, if not all, of the sales efforts were localized. In fact, most consumers may not have noticed that they were buying a world car. As far as external suppliers are concerned, there is not much information on the use of IT. In historical perspective it seems that what occurred at Ford during the Mondeo project was a change of two kinds when compared with earlier experiences. First, there was information technology to allow for communication across borders, or perhaps we should say across oceans. Second, there was a conscious effort to have employees on both continents communicate with one another about the main design, but also about all the details involved in getting the car produced.
3. Has the Mondeo become the new "Model T"?
Was the performance of the world car project good enough to call this car a new Model T? Ford itself reported to be quite satisfied with the results of the world car project. Sales of the Mondeo model in Europe were quite good from the beginning, 470,000 units over the first 15 months, and it was also chosen as the European car of the year in 1994 right after it was launched. It must be admitted that the first remake of the model came rather quick though, in 1996. Table 5 provides the unit sales of the Mondeo in Europe and its market share. In the North American market, the sales were reasonable too, although the Model targeted a smaller segment from the beginning. In North America there were questions surrounding the high pricing, which caused some problems in marketing the product. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Ford itself cited the learning effects, both internally and towards suppliers, as a very positive outcome. According to Parry-Jones, the vice-president in charge of the only Europe-based vehicle center in the new Ford 2000 structure: Ford “is now a lot more comfortable with the idea of working across the major regional borders between Ford and its supply bases and between the various organizational elements within Ford”. This implies that Ford has increased its ability to conduct such global projects. As such, the company appeared to be quite satisfied with the outcomes of the projects. Although it may not have constructed a new Model T, it did set out in a new strategic direction by becoming a more global firm. External critics of the project have centered on two issues. The first is whether it is really possible to build a global car and use global suppliers. The problem is that while cost savings drive the need for a global car, there is a danger of the result being too compromised to appeal to any specific market. In other words, consumers in different countries do want special features. Ford encountered this problem for example with the cup holder, that is a standard item in the U.S., but not so in Europe. As has been mentioned before, because of local tastes and regulations, the two versions only have 90% of the elements in common. Some industry watchers have also doubted whether consumers really want a global car. They suggested that an excellent car is what consumers want. Both the Honda Accord and Toyota Camry models have been sold across continents in roughly the same versions as well. But this was not because they had been made with the idea of a global car in mind, but rather because they were built to be excellent cars. These critics suggested that an excellent car can sell globally, but a global car cannot sell without some form of excellence. On a more basic level, one can also wonder whether a car that is produced in only two regions is really global and whether sourcing almost 100% from the same two regions is really global sourcing. The second issue of criticism concerned the development time of the car. The standard that was set by most Japanese producers is two to three years. It took Ford some seven to eight years to develop the car, and even four years after outside suppliers were first involved. The $150 savings per car that were reported earlier by sourcing in larger quantities were more than offset by a $200 extra investment per car that Ford had to make in the car, following an improved standard that Nissan introduced in the European market in 1991 (including improvements in the suspension and the engine mounts). So the long development time cost Ford dearly.
CURRENT CHALLENGES

Ford Beyond the Mondeo Introduction
After the introduction of the first world car, Ford decided to take the integration of regional organizations further. As part of the Ford 2000 program, it announced in 1994 that the European and North American car businesses would be merged into the division Ford Automotive Operations. The Asian and South American/Rest of World organizations were being left out for the time being. Since January 1, 1995, Ford was organized along product lines, in so-called vehicle program centers. Of these centers, four were based in the United States, whereas one was based in Europe. Each center was responsible for the worldwide design, operations and sales of a single product category. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Ford was truly trying to introduce this method of global sourcing in all of its operations. A key statement of the Ford 2000 program was that Ford has “a preference for suppliers with worldwide presence and resources to support global product development and manufacturing strategies”. The Ford 2000 program also included centralizing key managerial talent. Finally, it was unclear whether the organization along products in the program vehicle centers according to Ford 2000 would be beneficial. It was reported in the Financial Times in 1996 that many motor industry bosses said “Ford has failed to take account of the risks involved in convulsive change and will suffer as a result. Others, however, argue that hesitation today will only make the inevitable task of restructuring more difficult tomorrow”. Four years later, in late 2000, reports emerged that the Ford 2000 vehicle program had resulted in a strong centralization of activities in North America. As a result, Ford was thought to have lost touch with its European consumer base, which caused a loss of market share. It was suggested (Muller, Welch, Green, Woellert, & St. Pierre, 2000) that the Ford 2000 program led to an overly centralized organization and left Ford without leadership in Europe, South America and Asia. As a remedy the new Ford CEO, Jacques Nasser, reinstalled executives for various regions in 1999. The strong point of the whole Ford 2000 operation and Nasser’s subsequent moves appears to be that development times have come down dramatically, towards the level of Ford’s main competitors.
The Internet
As far as using information technology is concerned, Ford also took major steps in introducing new tools. The explosive growth of the Internet after the introduction of the Mondeo triggered new opportunities to improve information exchange between Ford and its suppliers. Ford says that its top priorities are currently customer satisfaction and e-business. A much-publicized example is Covisint, a cooperative venture started by GM, Ford and DaimlerChrysler that aims to be a marketplace for the automobile industry. Much of the data infrastructure of Covisint and other initiatives is taken care of by ANX, the Auto Network Exchange. Ford has participated in ANX since 1998. ANX is a private, virtual network that connects major car makers in North America and more than 280 of their suppliers. It is used, among other things, for design drawings, secure routing of product specifications and EDI transmissions. The advantage of ANX is that it removes existing proprietary connections between buyers and suppliers and thereby improves interchangeability. ANX is much faster than existing communication lines, reducing turnover times by 50 to 75%. This can generate large cost savings, while maintaining or improving the security of data exchange. ANX is able to cope with a large variety of data sources. While exclusive intranets or extranets only induce more connections and a larger burden of work, an open extranet like ANX decreases the number of electronic links. As the number of network members rises, so do the benefits of ANX. Ford's usage of ANX includes CAD/CAM applications, client-server applications, interactive mainframe applications and TCP/IP file transfer (for details see: http://www.anx.com/downloads/ford.pdf). ANX and its members have been pursuing expansion outside of North America. As Joe Boyd, a telecommunications analyst at Ford in Dearborn, said: "There's the issue of international suppliers needing to get access to applications on servers back here in North America, where we need the flexibility to support ones on other continents. An international ANX would be very desirable to us".
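The claim that an open extranet "decreases the number of electronic links" can be made concrete with a simple count; the figures below are illustrative assumptions, not data from the case. If each of n trading partners had to maintain its own proprietary connection to every other partner it exchanges data with, the number of bilateral links would approach

n(n - 1) / 2,

whereas a shared exchange such as ANX needs only one connection per member, that is, n links in total. Taking, purely for illustration, the 280-plus suppliers and a handful of car makers mentioned above as roughly 285 members, full bilateral connectivity would imply about 285 × 284 / 2 ≈ 40,000 links, against 285 connections to the exchange. This is also why the benefits of ANX rise as membership grows.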
CONCLUSION
To what extent is Ford’s experience in trying to achieve global integration by using information technology applicable for other firms and industries? It appears that all firms that internationalize their operations at one time or the other are confronted with conflicting demands. When McDonalds, the icon of global capitalism, internationalized its operations, it soon found out that it was usually necessary to adapt its menu to local demand. Furthermore some countries had regulations that prohibited some of the practices the firm developed in the United States. The benefits of global integration are often taken for granted by internationalizing firms or industry observers. However, there is no such thing as a uniform process of globalization. One may suggest that only 10% of the European Ford Mondeo was different from the North American Ford Contour and Mercury Mystique. However, precisely this 10% raised the cost level of the car to $6 billion and delayed its introduction in North America (Smith, 1994). Even in the Internet age, the location paradox sketched out at the beginning survives. As for Ford itself it may well be concluded that the Mondeo/Mystique/Contour is a turning point in its history. The world car has fundamentally altered Ford’s approach to building cars, which used to be two different approaches, depending on where the car was built. The world car induced an organizational change, in the Ford 2000 program, aimed at globalization. While it is not said that the outcomes of this program are positive, it is an important step in redefining the car industry. Mondeo may not be a new Model T. Then again: will there ever again be a car that bears the significance for mankind that this one model did, with its 15 million units of sales? Perhaps we should forget about the capital T and simply refer to Mondeo as Ford’s “New Model T”.
REFERENCES
Anonymous. (1994, July 23). The world car: Enter the McFord. The Economist, 69.
Anonymous. (1995, March 6). Managing across boundaries. Business Week, 24-30.
Bartlett, C. A., & Ghoshal, S. (1989). Managing across borders: The transnational corporation. Boston: Harvard Business Press.
Chandler, A. D. (1964). Giant enterprise: Ford, General Motors and the automobile industry. New York: Harcourt, Brace & World.
Cleveland, R. (1996). Vive la difference: Ford's Richard Parry-Jones relishes challenges of world cars. Ward's Auto World, 32(3), 119.
Fleischer, M. (1996, July 12). Excellence for the world. Automotive Production.
Ford Motor Company. (1976). 1975 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1981). 1980 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1986). 1985 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1991). 1990 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1995). 1994 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1996). 1995 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1998). 1997 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (1999). 1998 annual report. Dearborn, MI: Ford Motor Company.
Ford Motor Company. (2000). 1999 annual report. Dearborn, MI: Ford Motor Company.
Kitchen, S. (1993, March 15). Will the third time be the charm? Forbes, 54.
Mapleston, P. (1993). World car highlights shift in supply relationships. Modern Plastics, 70(10), 52-57.
Mol, M. J., & Koppius, O. R. (2002). Information technology and the internationalization of the firm. Journal of Global Information Management, 10(4), 44-60.
Muller, J., Welch, D., Green, J., Woellert, L., & St. Pierre, N. (2000, September 18). Ford: A crisis of confidence. Business Week.
Prahalad, C. K., & Doz, Y. L. (1987). The multinational mission: Balancing local demands and global vision. New York: The Free Press.
Smith, D. C. (1994). Kansas City here I come…Ford's 'global car' team gets set for job 1. Ward's Auto World, 30(7), 90-92.
Stevens, T. (1995, March 6). Managing across boundaries. Industry Week, 24-30.
Vasilash, G. S. (1994, September). Ford launches its global cars. Production, 62-64.
FURTHER READING
Anonymous. (1994, July 23). The world car: Enter the McFord. The Economist, 69.
Bartlett, C. A., & Ghoshal, S. (1989). Managing across borders: The transnational corporation. Boston, MA: Harvard Business Press.
Cavaye, A. L. M. (1998). An exploratory study investigating transnational information systems. Journal of Strategic Information Systems, 7, 17-35.
Cleveland, R. (1996). Vive la difference: Ford's Richard Parry-Jones relishes challenges of world cars. Ward's Auto World, 32(3), 119.
Mapleston, P. (1993). World car highlights shift in supply relationships. Modern Plastics, 70(10), 52-57.
Mol, M. J., & Koppius, O. R. (2002). Information technology and the internationalization of the firm. Journal of Global Information Management, 10(4), 44-60.
Prahalad, C. K., & Doz, Y. L. (1987). The multinational mission: Balancing local demands and global vision. New York: The Free Press.
Stevens, T. (1995, March 6). Managing across boundaries. Industry Week, 24-30.
Threlkel, M. S., & Kavan, B. C. (1999, December 14). From traditional EDI to Internet-based EDI: Managerial considerations. Journal of Information Technology, 347-360.
WEB RESOURCES (FEBRUARY 2001)
http://www.ai-online.com/news/120500FordMondeo.htm
http://www.anx.com
http://www.anx.com/downloads/ford.pdf
http://www.covisint.com/
http://www.ford.com/
http://www.just-auto.com/features_print.asp?art=305
ENDNOTES
1. The conceptual base of this case study is described in much more detail in Mol and Koppius (forthcoming).
2. The Mondeo was the European version of the car. The North American names are Mystique and Contour. Because the Mondeo was built in the largest quantities, produced and sold earlier, it is generally referred to as "the world car" by the business press but also by Ford itself. In the remainder of the text, the name Mondeo will be used to designate the entire world car project (including the North American models).
Appendix

Table 1. The advantages of global integration and local responsiveness (adapted from Prahalad & Doz, 1987, pp. 18-21)

Pressures for Global Integration:
• Multinational customers are important
• Multinational competitors are present
• Investment intensity is high
• Technology intensity is high
• There is a high need for cost reduction
• Universal market needs
• Access to raw materials and energy is limited

Pressures for Local Responsiveness:
• Customer needs differ
• Distribution channels vary across countries
• Substitutes available and product must be adapted
• Local competitors important in market structure
• Multiple host government demands
Table 2. Key data for Ford Motor Company, 1975-1999

                                              1975      1980      1985      1990      1995      1999
Sales North America (thousands of units)     3,072     2,457     3,237     3,284     3,993     4,787
Sales rest of world (thousands of units)     1,618     1,969     2,397     2,588     2,613     2,433
Total sales (millions of US $)               24,009    37,086    52,774    97,650    110,496   162,558
Net income (millions of US $)                323       -1,543    2,515     99        4,139     7,237
Total employees (numbers)                    416,120   426,735   369,300   370,400   346,990   364,550
U.S. employees (numbers)                     203,691   189,917   172,200   180,900   185,960   173,064

Source: Ford Motor Company, Annual Reports 1975, 1980, 1985, 1990, 1995 and 1999. Note: Accounting changes may have occurred over this period. Later years include more revenues and income from services. A net loss is signified by - (1980 only). Ford is currently divided in two sectors: automotive and (financial) services. In services key brand names are Hertz and Kwikfit. In automotive Ford owns not only the Ford brand, but also Volvo, Mazda, Lincoln, Land Rover, Jaguar, Aston Martin and Mercury.
Table 3. Ford market share between start of development of Mondeo and right after its launch

        U.S.     Canada   Germany   U.K.     Europe   World
1985    19.0%    17.0%    10.9%     26.6%    8.3%     13.7%
1995    25.6%    n/a      n/a       n/a      11.9%    13.3%

Source: Ford Motor Company, Annual Reports 1985, 1995. Note: For 1985, Europe includes all European markets other than Germany and the United Kingdom. For 1995, it refers to Europe as a whole, including Germany and the UK. Figures not reported separately for 1995 are shown as n/a.
Table 4. Short summary of events and their outcome

Year   Event                                Outcome
1960   First attempt to build a world car   American version is never produced
1981   Second attempt, Ford Escort          Two versions differ completely
1986   Third attempt is started             One U.S.-European engineering team
1989   Supplier involvement starts          Many components developed together
1993   Production and sales in Europe
1994   Production and sales in U.S.
1995   Ford 2000 program                    European and U.S. operations integrated
1999   Ford 2000 program fails              Regional executives re-appointed
Table 5. Number of units sold by the original Mondeo model and its European market share in the medium-sized car segment

Year            1993      1994      1995      1996      1997      1998      1999
Units sold      317,765   380,083   353,769   323,727   331,003   317,843   231,943
Market share    10.1%     13.3%     13.0%     11.3%     10.8%     9.7%      7.4%
Michael J. Mol is a PhD candidate at Rotterdam School of Management, Erasmus University Rotterdam in The Netherlands. He has been a visitor to Temple University in the U.S., WU Wien in Austria, and TU Berlin in Germany. His core research focuses on global sourcing strategy. Related research interests are outsourcing, buyersupplier relations, the influence of the Internet on these relations, and the development of business in the European Union. He has taught several courses, including European Business and Global Sourcing Strategy. He has published one book, two book chapters and three journal articles, including work in the Journal of Global Information Management and the Academy of Management Executive. He has been a consultant to the UN’s International Trade Center (ITC) and various business organizations.
This case was previously published in F. Tan (Ed.), Cases on Global IT Applications and Management: Successes and Pitfalls, pp. 69-89, © 2002.
Chapter XIX
Network Implementation Project in the State Sector in Scotland: The Influence of Social and Organizational Factors

Ann McCready, Glasgow Caledonian University, Scotland
Andrew Doswell, Glasgow Caledonian University, Scotland
EXECUTIVE SUMMARY
This case study, about the introduction of networked PCs in a local government office in Perth, Scotland, focuses on the importance of organizational and social factors during the implementation process. The implementation of the network in this case study is not a straightforward progression from one stage to the next, as might be inferred from the systems development life cycle "waterfall" model, but a circular, stop-and-start process with moves back to previous stages; it is more like a "spiral" of dynamic and unfolding processes. The case study highlights the links between technical and nontechnical aspects of implementation and the complicated process of project management, in which a balance is continually being sought between technical and nontechnical issues. But although social processes may reduce technical as well as social problems, not all problems can be solved by attention to social factors. Organizational constraints may limit the success of the implementation process, and there are also dangers in including users who, if their views are disregarded, may become disillusioned and adversely affect future development of the network.
NETWORK IMPLEMENTATION PROJECT
The traditional implementation process is usually depicted as a logical step-by-step process, such as the system development life cycle waterfall model (Gordon & Gordon, 1999). This approach may not be an accurate representation of the implementation of computer networks which may be less structured and require a circular, reiterative approach (Greil, 1982), regressing perhaps to previous stages (Dawson, 1994), and characterized more as a spiral (Gordon & Gordon, 1999). Analysis of a number of methodologies for the implementation of computer systems suggests also that an understanding of the internal organizational environment and high involvement of the intended system users are required during the implementation of computer networks. Problems and solutions, it has been argued, cannot be definitively stated or solved. They are situationally and socially constructed, ill-defined and emerge during the implementation process. New approaches are therefore required for the design and development of organizational information systems (Gasson, 1998). Project management may therefore require managers with flexibility and good technical and leadership skills. But project managers and their planning and control techniques have also been criticized because they often focus too much on IT costs and time targets, and assume that users will do whatever is necessary (Earl, 1992). There is evidence that a large majority of systems fail because of social rather than technical problems (KPMG, 1990). This view is supported by studies (Roberts & Barrar, 1992; Hirschheim et al., 1991; Beath, 1991) which indicate that the success of systems implementation is influenced by social factors. More emphasis should therefore be placed on organizational context (Doherty & King, 1998) which may change during implementation, affecting power and control relationships. The dominant groups may be challenged by other stakeholders who wish to advance their interests at critical stages (McLoughlin, 1999). There should also be more emphasis on users, who should be actively involved in the implementation process by contributing more about their requirements, and by participating in project teams, pilot groups and vendor presentations (Damodaran 1996). Involvement of users may, however, not cover all human issues, nor solve all problems (Hornby et al., 1992). The following case study was therefore carried out to investigate the planning, management and implementation of a computer network in a regional development agency in Perth, Scotland, and to assess the role of organizational, political and social factors and their contribution to the success of the implementation. The major issues included in the case study are shown in Table 1.
Background
The Perth Development Agency is one of 13 local agencies reporting to the agency's Head Office in Edinburgh. The agencies are concerned with encouraging and developing business in Scotland. The agencies form a quango. Quangos, quasi-autonomous non-governmental organizations, are organized and funded within the state sector, yet have considerable day-to-day independence. Quangos implement government policy, thus freeing government departments to look at broader issues of policy. Quangos are not accountable to Parliament, and are thus outside direct democratic control. Although government funded and staffed, they do not function like traditional government civil-service organizations.
Table 1. Major issues of case study

The planning, management and implementation of a network
Organizational and social issues involved in implementation, in addition to the technical issues:
- Organizational culture
- Organizational conflict and power
- Organizational change
- Systems development
The agency's Head Office has an Information Directorate consisting of two main divisions. The Data Processing Division's concern is the mainframe which provides the financial and accounting software used for project control and communications access to the outside world. In the recent past this division trialed a mainframe-based office system which failed partially because the trial group was badly selected (the members were the agency's directors who had little need to communicate with each other and who did not consider it appropriate to use a keyboard) and partially because the application itself was not very good. The other division is the Office Systems Division which developed from the central typing pool and used shared resource minicomputers to provide localized word processing. This division is now interested in developing a system of PCs linked in a local area network. Central government is enthusiastic about using IT solutions to improve service quality while reducing running costs by cutting staff numbers.

The main business aims of the Perth Development Agency are to:
1. Address the decline in business in the region.
2. Assist with the stabilization of some businesses.
3. Enhance the performance of businesses with real growth potential.
4. Attract new business to the area.
5. Maintain and improve the quality of the business environment.
6. Ensure an adequate supply of land and buildings for business development.
The key business sectors with which the Perth Development Agency are involved are tourism, food, and manufacturing.
Setting the Stage
The Perth office, a small office of 25 people in a three-floor and basement turn-of-the-century house converted to office use, is headed by a regional director who has overall responsibility. The office is structured hierarchically into three divisions: Projects, Business Development, and Property. In addition, there are Legal and Financial divisions which support the operating divisions. Each division has a division manager and a number of professional staff who work on particular projects (Figure 1).
Figure 1. Organization chart, Perth

Regional Director (secretary)
- Business Development Manager: secretary, 4 professionals
- Projects Manager: secretary, 4 professionals
- Finance Manager: secretary, 1 accountant
- Property Manager: secretary, 3 professionals
- Legal Manager: secretary, 1 assistant
The operating divisions work relatively independently of each other, reporting to the regional director. Within each division there are weekly “update” meetings when everyone gets together to exchange information and bring others up-to-date with the project’s status.
Case Description
Various central government changes, reflecting a desire to provide more local accountability, were being implemented. The agency's Head Office was to lose operational power to the regions whilst taking on an increased 'enabling' role. It was expected that these changes would increase the local workload, particularly in producing financial control and project reports. The regional director, recently appointed, is young, ambitious, enthusiastic, and competent and is determined to ensure the region is successful. Although he has had little experience in using networks, he feels that IT has a significant part to play in the region's success.

The current IT equipment in the regional office includes 13 standalone PCs. Nine of these are used solely for word processing, six of them being used by the director's secretary and the divisional secretaries. Three terminals are used to access data from the mainframe in the agency's Head Office as well as external databases. The regional director thinks the current equipment is underutilized for a number of reasons:
1. Lack of knowledge and training,
2. Mismatch of hardware and software,
3. Lack of integration between PCs and other systems, and
4. Poor location of print facilities,
but he is also concerned about whether the current IT would be able to cope with the changes proposed and thinks that new IT would improve productivity and communication. He therefore requested action from the agency's Head Office, which decided that a study should be carried out by the Office Systems Division.
A feasibility study was carried out by an external consultant which in summary reported that a networked system would improve productivity and create a more effective working environment. This would be achieved by using the networked facilities to:
• Put together reports from already existing material more easily, accurately, and attractively.
• Access and distribute information from the agency's Head Office more readily (enabling better access to timely management information concerning projects and budgets).
• Provide more convenient communications to clients, the agency's Head Office, and external databases.
• Provide better internal communications (a particular current concern was the haphazard way in which incoming telephone messages were being handled by anyone who happened to be available; written messages were being left on desks and getting mislaid).
Overall, a 20% productivity gain over five years was projected. This, it was calculated using the current number of managers and professional staff employed, would be equivalent to £68,561 in the first year and £95,986 in the second year. The time saved would be spent on improving client communications and initiating and developing more new business projects. To achieve these gains it was recognized that jobs would need redefining (there would be a need for someone to take on a system manager/administrator role, and for a receptionist to handle incoming telephone calls), that staffing levels would be frozen (the regional director stated that he would need two extra staff if a network were not installed), and that there would be a need for staff training.

The head of the Office Systems Division realized that the equipment released from Perth could be reused at another regional office, providing an upgraded facility there at essentially "no cost". The Perth office could be connected to the agency's Head Office at no cost (as central government would pay) and this would also allow remote management of the Perth system. The proposed design is shown in Figure 2. Senior managers in the agency's Head Office agreed to the proposal.

Responsibility for the project was given to the Office Systems Division, who contracted the operational management of the project to Mhairi Macleod, an IT consultant who had previously provided the agency's Head Office with good office systems training. Although a recent entrant into consultancy, Mhairi had been involved in introducing a network into the local university and had drive, enthusiasm, and good interpersonal skills. Mhairi was determined to make a success of her business, although her knowledge of IT systems was based largely on her few years' experience and assistance from contacts she had developed.

Tenders were invited and three possible suppliers chosen. The head of the Office Systems Division, the regional director, and Mhairi Macleod agreed that staff should have a major role in selecting who would supply the equipment. The head of the Office Systems Division, Martin Peddie, was confident that the supplier CSS would be chosen. CSS had worked with the Office Systems Division for some years and had provided good service. Using CSS would reduce possible problems of incompatibility and would take advantage of CSS's knowledge of how the agency worked.
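The savings projection is simple arithmetic: a productivity gain applied to the annual cost of the managers and professional staff in post. The short Python sketch below reproduces that logic; the staff count and average annual staff cost are hypothetical placeholders, since the case reports only the resulting totals of £68,561 and £95,986.

```python
# Back-of-envelope reproduction of the feasibility study's savings arithmetic.
# The staff count and average annual cost are illustrative assumptions; the
# case reports only the resulting totals (GBP 68,561 and GBP 95,986).

def projected_saving(staff_count: int, avg_annual_cost: float,
                     productivity_gain: float) -> float:
    """Annual saving: the productivity gain applied to the total staff cost."""
    return staff_count * avg_annual_cost * productivity_gain


if __name__ == "__main__":
    STAFF = 14            # hypothetical count of managers and professionals
    AVG_COST = 24_500.0   # hypothetical average annual cost per person (GBP)
    GAIN = 0.20           # the study projected a 20% productivity gain

    saving = projected_saving(STAFF, AVG_COST, GAIN)
    print(f"Projected first-year saving: GBP {saving:,.0f}")
```

With those placeholder values the result lands close to the reported first-year figure; the actual study would have used the office's real staffing and salary data.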
Figure 2. Proposed configuration

Mainframe, Edinburgh: finance and accounting data and applications; remote system management software
Wide area network link between Edinburgh and Perth
Regional minicomputer, Perth: local office facility; database and accounting software; file and text retrieval
22 PCs: PC applications; link to the mini and mainframe computers
3 terminals: link to the mini and mainframe computers
However, after the presentations the staff did not select CSS but unanimously chose one of the other suppliers, who had had no previous dealings with the agency and who did not have a local office. After discussions between the head of the Office Systems Division, the regional director, and Mhairi Macleod, the staff choice was overruled. Mhairi was delegated to inform the staff of this decision, and at an open meeting she told all the staff in the Perth Development Agency that their choice had been overruled. Mhairi assured them that they would be allowed to choose which software was to be used. The meeting was quite tense; some staff felt that the whole selection process was a sham and a waste of their time, and they anticipated that the software selection would go the same way as the hardware selection. Mhairi assured them it would not.

In the following two weeks, demonstrations were given to staff of word processing, database, spreadsheet, and graphics software. After the demonstrations, staff chose an office suite which was not in use at the agency's Head Office and of which the Office Systems Division had no previous experience. Martin Peddie, Head of the Office Systems Division, Ralph Stansfield, Regional Director, and Mhairi Macleod met again to discuss the choice, about which the Office Systems Division had some particular reservations regarding the file compatibility of the office suite with other applications in the Perth Development Agency. Mhairi urged that, because staff had been overruled in the choice of hardware, it was necessary to agree to purchase the software they had chosen. This was eventually agreed.
Figure 3. Membership of the Project Group

Head of Office Systems Division, Edinburgh: Martin Peddie
Regional Director of Perth office: Ralph Stansfield
Project leader, PC Service: Mhairi Macleod
User representatives: Adam Tait, Manager, and one other
CSS plc: hardware supplier
Byte Ltd.: dealer (cabling and software installation)
Mhairi initiated project implementation meetings. The people included are shown in Figure 3. The schedule of meetings and their major points were as follows:
January: Project Group Meeting 1
The head of the Office Systems Division, a blunt and egotistical man, outlined the goals of the project, the main one of which was to provide information at regional level in a usable way. Data ownership would, as far as possible, be with the Perth office. He emphasized the need for tight budget control and stated that he would want monthly progress reports from Mhairi Macleod. It was important, he stressed, that everyone was committed to the system, and they would all demonstrate this by their actions in the next few months.

Action: Mhairi provided a schedule of meetings and staff training needs. The schedule outlined the plans for establishing a training room in the Perth office, a large
part of the basement area equipped with five PCs and training facilities such as whiteboard and projector, installation of PCs for training, development of course material, and the allocation of resources to staff. An agreed action plan was drawn up with targets and the dates by which they had to be achieved. It was recognized that there was the possibility of some delay because cabling could only begin once builders had completed some internal reconstruction in the building which, as it was about 100 years old, could prove problematic. Thick stone walls and wooden ceilings would have to be drilled and cut through and, as well as consuming time, this would also create noise and debris.
February: Project Group Meeting 2

Reported: Most of the initial targets had been achieved but delivery of the equipment had only just begun because payment to CSS had been delayed. Martin Peddie, Head of the Office Systems Division, announced he had now left the agency's Head Office to join a consulting firm. His first job with the consulting firm was to act as head of the Office Systems Division until the agency had decided its Information Strategy!

Action: Delivery of PCs was rescheduled and a final date agreed for the installation of the cabling.
March: Project Group Meeting 3

Reported: A second file server was needed temporarily while software was being installed, and since the cabling was behind schedule, some discussion took place as to whether Byte should also have responsibility for installing the second server.

Action: Mhairi, the project leader, decided eventually to let Byte have the responsibility of installing the second server. The number and location of peripherals were agreed. (The regional director, Ralph Stansfield, did not attend this or subsequent meetings and attendance by CSS and Byte representatives became patchy. Privately, Martin Peddie, Head of Office Systems Division, told Mhairi that he doubted whether the identified benefits would be achieved. He also intimated that now the system had achieved the go-ahead he did not expect the regional director to be particularly interested in using it himself.)
March: Project Group Meeting 4

Reported: A serious technical problem arose concerning incompatibility between the operating system on the minicomputer and the PC operating system.

Action: There was disagreement between CSS and Byte as to who was responsible. CSS argued that the fault was not with the operating system software on the minicomputer, but with how it had been installed by Byte. However, the representative from Byte,
who had not been at the meeting when the incompatibility problem was discussed, argued that it was the specification from CSS which had been faulty. Another meeting involving Martin Peddie, Mhairi Macleod, CSS, and Byte was arranged.
April: Project Group Meeting 5

Reported: The technical problem was to be overcome by eliminating the minicomputer and switching to a full client/server network architecture. This would require an additional server and network operating software. (At this time client/server PC networks were beginning to challenge the dominance of mainframes and minis, and the original mini-based design was nearly a year old.) Some time had to be spent explaining the need for these changes, and their effect on users, to Adam Tait, the user representative. "Clever guy", said the Byte representative after the meeting, "but doesn't understand IT". The mini and terminals were now redundant and no use could be found for the mini. The new configuration was as shown in Figure 4. The proposed arrangement had been tested at the agency's Head Office and had worked.
Figure 4. New configuration

Mainframe, Edinburgh: finance and accounting data and applications; remote systems management
Wide area network link between Edinburgh and Perth
2 PC servers, Perth: network operating system (Lancontrol); office suite (OFFSYS); database software; accountancy software
25 PCs: PC applications; software to access the mainframe
Mhairi suggested that, to encourage system use, a receptionist should be appointed immediately to take on the telephone messaging activity and so show staff some immediate benefits from the network.

Action: Proposals were agreed (with mainframe access to the agency's Head Office accounts restricted to the regional director and system administrator only). Because of the delays and changes, Mhairi arranged a meeting with all staff, first to explain what was happening and second to rekindle their enthusiasm, which had been dipping because of delays in installation and various minor difficulties in getting the software working properly. The introduction of the messaging system was announced.
May: Project Group Meeting 6

Reported: Adam Tait reported that there were difficulties in getting access to the agency's Head Office. Attendance of staff at the training sessions was patchy, partly because of pressure of work but also because delays in hardware and software installation meant that staff were not able to return from training to their desks and start using the software they had been trained in.

Action: Byte was to work on access problems, and the training schedule was to be revised.
June: Project Group Meeting 7 (Final Meeting)
Reported: Access to the agency's Head Office was still not working properly. Access had been achieved for the regional director and system administrator, but multiple access had not been achieved. One other problem which had arisen was that one manager refused to share his diary/calendar because he did not want his junior members of staff to know what he was doing. Because of the extra space needed for the additional server and other pieces of equipment, the regional director had taken over much of the training area as additional office space. Although the training schedule had been revised and extended, staff attendance was still patchy. Several staff had left and been replaced by new staff, most of whom needed training, so there was an even greater need for training unless staff were going to be left to learn from the practice of their colleagues.
Action: Martin Peddie, Head of the Office Systems Division, to talk to the manager about his diary/calendar and to investigate the access problems. No way forward was identified for dealing with the training difficulties.

Mhairi Macleod, the project leader, arranged a final meeting with the staff in the Perth office to bring them up-to-date with the last stages of the implementation process. Mhairi assured them of further assistance if necessary. In order to maintain commitment to the network some of the secretaries were asked to take the responsibility of keeping up to date with developments in specific software packages. A number of the secretaries volunteered to do this.

Use of the network had been monitored throughout by Mhairi using self-reporting questionnaires. Not all staff responded and the response rate fell partly because some
staff left for new jobs. For example, the system administrator had left for a better paid job using the skills she had acquired. Neither the regional director nor a majority of the professionals used the network communications facility frequently. One manager relied on his secretary using the system and printing items out for him. Communications with clients had not increased and internal face-to-face communications had fallen. Staff in the "operational" divisions reported that their electronic communication was limited almost exclusively to the members of their own division. Staff in the support divisions preferred to talk face-to-face to the people on whose projects they were working. Several of the comments indicated that some of the problems encountered could be overcome if staff had better knowledge of how to use the hardware and software, which they might achieve by attending the training sessions. Others had identified ways in which they could work much better.

Secretaries felt they were losing contact with their team members because, with the introduction of the central messaging system, they were handling far fewer messages and, as some team members were producing their own documents, the secretaries did not necessarily know what was happening in projects. There was also disappointment that access to systems at the agency's Head Office was still weak and that the quality of the central information service was poor. Finally, some people felt that important pieces of software, for example project control software, needed to be installed if the system were to be of any real use to them.

But most people felt that access to, and processing and distribution of, information was better, and a 5% fall in nonproductive activity was reported, although it was felt that the 'secretarial' content of their work had increased. The total number of staff remained at 25, although some jobs changed.
FUTURE DEVELOPMENTS
Further central government initiatives aimed at integrating and harmonizing local economic development mean that the regional office will be merged with the central government's local Skills Agency. The Skills Agency uses a completely different computer system and has an entirely different, bureaucratic organizational culture. The regional office will have to accommodate both sets of staff while a new office is found. The new office will use an extended version of the regional office network, and the Skills Agency's computer system (just recently completed) will be scrapped.
DISCUSSION POINTS AND QUESTIONS
As with many real-life situations, this case study reflects a complex, messy situation (see Checkland & Holwell, 1998), raising a wide variety of questions about social, technical, and managerial factors, including: project management; power, control, and individual personality; IT development and implementation; and strategic management.
ACKNOWLEDGMENT
We acknowledge, with thanks, the assistance of Jim Galloway in the completion of this case study.
REFERENCES
Beath, C. M. (1991, September). Supporting the information technology champion. MIS Quarterly.
Callahan, R. E., & Fleenor, C. P. (1987, October). There are ways to overcome resistance to computers. The Office.
Checkland, P., & Holwell, S. (1998). Information, systems and information systems. Wiley.
Damodaran, L. (1996). User involvement in the systems design process: A practical guide for users. Behaviour and Information Technology, 15(6).
Dawson, P. (1994). Organizational change: A processual perspective. Paul Chapman.
Doherty, N. F., & King, M. (1998). The consideration of organizational issues during the systems development process: An empirical analysis. Behaviour and Information Technology, 17(1), 41-51.
Earl, M. (1992). Putting IT in its place: A polemic for the nineties. Journal of Information Technology, 7, 100-108.
Gasson, S. (1998, December 10-13). A social action model of situated information systems design. In Proceedings of the IFIP Working Groups 8.2 and 8.6 Joint Working Conference on Information Systems: Current Issues and Future Changes (pp. 307-326), Helsinki, Finland.
Gordon, J. R., & Gordon, S. R. (1999). Information systems. Fort Worth: Dryden Press.
Greil, M. J. (1982). A model for implementation of an electronic administrative system within an office environment. Doctoral dissertation, University Microfilms International.
Hirschheim, R., Klein, H. K., & Newman, M. (1991). Information systems development as social action: Theoretical perspective and practice. OMEGA, 19(6).
Hornby, P., Clegg, C., Robson, J., MacLaren, C., Richardson, S., & O'Brien, P. (1992). Human and organizational issues in information systems development. Behaviour and Information Technology, 11(3), 160-174.
KPMG Peat Marwick. (1990). Runaway computer systems. London: KPMG Peat Marwick.
McLoughlin, I. (1999). Creative technological change. Routledge.
Roberts, H. J., & Barrar, P. R. N. (1992). MRPII implementation: Key factors for success. Computer Integrated Manufacturing Systems, 5(1).
Ann McCready, BA (Hons), PhD. Ann McCready has had a career largely in education. She began teaching in high school, where she spent four years. In the 1970s she was employed for a number of years in industry in Germany and Brussels, after which she returned to higher education in Scotland. She joined Glasgow Caledonian University, becoming a senior lecturer in 1997. Since the early 1990s she has researched the impact
of information technology and networks on managers and presented papers and published on this topic. Her current project is a study of stress management programs and their effectiveness. Andrew Doswell, BSc (Eng) Hons, MSc, PhD. Professor Doswell worked as a research engineer in the radar defence industry before moving into management services in first the private and then the public sector in England. Since the mid 1970s he has worked in universities in Ireland and Scotland, teaching, researching, writing articles and books, and consulting with a focus on personal computing in organizations and its effects on people. His current research is a project investigating knowledge and its use in Scottish business.
This case was previously published in Annals of Cases on Information Technology Applications and Management in Organizations, Volume 2/2000, pp. 77-90, © 2000.
Chapter XX
Implementation of a Network Print Management System: Lessons Learned

George Kelley, Morehead State University, USA
Elizabeth A. Regan, Morehead State University, USA
C. Steven Hunt, Morehead State University, USA
EXECUTIVE SUMMARY
To manage the rapidly growing demands and costs of providing campus print services, a regional university of about 9,000 students turned to a well-known outsourcing vendor. The new Network Print Management System (NPMS) replaced 35 networked printers in the library, open-access computer labs, and computer classrooms in more than 10 different buildings. The initiative had several key objectives: to increase student access to printers, to improve the quality of print services, to decrease printing costs and environmental impact, and to avoid increasing student fees. The project also sought to reduce departmental printing costs, fund technology upgrades, and reduce the burden of printer maintenance on university technology staff. This case study tells the story of the planning, analysis, design, implementation, and realization of the new system, which proved more complex than anticipated. It offers an interesting mix of perspectives, sometimes conflicting, on outsourcing implementation of new technology in a complex end-user environment.
BACKGROUND
In the spring of 2001, word got out to the students on a regional state university campus of about 9,000 students that the $20 per semester information technology fee for Internet access from the dorm rooms was going to be discontinued. In the fall, students would also have a new ID SmartCard (SmartCard, n.d.), which they would be able to use to pay for meal plans and services such as library, laundry, vending machines, book store, copiers, and printers. The SmartCard also would allow for electronic banking, check cashing, and cash-free spending. One of the major components in the first phase of the ID SmartCard program was the implementation of a Network Print Management System (NPMS).

The recommendation for implementing a print metering system was made by an internal Technology Resources Committee (TRC). The TRC sought input from a number of university groups, including students. A number of problems were identified with the existing print facilities, including poor maintenance, inconsistent quality of service, lack of color printers for art programs, graphics, and other specialized uses, and inadequate budgets in the academic departments for printer supplies and service. Departments also complained about wasteful use of paper and print cartridges. The university's Information Technology Division was then directed to research solutions and evaluate vendors. After two years of analyzing alternatives and piloting various print metering systems, they selected a well-known outsourcing vendor to develop, implement, and maintain a campus-wide solution.
SETTING THE STAGE
Providing and maintaining the computer resources for students and faculty is the responsibility of the university's Office of Information Technology. Available facilities include over 1,600 desktop and laptop computers located in classrooms, open access labs, the library, and faculty/staff offices. Most of these facilities provide print capability for which, up to this point, there was no direct charge to students or faculty. Expenses for the academic computing infrastructure are supported by a general technology fee assessed to all students each semester.

Campus buildings are intra-networked using ATM over optical fiber between floors and switched Ethernet over copper to the desktop. The campus has recently begun to offer wireless connectivity in some locations as well. The Office of Information Technology also has responsibility for all administrative computing and the campus Internet facilities. The Office is headed by a senior director with many years of experience at the university, who deserves considerable credit for providing a relatively high level of service on a relatively modest budget.

The new SmartCard replaced an earlier Multi-Technology Automated Reader Card (MARC) ID and payment system. This older MARC card carried the bearer's photograph and a bar code stripe on the front. At one point the bar code stripe on the front of the card was read with hand-held laser wands at the library to provide patron services. It was also equipped with a magnetic strip on the back with the user's meal services account information. The magnetic strip on the back had eliminated the need for diners to show both identification and meal cards and freed the cashier at the point of sale (POS) from
having to record the appropriate information for each diner. This allowed for prepayment, faster checkout lines, less shrinkage, and more accurate, timely reporting. The daily usage tally was produced quickly from the information provided electronically by the cards.

The new card sought to expand the electronic services provided. In addition to the magnetic stripe functionality of the old MARC card, the new card contained two programmable magnetic tracks. The first track enables cash dispensing at automated teller machines from an interlocked account on the global electronic banking network. It also offers cash-free retail payment for purchases made at any network member outlet, for example a bookstore or grocery store, whether on or off campus. Arrangements were made with a regional bank for this new feature, which the old card had not supported. Use of the first track requires that the reading device have access to an outside real-time electronic transaction server and typically incurs a small transaction fee at every use of the global external banking network.

The second programmable track of the new SmartCard was used to support the meal services functionality of the old MARC card. This magnetic track allowed the new card to be used on campus as a debit card for cash-free meal purchases at any eatery on campus upon balance validation against an internal account debit system. As with the old MARC, the second SmartCard track is read by having the checkout operator swipe the card at the point of sale. New account types and cost centers could have been added for the NPMS, if desired, and the information added to the user's magnetic card strip in a similar fashion to what was already being done with the prepaid meals.

In addition, on the face of the new SmartCard is a half-inch square set of 10 gold contacts, which is based on ISO 7816 specifications (ISO7816, n.d.). This gold chip set provides access to a renewable stored value integrated circuit chip. This new card chip was a rather welcome improvement over the old MARC card in the eyes of the vending machine, photocopier, and laundry machine contract vendors because it did away with the need to administer coin and contract card stations. The integrated circuit stores prepaid cash value, which is decremented by an automated self-serve reader built into the dispensing device when the card is used to vend products. This prepaid cash value integrated circuit technology has been marketed since 1979 for stand-alone vending machines that are not networked, such as soda and snack machines, laundry machines, and photocopiers (Mag Stripe, 1996).

In the late summer of 2001, the university retrofitted soda and snack vending machines, the laundry machines in the student dormitories, and the photocopiers on campus to support the new SmartCard system. These card readers require that each machine be visited every few days by a service technician equipped with a laptop computer or similar device having a serial connection. With the proper access code, the technician transfers transaction data to the portable computer, and subsequently to a floppy disk. Then the floppy disk is submitted to a Card Services Center on campus, which determines the dollar amounts for vended products and credits payment to the vendor's account.
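Because value lives on the chip itself and is debited offline at each machine, with vendor accounts settled only when a technician later collects the transaction logs, the accounting is inherently batch-oriented. The sketch below is a simplified illustration of that pattern; the class and function names are invented for this example and do not describe the vendor's actual software.

```python
# Simplified illustration of an offline stored-value chip and the later batch
# reconciliation of technician-collected transaction logs. All names here are
# hypothetical; they do not describe the actual vendor system.
from dataclasses import dataclass, field


@dataclass
class StoredValueChip:
    balance_cents: int          # prepaid value held on the card itself


@dataclass
class MachineReader:
    vendor_id: str
    log: list = field(default_factory=list)   # transactions awaiting collection

    def vend(self, card: StoredValueChip, price_cents: int) -> bool:
        """Debit the chip at the machine; no network connection is involved."""
        if price_cents > card.balance_cents:
            return False                       # insufficient prepaid value
        card.balance_cents -= price_cents
        self.log.append((self.vendor_id, price_cents))
        return True


def reconcile(collected_logs: list) -> dict:
    """Total technician-collected transactions per vendor for account credits."""
    totals: dict = {}
    for vendor_id, price_cents in collected_logs:
        totals[vendor_id] = totals.get(vendor_id, 0) + price_cents
    return totals


if __name__ == "__main__":
    card = StoredValueChip(balance_cents=500)          # $5.00 of prepaid value
    laundry = MachineReader("laundry")
    snacks = MachineReader("vending")
    laundry.vend(card, 125)
    snacks.vend(card, 75)
    print(card.balance_cents)                          # 300 cents left on the chip
    print(reconcile(laundry.log + snacks.log))         # {'laundry': 125, 'vending': 75}
```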
CASE DESCRIPTION
The recommendation to implement a network print metering system was made by the internal Technology Resources Committee.
Figure 1. Composition of the University Technology Resources Committee (TRC) (Note 1)

Committee Chairman (Note 2): Director of Information Technology (Permanent)

Academic Subcommittee, members by college (Note 3):
- Science and Technology, term ending 2001-03 (Note 4)
- Science and Technology, 2000-02
- Humanities, 2001-03
- Humanities, 2000-02
- Education, 2001-03
- Education, 2000-02
- Business, 2001-03
- Business, 2000-02
- Student Seat, one-year term (vacant)
- Info. Tech. Staff, 2000-02
- Academic Affairs Staff, 2000-02
- Academic Affairs Rep., 2001-03
- Librarian, 2001-03

Administrative Subcommittee, members:
- Faculty (At-Large), 2001-03
- Staff (AFS-Prof/NF), 2000-02
- Staff, 2001-03
- AFS Rep., 2001-03
- Director of Libraries, Permanent
- Registrar, Permanent
- Student Life Rep., 2001-03 (vacant)
- Staff (Info. Tech), 2001-03
- University Advancement Rep., 2000-02

Note 1: The stated purpose of the committee is to review policies and procedures related to the University's academic and administrative information resources services and recommend appropriate changes.
Note 2: The chair of the committee votes only in case of a tie. The subcommittees divide duties and responsibilities by academic and administrative information resources matters, but the whole committee makes final recommendations.
Note 3: Members are selected by the Faculty Senate.
Note 4: Terms end August 15 of each fiscal year.
They were asked to research print management solutions for the open-access computer laboratories, public access points in the library, and computer classrooms. The composition of the committee is shown in Figure 1. The university's senior director of information technology chairs the Committee. Members are elected from among the university faculty and staff and serve two-year rotations. The student seat and the student life representative seat were vacant at the time. The library had two representatives. The university's Office of Information Technology also had a second representative in addition to the senior director.
The existing network printers were centrally administered via a Web browser interface (HP Install Network Printer Wizard, 2002). They also provided users the capability of printing directly to any network printer on campus. This feature was popular with faculty because it gave them the flexibility to access printers in the classrooms nearest their offices and allowed them to bypass malfunctioning printers if necessary. Various types of printers were located in all open access labs, computer classrooms, administrative offices, and faculty offices.

Printer usage volumes, on which the project cost estimates were based, were reported by users based on their own best estimates. The two primary competing vendors both tested the print volume for the month of October 2000 to estimate volume for proposal purposes. Although there were minor differences in the volume estimates due to timing differences and methods, they were fairly consistent. Actual print volumes after implementation of the network print management system, however, proved to be significantly less than the original estimates. Since the university had not systematically tracked print volumes and the associated costs in the past, they had no alternative other than to use a "best estimate". The perception, however, was that printing costs from an overall institutional perspective were excessive and needed to be reduced.

The decision process narrowed the choice to two vendors, who were asked to submit detailed proposals. The winning vendor was selected in the latter part of June 2001 and was the same vendor who already supplied and serviced the photocopiers on campus. One of the factors in selecting the vendor was significant experience in installing pay-per-print systems in open-access computer labs at other universities and public facilities such as libraries. The software at the core of the NPMS was supplied by a third-party vendor who offers two standard implementation designs. Both designs allow the organization to track, monitor, and optionally charge the users for printing. The first alternative is a Prepaid Print design, which has two options: one uses the embedded gold chip of the SmartCard; the other uses programmable magnetic stripes (Figures 2 and 3). The second alternative is a Prepaid Direct Print design (Figure 4). More details about these alternatives are provided in Figures 2, 3, and 4.
Winning Vendor Solution Design
The design selected by the university for its new NPMS was the Prepaid SmartCard (Gold Chip) configuration, as shown in Figure 2. Although this system originally was designed for standalone vending machines, it offered several advantages for the NPMS from an administrative standpoint. Students prepay for services. The university does not have to maintain student accounts and bill students. The same card can be used for all services. Cards can be issued easily to users who do not have student accounts. One major disadvantage from the student's perspective is that if they lose their card, they lose any money value on the card. The Office of Information Technology favored this alternative because they felt it was more convenient and reliable.

Both of the other print management configurations offered by the vendor (Figures 3 and 4) also would have worked in the university's existing network environment. In fact, the Direct Print configuration would have taken better advantage of the network printer tools that were already in place but were not being used (Dean & Combs, n.d.). However, the Direct Print configuration would have required creating and maintaining a centralized database for tracking student accounts, which entailed additional costs and administration.
Figure 2. Prepaid Gold Chip Network Print Management design, as implemented
With the Prepaid Gold Chip Card configuration of Figure 2, a Card Reader identical in functionality to those used with stand-alone on-campus vending machines is connected via a serial cable to the print release station. The Card Reader is electronically self-contained, requires its own power supply, ejects cards using a pneumatic mechanism, and costs between $1,200 and $1,500 each. When the card reader senses a newly inserted card, it queries the card's Gold Chip and displays the user's stored balance via a built-in display at the same time as it activates its serial cable connection to wake up the print release station from its programmatic dormant state. Once cycled awake, the Print Release Station queries a central networked Print Server on campus to retrieve the queue of submitted print jobs destined for the network printer under its control, and then displays the entire print queue on the Print Release Station display monitor. This process was timed at over 40 seconds. After the wake up and print job queue retrieval delay, the user has the opportunity to select his or her print job from among the queue display. The print release station sends a request to the centralized Print Server for the print job particulars and the anticipated print page count. When the user supplies the correct print job password (previously entered at the workstation) and then acknowledges the print job cost acknowledgment prompt, the print release station sends an instruction to the Card Reader via the cable serial connection to decrement the balance on the user's SmartCard. The Print Release Station then sends an instruction to the centralized print server to dispatch the print job to the nearby network printer under its control.
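The behaviour described above amounts to a small state machine: wake on card insertion, fetch the user's queue from the central print server, check the ad hoc release password, confirm the cost, debit the chip, and release the job. The Python sketch below is only an approximation of that flow; the type and function names are invented for illustration and are not the vendor's API.

```python
# Hedged sketch of the release-station flow described in Figure 2.
# All names are illustrative; they do not correspond to the vendor's software.
from dataclasses import dataclass


@dataclass
class Card:
    balance_cents: int          # prepaid value on the gold chip


@dataclass
class PrintJob:
    name: str                   # ad hoc job name chosen at the workstation
    password: str               # ad hoc release password chosen at the workstation
    cost_cents: int


def release_job(card: Card, queue: list, job_name: str, password: str,
                confirm_cost) -> bool:
    """Select, authenticate, confirm cost, debit the chip, and release the job."""
    job = next((j for j in queue if j.name == job_name), None)
    if job is None or job.password != password:
        return False                        # wrong job name or release password
    if not confirm_cost(job.cost_cents):
        return False                        # user declined the cost prompt
    if card.balance_cents < job.cost_cents:
        return False                        # insufficient prepaid value on the chip
    card.balance_cents -= job.cost_cents    # decrement the gold chip
    queue.remove(job)                       # job is dispatched to the nearby printer
    return True


if __name__ == "__main__":
    card = Card(balance_cents=100)
    queue = [PrintJob("essay", "s3cret", cost_cents=50)]
    ok = release_job(card, queue, "essay", "s3cret", confirm_cost=lambda c: True)
    print(ok, card.balance_cents)           # True 50
```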
The winning vendor proposal was based on a first-dollar revenue sharing model between the university and the outsourced vendor. The vendor would charge the university a flat monthly fee. The proposal called for the displacement, on a one-for-one basis, of about 15 different types of existing network laser printers from four different vendors with a set of three comparable models.
Figure 3. Alternate prepaid swipe card NPM option
The Prepaid Swipe Card Print design, illustrated in Figure 3, uses a Card Swipe Reader in line with the keyboard of the Print Release Station computer. The Card Swipe Reader reads the magnetic track on the back of the user's card that stores the user's account information. The Card Swipe Reader is the same card reader that is used in ordinary retail point of sale (POS) checkout registers and in the university's dining room checkout stations for meal plan charges. This Card Swipe Reader is in a constant active state, responds quickly to card swipes, and costs a fraction of a Gold Chip Card Reader. When the user swipes their card, this device reads the user's Print Account information from the magnetic track on the back of the user's card. The Print Account information is sent by the Print Release Station to a central Print Account Server for validation and verification of the user's prepaid account balance. A deduction is made in real time from the user's available account balance, and upon final selection by the user, the print job prints on the nearby network printer. This Prepaid Swipe Card Print configuration option has been implemented at various other universities to control printing in public access areas like libraries and open-access computer labs [Santa Clara University (SmartPrint, n.d.); Liberty University (Print Services, n.d.); Yale University (YaleRIS, n.d.); Eastern Washington University (Dean & Combs, n.d.)].
The functionality of the new printers was matched to provide the equivalent functionality of the old displaced printers. For example, color network printers were replaced with color network printers, printers with high-capacity or oversize paper trays were replaced with models with high-capacity trays, and duplex-capable printers were replaced with duplex models. One new printer was added in an office, and a new color printer was provided for the Art Department. The breakdown of the printers displaced and added by location appears in Figure 5.
Figure 4. Alternate prepaid Direct Print NPM design
The Direct Print Design, illustrated in Figure 4, is much simplified over the NPMS designs of Figures 2 and 3. Hardware costs are also very significantly reduced. No print cards, card readers, or networked Print Release Stations are needed. In fact, the user does not even have to get up from the workstation to print. With Direct Print, instead of supplying an ad hoc print job name and password, as called for under Spooled Print, the user simply supplies an account and password. After these are authenticated on an account server on the network, the user's account is debited and the print job is sent directly to the printer of the user's choosing. Anyone without an account can create one themselves at the time of their first print job submission (YaleRIS, n.d.).
The contract also called for the vendor to provide four new dedicated, rack-mounted network servers to be placed in the Information Technologies computer room. This configuration of four print servers provides a very stable environment with dual power supply, fault tolerance, and dual Ethernet. It also provides for load balancing, backup, and rollover redundancy. The university was to contribute 35 print job release stations, older but serviceable computers and monitors, to be configured by the vendor. Older model PCs were recycled for this purpose, since only minimum configuration PCs were required for this dedicated use. Cost estimates for the hardware and for per-print charges appear in Figure 6. Payment for the new printers, servers, and card readers and related software and services was bundled with a fixed monthly service fee for the administration and maintenance of the new system. The payment arrangement for the NPMS required no upfront capital outlay on the part of the university. The university would simply pay the vendor a flat monthly dollar amount for a fixed period of time.
Figure 5. Number of network printers by location, before and after the implementation of the NPMS (adapted from NPMS Vendor, 2001)

Networked printers (Note 1), before implementation (all maintained internally) and after implementation (maintained internally / maintained by vendor):
- Library (8 computers): B&W: 3 before; 0 / 3 after. Color: 0 before; 0 / 0 after.
- Open Access Labs (200 computers): B&W: 10 before; 0 / 8 after. Color: 1 before; 0 / 2 after.
- Computerized Classrooms (474 computers): B&W: 21 before; 0 / 19 after. Color: 1 before; 0 / 1 after.
- Office Areas (1,300 computers, est.): B&W: 0 before; 35 / 0 after. Color: 0 before; 2 / 0 after.
- TOTALS (1,982 computers): 36 before; 37 / 33 after.

Overall equipment counts: network printers: 36 before, 70 after; NPM print release station PCs: 0 before, 34 after; NPM card readers: 0 before, 34 after.

Note 1: The printer count does not include several hundred mostly ink-jet technology low volume printers attached directly to individual faculty and staff computer workstations, or personal printers attached to personal computers owned by students in the university dormitories.
Funding for the monthly fee would come from the revenue generated by the sale of the prepaid electronic gold chips for the student ID SmartCards. Pricing was based on operating the NPMS on a cost recovery basis (see the fee scale in Figure 7) with some projected excess. If the projected excess was realized, the Office of Information Technology planned to reduce print charges appropriately in the future.

The winning proposal called for streamlining the printing process in the open-access computer labs and in the library public access areas. Adding print vending to computer classrooms was not specifically mentioned in the proposal. However, the inventory list did not differentiate between printers that were in computer classrooms and those that were in open-access areas.
Figure 6. Printer, toner, and paper cost estimates (after Wheatley, 2000)

B&W printer (Note 1), 21 ppm (Note 2), largest paper size letter (Note 3), duty cycle 75,000 (Note 4): printer cost $1,300 (Note 5); toner cost 1.55 cents per page (Note 6); paper cost 0.50 cents per page (Note 7); total print cost 2.92 cents per page (Note 8).
B&W printer, 28 ppm, largest paper size legal, duty cycle 130,000: printer cost $2,000; toner cost 1.06 cents per page; paper cost 0.50 cents per page; total print cost 2.74 cents per page.
Color printer, 5 ppm, largest paper size legal, duty cycle 50,000: printer cost $3,600; toner cost 2.00 cents per page; paper cost 6.40 cents per page; total print cost 12.0 cents per page.

Note 1: B&W: black and white. All are laser printers with built-in network cards.
Note 2: ppm: pages per minute. The color printer is also capable of printing 16 ppm in B&W mode. Numbers are from vendor literature.
Note 3: Letter size, 8 1/2 x 11 in; legal size, 8 1/2 x 14 in.
Note 4: The term "duty cycle" is poorly defined in the printer industry. It is not the same as the expected life of the printer. It is used here as an indication of the number of pages the printer may be expected to print reliably per month. Numbers are those provided by the printer manufacturer.
Note 5: Prices quoted for the network printers are from http://www.bbmnet.com and http://www.cdw.com. Card reader prices are estimated at between $1,000 and $1,500 each. Vendor maintenance charges for the NPM are estimated at $16,000 per month. Actual prices are not known. Additional network printer capabilities, for example higher capacity paper trays and duplex printing, are typically offered at an additional cost.
Note 6: The color printer needs four different color cartridges: cyan, magenta, yellow, and black. Cost estimates assume the toner cartridges last the vendor's quoted standard page print count and, in the case of the color printer, even consumption of all colors.
Note 7: The cost of paper for black and white printing is based on the paper costs of one department incurred in 2000-2001. Color paper pricing is for 24 lb color laser print grade paper. Pricing and costs shown are estimates of the marginal costs of printing and are meant to be indicative of magnitude rather than descriptive of actual total costs.
Note 8: The total cost assumes that the printer is thrown away after 10 printer cartridges have been used up. Long-term maintenance of printers requires the occasional purchase of maintenance kits costing as much as $650. Similar numbers have been published for other network printers (Monk, 2000).
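Note 8's total-cost column combines toner, paper, and the printer's purchase price amortized over an assumed life of ten toner cartridges. The short sketch below restates that arithmetic; the pages-per-cartridge yield is an assumption needed to complete the calculation, since the figure does not state it.

```python
# Restates the Figure 6, Note 8 arithmetic: total cost per page is toner plus
# paper plus the printer price amortized over ten cartridges' worth of pages.
# The pages-per-cartridge yield is an assumed value; Figure 6 does not give it.

def total_cost_per_page(printer_cost_dollars: float, toner_cents: float,
                        paper_cents: float, pages_per_cartridge: int,
                        cartridges_before_disposal: int = 10) -> float:
    lifetime_pages = pages_per_cartridge * cartridges_before_disposal
    amortized_cents = (printer_cost_dollars * 100) / lifetime_pages
    return toner_cents + paper_cents + amortized_cents


if __name__ == "__main__":
    # The 21 ppm B&W printer from Figure 6, assuming ~15,000 pages per cartridge.
    print(round(total_cost_per_page(1300, 1.55, 0.50, 15_000), 2))   # 2.92
```

Under the same yield assumption the other two printers do not reproduce the published 2.74 and 12.0 cent totals exactly, which suggests the original estimates used different per-model cartridge yields.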
A list and count of printers categorized by open-access laboratory, library, and classroom appears in Figure 5. It is not clear how or when the decision was made to extend the Technology Resources Committee directive to encompass the computer classrooms. Although the vendor had contracts with a long list of other universities, none were identified that were using the pay-per-print systems in their classrooms.
Figure 7. NPMS per page print charges (Source: Sewell, 2002)

Print type (Note 1): first semester / second semester per page charge
- B&W, letter size paper (Note 2): $0.10 / $0.07
- Color, letter size paper: $1.00 / $0.60
- Color, legal size paper (Note 3): $1.50 / $1.00

Note 1: B&W: black and white. Note 2: Letter size, 8 1/2 x 11 in. Note 3: Legal size, 8 1/2 x 14 in.
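Combined with the vendor's estimated monthly charge from Figure 6 (Note 5), these per-page prices imply a break-even print volume for the cost-recovery model. The sketch below is illustrative only: the roughly $16,000 monthly figure is itself an estimate from the notes, and the actual contract fee is not disclosed in the case.

```python
# Break-even monthly print volume: pages at which per-page revenue covers the
# vendor's flat monthly fee. The fee used here is the rough estimate from
# Figure 6, Note 5, not a disclosed contract price.

def breakeven_pages(monthly_fee_dollars: float, price_per_page_dollars: float) -> float:
    return monthly_fee_dollars / price_per_page_dollars


if __name__ == "__main__":
    for label, price in [("first semester ($0.10/page)", 0.10),
                         ("second semester ($0.07/page)", 0.07)]:
        print(f"{label}: about {breakeven_pages(16_000, price):,.0f} B&W pages/month")
```

Given that actual print volumes after implementation proved significantly lower than the original estimates, figures of this order help explain why the cost-recovery assumption carried real financial risk for the university.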
A contract was signed in July 2001, and the implementation and daily network printer administration of the new NPMS and related infrastructure were outsourced to the selected vendor. Deployment of the new networked print servers, print release stations, card readers, and network printers, and the configuration of the software, took place over three or four weeks in the late summer. A final major thrust took place in the computer classrooms late into the weekend before the start of classes on Monday, August 20, 2001.

The services provided by the vendor were to include vendor software and third-party software updates for the network servers and the print release stations, the monitoring of printer low paper status, and servicing printers with fresh paper reams and toner cartridges. Included with the service statement was the promise of 7/24/365 support by the vendor's "National Help Desk Support Team", plus a full-time technical support representative on site from 8 am to 4:30 pm, Monday through Friday. The technical support representative was responsible for print system administration, including network printer status, server monitoring and maintenance, taking systems and usage readings, and reconciling data streams from the printers, the associated card readers, and the related database servers. Posted on the new printers was a service phone number for assistance. The number is answered after hours by a recording giving instructions to leave a message. Response time was usually 10 to 20 minutes during service hours. The phone number is part of the university's internal phone system.

The university assumed the risk as to whether revenue from usage of the new print system would exceed the vendor's fixed monthly service fee. In exchange for the fixed monthly service fee, the vendor provided the database servers, the printers, and the card readers under a no cash-outlay model, and also assumed the risk for the maintenance of the network print system.

Jonathan, the on-site technical support representative, was very knowledgeable and responsive. He commuted from the vendor office in a neighboring state, staying in town during the week and returning home on weekends. Jonathan worked with the Office of Information Technology and departmental personnel to install the new printers and to resolve problems that arose during the early weeks of operation.
He also was responsible for regular maintenance, which included supplying the paper for the printers. Many of the displaced network printers had been purchased over time by individual departments. They were stocked and maintained at the departmental level with backup support from the university’s Office of Information Technology. Although most faculty and staff had low cost, low volume color inkjet printers directly attached to their desktop PCs, they also had access to the high-output networked printers in the various computer classrooms or open-access labs in their buildings. Under the new system, faculty now had to use the pay-per-print card metering system if they wanted to print to high-speed lab or classroom printers. After the new printers were installed, some departments immediately put the old printers back into service in shared office areas for use by faculty and staff. These old printers continued to be serviced by departmental staff with backup support from the Office of Information Technology on a cost recovery basis.
Impact
Although the new NPMS technically operated as planned, a number of technical and logistical problems had to be resolved in implementing the system. The timing of the introduction, coinciding with the start of the fall semester, created additional challenges. Faculty felt the timing was poor because many of them were unaware of the planned change, and they did not have adequate time to familiarize themselves with the new system and prepare student materials for their classes.

The logistical operation of the new system in the open-access areas, and in particular the classrooms, created unanticipated problems. The new print management system as designed required users to first submit the print job to one of the new network print servers, to wait for the network print server to convert the file to a format suitable for printing, and then to respond to a prompt for a print job name and an ad hoc job release password of the user’s choosing. This process took 30 to 45 seconds. Then the user got up from his or her work area, walked to a print job release station located next to the network printer, and fed the SmartCard into a card reader to pay for the print job prior to obtaining the printout. When the SmartCard was inserted in the card reader, it electronically decremented the value on the prepaid embedded cash chip, leaving the user with a declining balance. The card reading process took in excess of 30 seconds for validation of the card by the network server before the job could be printed. In the open-access lab this process worked fine. In the classrooms, however, it proved problematic. For a class of 25 students printing at the end of a class, 45 seconds per student added up to almost 20 minutes (25 students x 45 sec = 1,125 seconds, or about 19 minutes).

In practice, time delays were not the only challenge. There were also other barriers to accessing printouts from the new networked print management system. One was that end users lacking a campus ID SmartCard, for example, walk-in library patrons, special event guests, and occasional invited lecturers, could not print at all. The instructors for a special two-day computer workshop during the first week of classes discovered that they were unable to print because they had no access card — a situation they found to be disruptive and frustrating. Without a card, queued print jobs could not be printed and were routinely flushed from the system. Although the ID SmartCard with the embedded cash chip is issued at no cost to students, faculty, and staff, users are required to secure and maintain a pre-paid cash balance on the card cash chip by paying at one of the five designated automated locations or the Card Service Center (located in the Library Student Open Access Lab and staffed 24/7).
They must then have the card with them when seeking to print. Because the cardholder assumes the risk of losing the value if the card is misplaced or stolen (the cash on the chip is not refundable), and because students are charged a fee for the issuance of a replacement card, users have an incentive to keep low balances on the card cash chip and to carry the card as carefully as they would any credit card or cash.

During the day, when print problems arose, faculty could call technical support, and the representative would respond to calls within 10-20 minutes. For evening classes after 4:30 pm, however, support was only available on an on-call basis. In some of the open labs, work-study students who formerly had been trained to troubleshoot the printers were instructed not to do anything with the new printers except to call the support line. In practice, most units cleared their own jams and stocked their own paper, although the vendor had not instructed them to do so; this may have been a departmental decision. There was a problem with a significant quantity of missing paper, which led to the implementation of controls such as stocking paper in a closed cabinet and sign-out procedures.

Another unexpected issue was the slow presentation of the job naming and release password interface to the student user sitting at the PC. When the card is inserted in a snack vending machine, the gold chip prepaid balance is read quickly, so that it is possible to have vended product in hand within five or six seconds. However, when the same card was inserted into the card reader hooked up to the print release station, the station’s wake cycle took over 40 seconds per user card. (This issue was later resolved and the wake cycle time was reduced by more than half.) Although this inconvenience was manageable for printing in open-access student labs, it proved impractical and unacceptable for group printing under time constraints in the classroom. In one classroom with 45 computers, it required over 30 minutes for all students to print their first examinations. Faculty members were also confronted with students who did not bring their cards to class or had run out of money on their cards. Would the student fail the exam for not being properly prepared? Would the instructor pay for the printing? How should this situation be handled? Some enterprising students even offered to loan their cards to fellow students at twice the regular fee for printing! Faculty also bore the brunt of complaints from students who were upset about the new pay-per-print fees.
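The classroom throughput problem described above follows directly from the per-student delays reported in the case. A minimal sketch of the arithmetic (illustrative; actual per-student times varied):

    # Serial print-release time for a whole class, using delays reported in the case.
    def class_print_time_minutes(students, seconds_per_student):
        """Total time, in minutes, for every student to release one print job."""
        return students * seconds_per_student / 60

    # 25-student class at ~45 seconds per student (job naming plus card validation)
    print(f"25 students: {class_print_time_minutes(25, 45):.1f} minutes")   # ~18.8 minutes

    # In a 45-seat classroom the ~40-second card wake cycle pushed the total
    # beyond 30 minutes, as the case reports for the first examinations.
    print(f"45 students: {class_print_time_minutes(45, 45):.1f} minutes")   # ~33.8 minutes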
CURRENT CHALLENGES FACING THE ORGANIZATION
One of the academic areas affected most by the new pay-per-print system was the Department of Information Systems. The department chair, although very supportive of the university’s decision to implement print metering, was concerned about its impact in computer classes. On October 5, 2001, the chair of the Information Systems Department wrote to the provost and detailed examples of the impact of the new print management system on the conduct of classes within the department. “Based on the responses”, she wrote, “it appears that faculty members are taking a variety of approaches to modify instruction and avoid printing — both appropriate and inappropriate”. Some of the problems encountered early in the implementation of the new print management system appear in Figure 8.
Figure 8. Early impact in the classroom of the new NPM implementation

Benefits
1. Consistent access to quality, high-speed laser printers in all open-access and computer labs.
2. Availability of color printers for areas such as art and graphics.
3. Makes the true cost of printing visible to the institution.
4. Significant reduction in print volumes and wasteful use of supplies and paper.
5. Provides automated accounting functions for print jobs submitted within the library and labs.
6. Allows the university to capture significant revenue associated with network-based student printing.
7. Provides a turn-key solution in which the vendor designs, implements, manages and supports the entire solution, freeing the university technical staff to focus their attention on other projects.
Usage Problems and Complaints
1. Each student having to wait more than 20 seconds for the print job to be processed at the remote print server before being presented with a job release information screen.
2. An additional 40-second delay for card validation each time a new card is inserted in the print release station card reader before the print job queue is made available to the user for release of the job to the printer, increasing the time needed to print the work of a 45-student class from 5 to 20 minutes.
3. Students having to physically get up and queue at the print release station to release their print jobs.
4. Failure of documents to print after the card is charged, in whole or in part (e.g., a student sends and is charged for 8 pages and only 6 print).
5. Cards not ejecting from the card reader, or the gold contacts peeling off the card.
6. Inability of stations that are not properly logged in to establish a connection to the vendor server and print.
7. Printers stopping and prompting the user to supply paper of a different size than is available in the tray when the interface and print-anyway printer controls are disabled.
8. Repeated user station crashes and reboots caused by a bad interaction with antivirus software.
9. Classroom disruptions, upset students, missing print jobs, and forgotten passwords for submitted jobs.
10. Shortening exams from 50 minutes to 30 to allow time to print.
11. Use of pencil-and-paper tests instead of hands-on computerized exams to avoid the need to print exams.
12. Having students e-mail the assignment to the instructor for later printing on the office printer.
13. Enterprising students charging other students double the regular fee to use their cards at the print release stations.
In an effort to persuade the administration to remove the new print management system from the computerized classrooms, the memo also detailed what the Information Systems Department had spent in total for paper and toner cartridges in the fall, spring, and summer of 2000-2001. It listed a total of 525,000 pages printed that year to support the department’s open-access computerized laboratory, printing by faculty and staff, and classroom printing in over 225 sections of the various courses offered by the department in service of about 5,000 enrolled students, at a total cost of $4,600.
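Those memo figures imply a very low historical cost per page, which helps explain the department’s reaction to the metered rates. A minimal sketch (the page count and supplies cost come from the memo; note that the $4,600 covers supplies only, not hardware, maintenance, or staff time):

    # Historical departmental printing cost per page, from the memo's figures.
    pages_printed = 525_000
    supplies_cost = 4_600.00           # paper and toner only

    cost_per_page = supplies_cost / pages_printed
    print(f"Historical supplies cost: ${cost_per_page:.4f} per page")   # ~$0.0088

    # The initial metered rate charged to users for black-and-white printing.
    metered_rate = 0.10
    print(f"Metered rate is roughly {metered_rate / cost_per_page:.0f}x the supplies-only cost")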
On October 26, 2001, the department chair informed the faculty that the Office of Information Technology had agreed to eliminate the card readers in the 45-seat computer classroom used to teach the CIS101 introductory computer course, which enrolled over 1,000 students each semester. However, the printing would continue to be done through the new network print management system so that the print jobs submitted could be counted. The Information Systems Department was to pay for the printing from its own internal funds, at the rates that would otherwise have been charged to the students. Based on the historical average of print volumes during the prior three years, the estimated cost to the department of supporting this one course for one year under the new print system was close to $19,000. Since the department was not funded for this expense, it sought budget assistance through the academic vice president’s office and looked for ways to reduce printing. It also proposed implementation of a lab fee for the CIS101 course to recover printing costs.

On January 23, 2002, an article was published on the front page of the campus newspaper. It announced reductions in the per-page print charges for the new print management system. The article described the predicament of a student in a desktop publishing class: “When you have a 36-page magazine due in color at the end of the semester, $1 per page was really expensive. I think the decrease in price will be absolutely fabulous”. The same issue included an editorial that echoed an earlier one expressing dissatisfaction with the new print fees and asking the administration to seek funding sources other than students. The print fee reduction had been made known about a week earlier in an internal memo to faculty and staff. Black-and-white prints on standard size and weight paper were reduced from 10 to 7 cents per page, or 30%. Color prints on standard-size paper were reduced from one dollar to 60 cents, or 40%. Legal-size color printouts were reduced from $1.50 to one dollar, or 33%. The article quoted the director of information technology as saying that the price reduction was done with the students in mind: “Hopefully students will adapt to the reduction in cost and use our services again this semester”. The vendor’s on-campus service representative also made rounds at the university’s freshman orientation in an effort to gain support for the new system among the student body. The reason given in the article by the technology director for the decrease was that many students had sought alternatives to printing after implementation of the new system. He also reported that officials at a sister university, which had previously installed the same system, had cautioned him against setting prices too low. This other institution reportedly charged five cents per copy and “consistently lost money”. “If students think we’re making money on this”, he continued, “they are wrong”.

The dramatic reduction in printing had put actual revenue significantly under projections and well below the level needed to recover the fixed monthly vendor maintenance fee. Compared to the previous semester, the pay-per-print system had reduced printing by roughly 85%. The Office of Information Technology and the vendor took a number of actions during the fall semester to address the problems and user concerns with the initial implementation.
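The announced price cuts and the revenue problem described above can be checked directly from the figures in the article. A minimal sketch (all prices and the approximate 85% volume drop are taken from the case; the vendor’s fixed monthly fee is not disclosed and is therefore left out):

    # Per-page price reductions announced in January 2002, as reported in the case.
    reductions = {
        "Black and white (standard paper)": (0.10, 0.07),
        "Color (standard paper)":           (1.00, 0.60),
        "Color (legal size)":               (1.50, 1.00),
    }
    for item, (old, new) in reductions.items():
        pct = (old - new) / old * 100
        print(f"{item}: ${old:.2f} -> ${new:.2f} ({pct:.0f}% reduction)")

    # At fixed prices, an ~85% drop in print volume shrinks revenue by the same
    # proportion, which is why receipts fell well below the fixed monthly fee.
    volume_drop = 0.85
    print(f"Revenue retained at unchanged prices: {(1 - volume_drop) * 100:.0f}% of prior levels")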
The long waiting time at the print release stations was significantly reduced by disabling the “sleep cycle” feature of the card readers. Despite the availability of highly responsive technical support, resolving problems and issues proved time-consuming and troublesome for both the vendor and the academic departments. Unrelated problems encountered during the same period, such as corrupted files in the antivirus software and conflicts with a new version of an Internet browser, complicated the troubleshooting process.
Starting in fall 2002, the Information Systems Department implemented a new $28 lab fee for the CIS101 Introductory Computing course. This fee is intended to cover printing costs (in the lab where the card readers were removed) plus a new online tutorial and performance testing system, which is anticipated to help reduce the demand for printing.
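A quick check suggests the lab fee comfortably covers the course’s estimated printing cost. In the sketch below, the $28 fee, the enrollment of over 1,000 students per semester, and the $19,000 annual printing estimate come from the case; the two-semester year and the split between printing and the tutorial system are assumptions for illustration.

    # Rough coverage check for the new CIS101 lab fee (illustrative).
    lab_fee = 28.00
    students_per_semester = 1_000      # "over 1,000" per the case; lower bound used
    semesters_per_year = 2             # assumption

    annual_fee_revenue = lab_fee * students_per_semester * semesters_per_year
    estimated_annual_printing = 19_000.00

    print(f"Annual lab fee revenue:  ${annual_fee_revenue:,.0f}")        # ~$56,000
    print(f"Estimated printing cost: ${estimated_annual_printing:,.0f}")
    print(f"Left for tutorial and testing system: ${annual_fee_revenue - estimated_annual_printing:,.0f}")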
ACKNOWLEDGMENTS
The authors would like to thank the network print management vendors and the university’s technology director for informative discussions and feedback throughout the development of this case.
REFERENCES
Daniel, D. (1998). Printer management moves to the wild side. Computing Canada, 24(13), 33-38.
Dean, D., & Combs, C. (n.d.). Students at the center of the universe: Fostering a student focused, student guided, comprehensive service. Retrieved from the Eastern Washington University Technology and Libraries Web site: http://www.ewu.edu/TechRes/studentcomputing/sld001.htm
HP install network printer wizard, version 3.00.075. (2002, January 21). Retrieved April 8, 2003, from the Hewlett Packard Web site: http://www.hp.com/support/net_printing
ISO 7816 Parts 1-3: Asynchronous SmartCards: Summary of the ISO 7816 standard. (n.d.). Retrieved April 8, 2003, from http://www.ttfn.net/techno/smartcards/iso7816123.html
Mag stripe stored value systems. (1996). Retrieved April 8, 2003, from the Campus ID Report Web site: http://www.campusid.com/dec6.htm
Monk, M. (2000). Making cents of copy and print management. Retrieved April 8, 2003, from http://www.envisionware.com/lptone/MakingCents.pdf
NPMS Vendor. (2001). Network print management cost recovery system statement of work. Unpublished work.
Print services policy. (n.d.). Retrieved from the Liberty University Web site: http://www.liberty.edu/itrc/policy/printservices.html
Sewell, J. (2002, January 23). Printing fees for students reduced this semester. Trail Blazer, 74(15), 1, 7.
SmartCard solutions for multi-application security in today’s e-business world. (n.d.). Retrieved April 8, 2003, from the SchlumbergerSema Web site: http://www.smartcards.net/
SmartPrint usage of public access printers. (n.d.). Retrieved April 8, 2003, from the Santa Clara University, Information Technology Web site: http://it.scu.edu/projects/SmartPrint/smartprint.shtml
Wheatley, M. (2000). Total cost of ownership: One step forward, one step back: That $600 PC really costs a whopping $10,000 a year to operate. Retrieved April 8, 2003, from http://www.itworld.com/Man/2817/ITW3487/
YaleRIS print account management. (n.d.). Retrieved April 8, 2003, from the Yale University Web site: https://www-vending.its.yale.edu/pm/print.html
Appendix A. Suggested Criteria for Evaluating an NPMS (Adapted from Monk, 2000)*

Issues Related to Project Opportunity and Value
1. Was there a potential to reduce wasteful use of printers, printer supplies, and paper?
2. Was there a potential to reduce maintenance and administrative costs relative to printing?
3. How much money was being wasted by excess printing without the NPM in place?
Issues Related to Project Economic, Organizational, and Technological Feasibility
4. Can the organization cost justify the investment?
5. Will the solution be reliable? What similar organizations can attest to the reliability?
6. Can the vendor provide hardware and software that is compatible with existing systems?
Issues Related to Project Sponsorship
7. Who approved the NPM project?
8. Who was given the authority and responsibility to see the NPM project through?
9. What resources (staff and budget) were committed to the NPM project, and what timetables were set for its completion?
Issues Related to Project Logistics
10. How long did the NPM implementation take once the contract with the vendor was signed?
11. Who acted as the project manager for the NPM implementation?
12. What equipment was replaced when the new NPM was put in place? What equipment became redundant?
Issues Related to Project Acceptance
13. Can public facilities, such as the library, meet the needs of all patrons (such as nonstudents) with the NPM?
14. How well did the classroom facilities function with the new NPM? What adjustments had to be made?
15. Is any special training required to support the system? Are there other components that require training to support the system on a daily, weekly, or monthly basis?
16. Can the system support different prices per printer so that color and black and white can be charged differently?
Issues Related to Project Assessment
17. How much will the need for staff intervention or support be reduced (or increased)?
18. Would it have been more practical to provide free printing system-wide, using the NPMS to control waste without actually charging?
19. Was the university’s money well spent? Did the university accomplish its waste reduction objectives? Its printing cost reduction objectives?
*Adapted from Monk, M. (2000). Making cents of copy and print management. Retrieved April 8, 2003, from the Web site: http://www.envisionware.com/lptone/MakingCents.pdf
Appendix B. Discussion Questions
1. Describe the objectives given for the implementation of the Network Print Management System. How well were these objectives met?
2. Review the Suggested Criteria for Evaluating an NPMS in Appendix A. What criteria were considered in the case when making the decision to implement the NPMS? Were these adequate? What criteria do you think should have been used?
3. The case differentiates between implementing the NPMS in classroom settings, in open-use laboratories, and in staff and faculty offices. Do you think it is appropriate to have implemented the NPMS in each of these settings? Why or why not?
4. Did the implementation of the NPMS run into unanticipated issues of deployment and acceptance? What were they? How could they have been avoided or mitigated?
5. One year after the completion of the NPMS, would your recommendation, based on your evaluation of the case, be to retain the NPMS as is, remove it from service, or make modifications to the setup? Why?
George Kelley, PhD, is an assistant professor of information systems at Morehead State University, USA. Dr. Kelley has more than 12 years of industry experience, is a member of AIS, IEEE, and IRMA, and has published in the areas of technology adoption and implementation, systems security, and intelligent software agents. Elizabeth A. Regan, PhD, is associate professor and chair of the Information Systems Department at Morehead State University, USA. Dr. Regan holds a PhD from the University of Connecticut and completed a year of additional graduate work at Colorado State University. She has also held appointments as an adjunct at New York University, and for seven years was an instructor in the University of Connecticut School of Business. She brings to the classroom 16 years of experience in industry, where she was responsible for many projects involving system design and implementation, end-user computing, knowledge management, and organizational transformation. Her research interests are primarily in the areas of information technology, innovation, and organizational change. She is the lead author on two college texts, the most recent being End-User Information Systems: Implementing Individual and Work Group Technologies (2nd ed.) (Prentice Hall, 2001). She has presented her work in numerous national and international forums and publications. C. Steven Hunt, EdD, is currently a professor of business and computer information systems in the College of Business at Morehead State University, USA. Dr. Hunt formerly worked in a dual appointment at Tennessee State University for the Center of Excellence for Information Systems and the College of Business. He has numerous refereed journal publications to his credit, including articles in the Journal of Computer Information Systems, Journal of International Information Management, Journal of Education for Business, Journal of Business and Training Education, and the Office Systems Research Journal (now the Information Technology, Learning & Performance Journal). His current research interests include group support systems, Web-based collaboration, knowledge management, and IS curriculum development. He was recently appointed to chair the 2002-2003 National Reengineering Task Force for the Organizational & End-User Information Systems (OEIS) Model Curriculum.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 293-311, © 2004.
Chapter XXI
Managing the NICS Project at the Royal Canadian University Charalambos L. Iacovou Georgetown University, USA
EXECUTIVE SUMMARY
This case describes the installation of an IBM mainframe computer at the Royal Canadian University. The goal of the described project was to establish a Numerically Intensive Computing Service (NICS) in order to provide “first-class” computing facilities to the researchers. Due to a number of factors, NICS failed to meet its objectives and the university abandoned the project within the first two years of its operations. The factors that contributed to its failure include: advancements in computing technology and changes in the computing style of end users; political and other nontechnical considerations in selecting the system; and the weak and adversarial relationship between the computer center staff and the senior university administrators. These factors, with a special emphasis on organizational issues, are discussed throughout the case. At the end of the case, the reader is invited to provide solutions for managing the current failure situation and minimizing its negative consequences.
BACKGROUND

The University
The Royal Canadian University (RCU)1 was established over 70 years ago. It is currently one of the largest universities in North America and employs about 2,000 faculty members in more than 100 academic departments, schools, and research centers. More than 30,000 students are currently enrolled at RCU. RCU’s annual revenue exceeds $300 million.
Figure 1. RCU’s organizational chart (University President; VP, Academic & Provost; VP, Administration & Finance; VP, Student Services; VP, External Affairs; VP, Research; Deans of Academic Units; Director of UCC)
Provincial government subsidies and research grants account for about 85% of RCU’s revenues and student tuition constitutes the remaining 15%. RCU considers itself one of the premier research institutions in North America. Currently, the university receives about $100 million annually in research grants and contracts. About 100 spinoff companies, with more than $700 million in annual revenues, have been established by RCU to market technology and know-how generated by its researchers.

RCU’s administration structure includes the president, the chancellor, the board of governors and the university senate. The president of the university is RCU’s chief executive officer and is responsible for overseeing its entire operations. The chancellor is elected by the university community and represents the university on official occasions. The 12 appointed and elected members of the board of governors are responsible for the administration of RCU’s property and revenue. The senate, which has more than 60 appointed and elected members, is responsible for the academic governance of the university.

The daily operations of the university are managed by the president, five vice-presidents and 12 deans (see Figure 1). The vice-president (VP) of Academic and provost oversees the operations of the academic units of the university. The VP of Administration and Finance oversees many of the administrative departments of the university, including Finance, Human Resources, Plant Operations, Security, the Bookstores, Planning and Development, and Purchasing. The VP of External Affairs is responsible for all external university relations, fundraising and development. The VP of Research oversees the research activities of the university and manages the relationships with grant agencies and private research organizations. The VP of Student Services oversees many of the support operations of the university, including the Registrar, Athletics, Computing, Telecommunications, Housing, Libraries, and Student Services.
History of Computing at RCU
In the mid-1950s, the university president established a committee to assess the university’s interest in “computing machines and the study of automation in general”. After a review of RCU’s needs, the committee recommended the purchase of a computer for academic use. Contributions from local organizations in exchange for future computer usage were sought to help pay for its cost. A number of local firms declined the university’s request because they did not see a reason for using such a machine! One of them even replied that “with reference to your letter of August 20th, I confess that I am unfamiliar with the electronic computer and its possible uses”. Despite this lack of awareness in the business community, RCU managed to raise $20,000 in contributions from local organizations and acquired its first computer for $60,000 in 1957. This computer was an Alwac III E, a first generation, single-user computer capable of performing 250 instructions per second. This was among the first installations of computers in Canada. As expected, the Alwac III E became very successful soon after its installation. In its first two years of operation, it was used by more than 25 university departments and 16 outside organizations. Due to the increased demand for computing services and the introduction of newer, more powerful machines, the Alwac III E was replaced in 1961 by an IBM 1620 computer. These two trends, the introduction of more powerful machines in the marketplace and the increasing demand for computing services, continued to play a key role in the university’s computer purchasing decisions for a number of years. By the early 1990s, RCU had acquired ten new machines, each providing two to 15 times the computing power of its predecessor. These frequent computer purchases were made necessary by RCU’s annual increase in mainframe usage, which is estimated to be about 20%. The arrival of the first computer in 1957 was accompanied by the creation of RCU’s first computing center (CC). The center was developed to support the computing needs of the academic community. A director (who later became the president of the university) and two computer programmers were hired to staff the center. The responsibility for overseeing the center’s operations was assigned to the provost. This center continued to operate as the sole computing facility at RCU until the mid-1960s when the university acquired its first computer to support its administrative services. At the time of the purchase, RCU established a separate data processing center (DPC) to support its nonacademic staff. DPC was placed under the responsibilities of VP of Finance. Since the establishment of DPC in the 1960s, the organization of computing services at RCU went through a number of structural changes. In 1980, the two centers were merged. In the fall of 1984, the university decided to put renewed emphasis on administrative computing and the academic and administrative computing operations were again reorganized. The administrative systems staff were moved to a new department, Information Systems Management (ISM), under the supervision of VP of Finance; the academic computing services were moved under the supervision of a newly appointed VP of Student Services (who received a mandate for improving the university’s computing and networking facilities). In 1990, after an internal review, RCU’s administration again restructured its computing services. The CC and ISM were integrated into one department, University Computing Center (UCC). 
At the same time, all network-related services (data networking, cable plant and telephone services) were moved to a newly established Data Network and Telecommunications (DNET) Department.
By the early 1990s, UCC grew to include over 100 staff (three management staff, 52 programmers and analysts, 42 operations staff and seven administrative clerks). The staff was organized in several groups: Office of the Director, Academic Operating Systems, Administrative Operating Systems, Educational Services, Computer Operations, Statistics and Numerical Analysis, and Applications Support. The annual budget for UCC was about $6 million; $3.2 million were spent on salaries. Until recently, these funds were directly allocated to UCC as a line item in the university’s overall budget. UCC received input and direction from both the users and senior administrators. User input was received through a committee, the Campus Advisory Board on Computing (CABC). CABC was established in 1968 to “discuss and comment upon future plans and communicate feedback concerning the operations of the center”. The interface between UCC and senior management was implemented through a direct reporting relationship between UCC’s Director, Bob Lewis, and the VP of Student Services, Dr. John Parker. The relationship between Lewis and Parker was not a close one. This was reflective of the adversarial working relationship and politics between UCC and the university administration in general. The senior administrators viewed UCC as an auxiliary service and treated it as a cost center. Their main concern was to reduce its costs. The management of UCC was assigned to technical people who lacked the political power and leadership abilities to alter these negative perceptions and attitudes towards UCC and its services. A staff member commented on this issue: The VPs that we reported to did not have a good understanding of what was involved. They had a very high level overview of what was happening. I don’t think they had a good understanding of really what was involved in providing a computing environment for either academic or administrative computing. They were not familiar at all with the center’s operations. I think this low level of involvement was typical in industries that the computing side was still seen as black magic. Computing services were really only understood by people doing it and by key user groups because they were very aware of what was involved. I don’t think senior management ever understood our operations. Indeed, the planning, the execution, the operation of computing services was all done by the technical people alone.
SETTING THE STAGE
Until the late 1980s, UCC enjoyed somewhat monopolistic power as it was the sole provider of computing services at RCU. All computer-related funds were centrally allocated to UCC, which decided which systems to implement and which services to offer to the users. This monopolistic advantage of UCC, however, was eliminated by two recent changes. The first threat to the power of UCC was caused by changes in the hardware industry. Specifically, the introduction of RISC-based2 computers into the marketplace made it possible for smaller organizational units to acquire their own relatively inexpensive and powerful computers. The introduction of these new computers created quite a concern in the computing industry, which up to that point was mostly dominated by large mainframes centrally located and administered in corporate IT units. For the first time, serious departmental users of intensive computing were able to gain access to powerful
computing facilities without having to purchase a mainframe or subscribe to a centralized organizational computing service. However, as this new technology represented a dramatic shift from the traditional, well-accepted “big iron” approach and the RISC technology was not yet proven as a robust alternative to mainframe-based computing services, there was some uncertainty about its eventual success in the marketplace. Some computer experts felt that mainframes and supercomputers would continue to dominate the intensive-computing niche of the market; others felt that the newer RISC-based machines would be able to erode the virtual monopoly of the mainframe and push the industry towards an alternative, decentralized-model of computing. In addition to the challenges created by the introduction of the RISC machines, UCC faced additional pressures from RCU’s administrators who unilaterally decided to decentralize the allocation of computer funds. The administrators felt that this decentralization of resources and decision-making authority would improve the quality and reduce the cost of the computing services at RCU. The president’s office strongly supported these changes because “a decentralized budgetary model encourages users to make informed choices as to which type of equipment or service is most effective, desirable and affordable for their particular needs”. Under this new plan, the various academic units (instead of UCC) would receive annual allocations of computing funds. In turn, these units would have to pay UCC, based on a chargeback policy, for all the computing services they receive from it. To further increase the efficiency of its operations, UCC would be required to recover all computing-related investments through chargeback fees. This decentralization of the computer funds was gradually implemented. During the first year of this model, only 10% of the funds were allocated to the academic units. Eventually, 100% of all computer funds were allocated to them. One of RCU’s vice presidents explained the rationale behind the move to this decentralized model of computing: It was a time of change and it was not easy for anybody but with the president being as sort of strong-willed as he is, he felt, and I agreed with him, that decentralization in the longer term was the best bet. It gave individual units choice of what they wanted to do. And the argument that I used to hear was that there would be a lot of unused MIPS sitting on people’s desks if you decentralized it. So, if you take the global view of the university there is a lot of redundant capacity and therefore you can have economies of scale by having a central machine and that’s a traditional argument. But then my response to it was that if I drive on a highway I see a lot of redundancy with cars having only one passenger. And the reason why we tolerate that one passenger in a car is the individual freedom, flexibility of the people. We felt that we needed to give a similar type of flexibility to our computer users. UCC personnel expressed strong opposition to this decentralization policy partly because of its potential limiting effects on their discretion and partly because of its lack of involvement in the policy’s planning and implementation. One UCC manager commented: The decentralization process has been proceeding on an ad hoc basis. Our organization was so uncertain and the critical thing was that the fee for service transition was never outlined in a planned manner. 
It wasn’t clear whether the university wanted us to become fee for service, whether they were going to force us to or not.
What was definitely clear was that none of our customers like the idea and the idea was never ever promoted within the university. There was no process by which the university community was involved and could buy into the idea.

Overall, the transition to a decentralized, chargeback system, coupled with the availability of more powerful, smaller computers, which were being acquired by the users independently, had a negative impact on the perceived power of UCC. This is reflected in the following comment by one of RCU’s deans:

The computing center was technically a very good organization that kind of lost its way in the early 1990s. At that time they were providing less and less of a service. They were becoming less and less relevant to what was going on in our department because the administrative systems were in place and people were using workstations. Computing services has not been a powerful department within the university since they started to decentralize its funding.
CASE DESCRIPTION

Project Initiation
After the introduction of the chargeback rates, a number of science researchers in chemistry, physics, engineering and other disciplines began lobbying the administration of the university to increase the level of computing support that was provided to them. This group of researchers, led by a chemistry professor, demanded that the established mainframe CPU usage rates be reduced for off-peak use so that researchers with intensive computing needs could perform their computing tasks in the evening hours without draining their research budgets. In addition, they requested that the university seriously consider the purchase of a numerically intensive supercomputer for its researchers. At the time, only a couple of supercomputing facilities existed in Canada. Researchers with a need for such facilities had to independently arrange and pay for access to them. The president of RCU responded favorably to these initiatives. Off-peak rates were drastically reduced. Most importantly, the president, who felt that “a first-class university should have first class computing facilities available to its researchers”, established a committee to evaluate the intensive computational needs of the researchers. Several researchers and CABC members were appointed to the committee, which was chaired by the VP of Student Services. After considering the needs of the researchers, the committee concluded that the university should indeed develop a large, numerically intensive computer service (NICS). The committee felt that such a facility would be pivotal in ensuring the future success of the university’s research endeavors. In response to this committee’s finding, the president created a university-wide vendor selection committee, composed of researchers and UCC staff members and chaired by two senior science professors, to identify and review candidate systems for this service and recommend a specific solution to the VP of Student Services for purchase.
Vendor Selection
Due to the uncertainty caused by the introduction of the RISC-based machines and powerful personal computers in the marketplace, the members of the vendor selection committee had difficulties agreeing on a specific system configuration for NICS while preparing the request for proposal (RFP). Some members of the committee believed that the proposed facility should consist of a single supercomputer. Others felt that the university should acquire a number of powerful workstations and connect them using a network. As a committee member pointed out, the selection of an optimal configuration was a difficult task due to the diversity of alternatives and preferences: The workstation technology was changing very rapidly and throughout the discussions there were proponents of the workstation solution, the clustered workstation solution, the multi-processor approach as well as proponents of the very expensive supercomputer, CRAY approach. Some felt that the only solution was the purchase of a “big iron”. Others were making the decision between doing their intensive computing on a central machine and their own personal computers. They would have runs that would take perhaps days to do on a personal computer but there was no problem in terms of cost once you bought the machine — the cost is fixed. There are no problems in terms of scheduling — you didn’t have to worry about anyone else’s workload. And because the style of computing was changing, you could, for example, break a problem up into small pieces that they could run a piece overnight and come back the next morning and look at the results and continue from that point. The change in computing at the time created a big question for us.
Because the committee was unable to specify the exact architectural configuration of the potential service, it decided to let interested vendors recommend specific configurations and products. However, all members of the committee agreed (and so indicated in the RFP) that the proposed service should be a UNIX-based3 one and should be scalable so it could act as “the beginning of a more comprehensive network-based large scale computing”. The RFP was sent to 35 vendors. Thirteen of them responded to it. The vendor proposals varied greatly, both in terms of computer architectures and processing power. The proposed solutions included super-workstations, mainframes, mini-supercomputers, near-supercomputers, supercomputers and various combinations of these. For about three months, the selection committee met to discuss the submitted proposals and review their technical and financial feasibility. However, the committee was unable to reach a consensus in selecting the “best” proposal due to two controversial issues. These issues related to the scale of the facility (whether the service should be based on a large supercomputer or a smaller machine) and its management (whether it should be managed by UCC or the Science Department). Due to these disagreements, the chairpersons approached the VP of Student Services and described the difficulties faced by the committee. During their consultations with him, they indicated that two proposals received considerable support but neither of them received the unanimous approval of the committee members. The first proposal was from Cray and recommended the acquisition of a Cray XMP-14 supercomputer. This solution represented the most powerful computer among the proposed systems and received the support of a few researchers who believed that a supercomputer was the only way to implement a numerically intensive service.
The second proposal was from Convex and recommended the purchase of a Convex C220 vector computer. This solution was endorsed by UCC staff members and many researchers who felt that even though the C220 was not a supercomputer, it would offer adequate computing power to the users at a significantly lower cost than that of the Cray. During its discussions with the senior administration, the committee was informed that the university was not in a position to purchase the Cray supercomputer due to its high capital cost and operating expenses. The committee was also asked to reconsider a proposal by IBM that it had rejected during its early deliberations. The proposal recommended the purchase of an IBM 3090 mainframe computer using the AIX4 operating system. The cost of IBM’s proposed solution was about $4 million. After discussing the feedback of the senior administration, the selection committee reconsidered and rejected IBM’s proposal again. The director of UCC and the chair of the selection committee wrote memos to the VP of Student Services indicating that the Convex solution was preferable to IBM’s proposal, because the cost of the Convex C220 system was significantly lower than the cost of an IBM 3090 even though both systems had comparable levels of performance. The following extract from the UCC director’s memo illustrates the reasoning behind his decision:

The Computing Center recommends that the University purchase the proposed Convex C220 system. The Convex is the superior choice because of its price, software maturity, performance, and ease of installation and operation. The Cray proposal is operationally very expensive and carries too much risk in terms of future cost and installation difficulty. The IBM proposal is too expensive relative to the performance of the computer. In addition, industry observers currently caution against purchase of the low-end IBM 3090 computer for economic reasons.

Despite the negative feedback received from both selection committee members and UCC staff, the senior administration of the university continued to express a strong interest in IBM’s proposal and engaged in discussions with the vendor to refine it. According to a senior administrator, because of RCU’s relationship with IBM, the administration felt that it could significantly influence IBM in improving the terms of the proposal. Indeed, the relationship with IBM was a multifaceted one. RCU was among the recipients of the largest IBM donations in Canada. Also, at the time, IBM was contemplating the implementation of an air traffic control project (as part of a governmental contract) and RCU was being considered as a candidate partner for this project. A UCC staff member commented on the attitudes of the senior administration:

In the minds of many people, the NICS project was part of a bigger plan. It was part of RCU and IBM’s existing and future relationship. At the time, I think the university was relatively predisposed to work with IBM. I recall that at the time we were talking about having a major computer lab out here and RCU was one of the candidate partners for it. The sad part is that lab was tied to some air traffic control bids and it never happened.

The apparent support for IBM’s solution was met with strong resistance by both researchers and UCC staff members. They were concerned with the high cost and poor performance of the IBM 3090 computer and the immaturity and instability of the AIX software.
As the following comment indicates, they attributed the administration’s support for the IBM proposal to non-technical, political reasons:

IBM’s proposal was not included in the short list of the selection committee, let alone be the top choice. Certainly the opinion of most of us was that [the selection of IBM’s proposal] was a decision made at very high levels of the university for political reasons that had nothing whatever to do with the technical suitability of the solution — but had to do with a relationship with IBM. That remains my opinion to this day.

To address the financial and technological concerns raised by faculty members and UCC staff, IBM modified its initial proposal to make it more economically attractive for RCU. According to involved IBM managers, IBM was willing to reduce the cost and risk to the university because it was under a lot of pressure to maintain its market share (which was being attacked by manufacturers of smaller machines) and it wanted to establish itself in the growing UNIX market. IBM’s modified proposal recommended a three-year, large-scale computing software joint study with the university. As part of this study, IBM was willing to give RCU free use of an IBM vector facility for the duration of the study, and transfer title of the machine to the university upon its conclusion. It would also waive any license fees for the use of AIX for three years, and offer at least a 50% discount for such fees after the completion of the project. IBM was also willing to provide RCU with an early support plan (ESP) for its pre-release version of AIX for a few months until the product became more robust and commercially available. In addition, IBM indicated that it was willing to assist RCU in securing external funding for the lease payments.

Despite the significant improvements in IBM’s proposal, the resistance among the UCC staff remained strong. In an eight-page report to the VP of Student Services, one of the senior programmers strongly opposed the acceptance of the IBM proposal because of its risk. The report listed a number of unsuccessful installations (and some “disastrous” ones) of the IBM 3090 vector facility and stated that the proposed solution offered low performance and a high level of software unreliability at too high a price. Other UCC employees voiced their strong opposition to the revised proposal as well. Unfortunately, as the following comment indicates, their opinions did not carry significant weight:

It didn’t do any good saying that we had problems with this system because the decisions were made outside UCC. I don’t think any of us had the ear of the president’s office. Those of us who said things developed the reputation for being troublemakers. A couple of technical people who had said things at the VP’s level were at the point where they were being shut out. They had the reputation as being negative towards IBM and therefore anything said was not being taken seriously. People were basically jeopardizing their own careers by arguing at levels above the UCC director. In fact, both of these individuals left UCC because of the way they were treated by the administration.

After considering the new proposal and the recommendations of the committee and UCC, the VP of Student Services selected IBM as the winning vendor. Many UCC staff members felt ignored and were angered by this decision:

The original decision to purchase an AIX system was viewed with total astonishment by most of the people on the technical side.
They couldn’t understand why the decision was made.
[That this was a mistake] was extremely clear to the technical people. The trend of moving to smaller machines was not a new trend. It was clear that advances in RISC-based systems would have dominated the mainframe market.

Many of the opponents of the IBM solution attributed the administration’s selection to political, non-technical reasons:

IBM interacts with the University at many levels. It interacts with the university on different kinds of computers and different faculties and different kinds of uses so presumably all these different interactions were taken into account when the final decision was made. The result in the end was that the university decided that when factors other than the price performance ratio that we were looking at narrowly on this project were taken into account, IBM could make a proposal that would benefit the university better overall. That may well be the case. I mean there are other things that come into play when you look at this from the president’s office. But I think it is true that the technical assessment was not that we buy an IBM computer.

According to the joint study agreement that was signed between IBM and RCU, the university was to receive an IBM 3090/150S mainframe with a vector facility operating the AIX operating system under ESP.5 The primary objective of this study was “to convert the major scientific applications from MTS (a non-IBM environment) to an IBM environment using AIX/370 and the Vector Facility”. In a related agreement, RCU was to acquire the hardware through a four-year lease. According to the lease, RCU was to make the following payments to IBM: $400,000 during the first year of the project; $650,000 during the second year; $1,275,000 during the third year; and $680,000 during the last year. According to an IBM executive, this transaction was structured as a lease (instead of an outright sale) because of RCU’s concerns about the eventual viability of the system:

From our point of view as well as RCU’s point of view there was a bit of anxiety about whether this solution was really going to make sense given the changes in the industry. Certainly some RCU people questioned whether this was going to be the right technology in the long term, but many others thought it would be a good solution. IBM did everything it could to adjust to the new environment. But, as you may know, when IBM was getting into the Unix area, it kind of stumbled a couple times in terms of the basic boxes it was making.

Finally, IBM agreed to purchase RCU’s Amdahl 5860 computer for $165,000. These agreements were supplemented by a standard Government Term Lease Agreement as RCU was a provincial university. This agreement included a “non-appropriation system return clause” that would allow RCU to return the machine if it was not able to receive appropriations of sufficient funds to make the lease payments after making bona fide requests for such funds.
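The lease schedule just described adds up to a figure worth comparing with the roughly $4 million quoted for IBM’s original proposal. A minimal sketch totaling the payments (all dollar figures are taken from the case; the net-outlay line is only an illustration and ignores operating and support costs):

    # Total of the four annual lease payments reported in the case.
    lease_payments = [400_000, 650_000, 1_275_000, 680_000]
    total_lease = sum(lease_payments)
    print(f"Total lease payments over four years: ${total_lease:,}")     # $3,005,000

    # IBM also bought back RCU's Amdahl 5860 for $165,000, so the net hardware
    # outlay (ignoring operations, staff, and software support) is lower still.
    amdahl_buyback = 165_000
    print(f"Net hardware outlay after the buyback: ${total_lease - amdahl_buyback:,}")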
NICS Installation
When the IBM mainframe was delivered to RCU, an early release version of the AIX operating system was installed. UCC staff members tested the system and quickly identified a number of issues related to the performance and reliability of the system.
UCC asked IBM to guarantee that it would address 18 specific issues that were identified by its staff during its early testing of the mainframe and software. Included in these issues was the need for a usage tracking software module that would enable RCU to track usage and bill its users. Six UCC staff members were assigned to maintain and support NICS during its pre-release stage. Also, 25 faculty members and their graduate students participated in the AIX pre-release testing. These researchers used the computer for free and reported problems to the UCC staff. A number of bugs in the system were identified by the users:

There were a number of problems with the pre-release version of the software, including wrong answers. The Pentium problem all over again. So there would be a situation where they would simply get the wrong answer in very simple situations so it wasn’t a complex one in a billion chance that the Pentium was. It was common and it was to the point that one plus one didn’t equal two. It was that simple.

Despite these problems, UCC spent a significant amount of energy and resources on getting the NICS service operational. The attitude of UCC’s management towards the implementation of NICS is reflected in the following comment:

Once the decision was made it was our baby. And it doesn’t matter if you don’t like the baby to begin with, you still have to work with what you have, you still have to offer the services based on resources we have. So I think we took the point of view of doing everything we possibly could to make that system work. We worked with IBM in solving a whole number of technical issues.

Due to bugs in the system, the announcement of NICS was delayed several times. As a result, IBM extended the ESP indefinitely until the system was put into production. To compensate RCU for some of the additional costs associated with these delays, IBM paid $300,000 to the university; $200,000 was allocated to the Science Department and $100,000 to UCC. This allocation upset many UCC staff members, who felt that UCC should be the only recipient of these funds (as it was now operating on a cost-recovery basis). After a number of delays, the usage tracking software was installed and the system was finally put in production about a year after its delivery. According to the usage policy, interested researchers had to apply to the VP of Student Services to receive approval for an account. The usage rates that were established for the service were $4.50 per minute for CPU usage, $0.50 per MB-minute for memory usage, and $0.08 per MB-day for disk usage. As part of the newly implemented chargeback policy, these charges had to be paid using distributed computing funds or research grants. A project participant described the effects of this policy on usage:

There were a number of technical problems that meant we couldn’t charge for the service initially but those eventually got resolved and at the point they turned charging on, usage dropped dramatically. Basically the system was being used by a whole number of people who simply didn’t have the money to pay for computing. Graduate students were using it; some researchers were using it. The assumption that they made
going into the project was that there would be funds available for this style of computing. However, as it turned out, the university budgets were being cut, researchers were not getting access to large amounts of grant money, and nobody had the money to pay for the service. At the same time, personal computing was becoming more powerful and affordable. And so from their point of view, the decision people were making was: do I run my programs on my PC or do I do it on NICS? If I can do it on the mainframe for free, then I’ll do that. Then I use my PC for word processing too. If I have to pay for it however, I’m going to bring it back and put it on my PC. So, when we actually got the charging operational on NICS, on that particular date, the usage dropped from 100% to less than 7%. In the first day of the service we generated something like $3.87! NICS continued to be operational with disappointing results. The average monthly CPU utilization for the first six months of its operation was about 480 hours (which is equivalent to about two-thirds of the system’s theoretical capacity). While the users continued to express their increased need for UNIX-based computing services, very few of them were willing to use NICS. Due to the wide availability of inexpensive RISC-based UNIX workstations and powerful personal computers in the marketplace, most users felt that the NICS rates were too high and a few of them began purchasing their own workstations and other computers using their research funds. While NICS was being put in operation, the administration of the university established a new senior administrative position to improve the relationship between UCC and the university (which was further deteriorated by the selection of IBM as the NICS’ vendor) and better coordinate the various technology units on campus. To fill this position, Dr. David Williamson was hired as an associate vice-president (AVP) of Information and Computer Systems. As a result of this administrative change, the reporting relationship between UCC and the senior administration was altered. The director of UCC began reporting to Dr. Williamson instead of the VP of Student Services. Dr. Williamson commented on the responsibilities of this newly created position: I think the university computing services has always had at RCU, and in fact across the country, a very good reputation as a first class service. The other departments in my portfolio were relatively small and less significant at the beginning, when I started. We sort of expanded their role in a way that made it more integrated over computing and communications. Even though the computing center was quite well respected for what it did, due to changes within the university and in the computing environment in general, the administration saw a need for reorganization and direction and probably for getting on with a different role for the 90s than its role in the past. Soon after his arrival to RCU, Dr. Williamson became aware of the issues related to NICS. Overall, he was concerned that UCC was losing both political capital and revenue by not taking advantage of the inexpensive RISC technology to offer UNIX computing services to the users (other than the expensive NICS service). More importantly, he was concerned with the ability of UCC to raise sufficient funds from usage charges to meet the next lease payment to IBM. As Dr. 
Williamson commented, this was a significant concern for him as the UCC budget was severely limited (due to the decentralization of the computer funds):
My initial awareness [with NICS] had to do what I think was about 300 thousand dollars, or something of that neighborhood, of IBM donations. From what I was told, essentially IBM gave the university that amount because it really hadn’t delivered what we had anticipated. I had a discussion with the VP about this. As I dug deeper to better understand the situation, I was getting the feeling that this project was not going to take off. In fact, because we were moving towards a cost-recovery model I was worried that when we eventually put the service in production it would not generate enough money for the next lease payment that was coming up. UCC, which was part of my portfolio, was expected to cost-recover all of its investments and I didn’t think that was possible with this system. So, I began examining the issue in more detail and kept the administrators closely informed and involved with all the decision making. To address the first concern, the lack of inexpensive UNIX service, UCC decided to offer a second UNIX service using inexpensive RISC workstations to users who could not afford to purchase their own workstations or use the NICS service. To implement this service, UCC acquired a Sun SPARCstation 2 and two Silicon Graphics computers. After considering the acquisition and maintenance costs of these computers, UCC set the general UNIX usage rates. These rates were significantly lower than those of NICS. Specifically, the rates for the general UNIX service were $0.50 per minute for CPU usage; $0.063 per MB-minute for memory usage; and $0.04 per MB-day for disk usage. Shortly after its introduction, the UNIX service became very popular: there were more than 900 accounts on this service, with an average CPU utilization of 670 hours per month. Due to the high performance and low cost of the new UNIX systems, many NICS users moved their accounts to the new service. However, about 75 of them, who needed to use the vector facility and certain AIX-based software on the IBM system, continued to use NICS. To address the issue related to the financial viability of NICS, Dr. Williamson spent many hours consulting with UCC staff and senior administrators. This was a critical issue for RCU which was facing the deadline of the third lease payment. At the time, the outstanding lease payments totaled almost $2 million while the market value of the IBM computer was estimated to be about $50,000 (according to the Computer Merchant’s Price Guide). The low demand for NICS and the depreciated value of the hardware itself made the continuation of NICS an extremely difficult choice for Dr. Williamson.
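To make the pricing gap concrete, the sketch below compares what a single job would cost under the NICS rates and under the later general UNIX rates quoted above. The rate figures come from the case; the job profile (CPU minutes, memory, disk) is purely hypothetical, and billing memory over CPU minutes is a simplifying assumption made only for illustration.

```python
# Chargeback rates quoted in the case (NICS vs. the later general UNIX service).
NICS_RATES = {"cpu_per_min": 4.50, "mem_per_mb_min": 0.50, "disk_per_mb_day": 0.08}
UNIX_RATES = {"cpu_per_min": 0.50, "mem_per_mb_min": 0.063, "disk_per_mb_day": 0.04}

def job_cost(rates, cpu_minutes, mem_mb, disk_mb, disk_days):
    """Estimate the charge for one job under a given rate schedule.

    Simplifying assumption: memory is billed per MB-minute over the CPU minutes
    used; disk is billed per MB-day for the storage held.
    """
    return (rates["cpu_per_min"] * cpu_minutes
            + rates["mem_per_mb_min"] * mem_mb * cpu_minutes
            + rates["disk_per_mb_day"] * disk_mb * disk_days)

# Hypothetical job: 30 CPU minutes, 8 MB resident memory, 50 MB of disk for 5 days.
for name, rates in (("NICS", NICS_RATES), ("General UNIX", UNIX_RATES)):
    print(f"{name}: ${job_cost(rates, 30, 8, 50, 5):,.2f}")
# NICS: $275.00
# General UNIX: $40.12
```

Under these assumptions the same job costs roughly seven times as much on NICS, which is consistent with users' view that the NICS rates were too high relative to workstation alternatives.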
CHALLENGES FACING RCU

Issues
Two years after the inception of the agreement and about a year after the introduction of NICS, RCU was faced with the decision of whether or not to continue the service. The timing of the decision was also affected by the preparation of the university’s annual budgets, which needed to take into consideration the lease payments to IBM. If RCU decided to continue to operate NICS, Dr. Williamson knew that UCC’s budget would be severely impacted by the lease payments and the lack of revenue. Although abandoning the project could reduce the financial loss to the university, he knew that it would be difficult to convince IBM to accept the system’s return. Also, the
abandonment of NICS could create a potential public predicament for RCU. Abandoning the service after having paid about $2 million for it and having spent over a year to eliminate software bugs could be a major embarrassment for the university. On the other hand, if the university continued to operate NICS, there would be no available funds for the acquisition of additional RISC computers, further limiting the ability of UCC to respond to the needs of its users and increasing its credibility liability. Furthermore, Dr. Williamson needed to decide whether it was indeed wise for UCC to continue offering centralized UNIX services as RISC machines were becoming less and less expensive enabling academic departments and even individual researchers to acquire them on their own (and therefore reducing the demand for a centralized service). In summary, Dr. Williamson needs to develop a plan to manage the issues related to the future of NICS while (1) ensuring that the university avoids additional financial losses and a public embarrassment and (2) ensuring that the needs of users for UNIX services are appropriately satisfied. What should he do?
ACKNOWLEDGMENT
The author would like to express his appreciation to the individuals who participated in this case study and acknowledge the insightful comments provided by Albert S. Dexter and the anonymous reviewers.
ENDNOTES
1. Certain names and other information have been altered to protect the identity of the organization and individuals involved in this case. In all other respects, the case provides an accurate account of the facts.
2. The data presented in the case are based on structured interviews and an analysis of numerous documents (meeting minutes, agreements, memorandums, electronic messages, etc.).
3. RISC stands for Reduced Instruction Set Computer. A RISC processor contains a small set of simple instructions. Such processors are capable of performing faster processing through the use of the limited instruction set, uniform encoding, homogeneous register sets, and simple addressing modes.
4. UNIX is an interactive, time-sharing open operating system. AIX stands for Advanced Interactive eXecutive, which is IBM's version of UNIX. Even though IBM had just announced the development of AIX at the time, the software was not yet ready for commercial release and use.
5. This was a single-processor IBM 370 Enterprise System Architecture (ESA) machine rated at 12 MIPS. The vector facility was rated at 10 MFLOPS. It had 64 megabytes of central storage (the maximum available on this model). Two IBM PS/2-70s were used as front-end processors. AIX's Transparent Computing Facility (TCF) was used to connect these machines so that they appeared as a single computer to the end users. Initially, the following software was installed on the NICS: AIX/370 operating system, FORTRAN VS compiler, IBM's Engineering and Scientific Subroutine Library (ESSL), International Mathematical and Statistical Libraries (IMSL), Numerical Algorithms Group (NAG) Library, U.S. Department of Energy Laboratories' SLATEC library, and standard UNIX utilities.
Charalambos L. Iacovou teaches information systems at the McDonough School of Business at Georgetown University. His research focuses on the management of project failures, the adoption of information technology by small organizations, and the role of trust in electronic commerce. His papers have appeared in Management Information Systems Quarterly and the proceedings of conferences in Canada, Europe, and the United States.
This case was previously published in the Annals of Cases on Information Technology Applications and Management in Organizations, Volume 1/1999, pp. 174-185, © 1999.
Chapter XXII
Improving PC Services at Oshkosh Truck Corporation

Jakob Holden Iversen, University of Wisconsin Oshkosh, USA
Michael A. Eierman, University of Wisconsin Oshkosh, USA
George C. Philip, University of Wisconsin Oshkosh, USA
EXECUTIVE SUMMARY
This case presents the problems encountered at Oshkosh Truck's IT Call Center and PC Services in improving the productivity of the IT Department and the satisfaction of its users. The case describes the process of handling user problems, from the moment a problem is received at the help desk until a technician resolves the issue at the user's desk, how well the process works, and some problems associated with it. Oshkosh Truck collaborated with University of Wisconsin Oshkosh researchers on developing metrics instruments and better processes to assist in improving performance. The effort focused on showing the value of a standardized PC platform to the rest of the organization. At the end of the case the goal is much closer but not yet attained, and the decision on how to proceed is left for students to make. The Oshkosh Truck case can expose students to the following key concepts of IT management:

• Appreciate the complexities associated with supporting a large number of computers in a business.
• Discuss the issues associated with platform standardization.
• Discuss the problems associated with IT investment justification.
• Discuss how to account for costs of IT, including total cost of ownership and activity-based costing.
• Understand the implementation and value of a metrics program, including issues related to design and use of specific data-collection methods like online questionnaires, automated data collection, and interviews.
BACKGROUND
Oshkosh Truck Corporation (OTC) was founded in 1917 in Oshkosh, Wisconsin. The company began by specializing in the design and manufacture of all-wheel-drive trucks. The company currently focuses on the design and manufacture of trucks and truck bodies for concrete placement, snow-removal, refuse hauling, fire and emergency, and defense markets. OTC includes brands such as Oshkosh, Pierce, McNeilus, Medtec, Geesink, and Norba. While the corporation’s headquarters remain in Oshkosh, it has manufacturing operations in six states and four other countries. Oshkosh Truck takes pride in the high quality, performance, and reliability of its vehicles. This focus, along with a strategic focus on product development, lean cost structure, and global distribution for specialty markets, has produced a financially successful organization. Since 1996, the company also has engaged in an aggressive acquisition strategy to enhance its product offerings, diversify, and fuel global growth. Between 1996 and 2002, the organization acquired eight different companies including Pierce — a manufacturer of fire trucks, Medtec Ambulance Corporation, and the Geesink Norba Group — a European manufacturer of waste hauling truck bodies. OTC is a Fortune 1,000 company with sales of over $1.7 billion in fiscal 2002. Sales are expected to increase in fiscal 2003. The organization has a number of competitors in different markets, including: Volvo Truck Corporation, Wheeled Coach Industries, Wittke Waste Equipment Ltd., and Advanced Mixer, Inc.
SETTING THE STAGE
The IT division of OTC consists of a centralized IT group and four separate IT groups at Oshkosh Truck, Pierce, McNeilus, and Geesink-Norba (Figure 1 shows a partial organizational structure). The corporate IT group, headed by Bill Gotham, Director of Corporate Infrastructure, is responsible for the Call Center (help desk), and for establishing standards and guidelines for networking and computing systems. Historically, this group also has supported the servers for the Oshkosh Truck company, primarily because the servers are located in the same building as the group, in Oshkosh. Within the last few years, OTC has moved towards a centralized call center to handle requests for help (“calls”) from employees of Oshkosh Truck, Pierce, McNeilus, and Geesink-Norba. Typical calls include problems with network access including lost passwords, and problems with PCs, printers, and Microsoft Office. Most requests are made through telephone calls or e-mails. A significant portion of the calls are routed to the PC Services group or the Networks group within the company where the caller works. The Call Center also helps to solve certain IT-related user problems remotely, although Mary LaPine, Support Specialist at the Call Center, feels that the Call Center employees should be able to solve a larger proportion of such problems remotely with appropriate software and training. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Figure 1. Partial organizational chart (positions shown: Robert Bohn, CEO, Oshkosh Truck Corp.; Charles Szews, CFO; Dave Brantingham, VP, IS; Bill Gotham, Director, Corp. Infrastructure; Bart Kautza, Manager, Corp. Desktop Services; Paul Rosenquist, Supervisor, Call Center; Tom Vandenberg, Corp. Networks Architect; Greg Mezera, Systems Eng.; the Directors of IS for Oshkosh Truck, McNeilus, Pierce, and Geesink-Norba; and the PC Services Supervisors for each company, including Troy Batterman, Supervisor, PC Services, Oshkosh Truck)
Corporate Desktop Services, supervised by Bart Kautza, is responsible for developing PC standards with a view to enhance PC support and to reduce the number of calls. In addition to the corporate IT, each company has its own IT function that includes PC support, network support, software support, and limited software development. For each company, a PC Services group supports the hundreds of PCs within that company. PC support includes hardware/software acquisition, installation, and helping users with hardware/software problems. Oshkosh Truck alone has over 700 PCs and about 800 users. The company has been using contractors for PC support. Currently, Troy Batterman, Manager of PC Services, has three such contractors working for him. In addition, the group that prepares requests for proposals (RFPs) has two technicians dedicated to support the PCs for its 30 employees. Microsoft Office is the predominant end-user software at Oshkosh Truck company. In addition, the company has been using J.D. Edwards OneWorld, an ERP system, on AS400s for over 10 years. Within the last few years, J.D. Edwards was implemented within the newly acquired companies, too.
CASE DESCRIPTION
It was early January 2003, and Bart Kautza was looking at graphs of the latest data from the user satisfaction survey and the proportion of tickets that had been closed within five days. Both graphs showed progress, but also some problems. Most of all, he was concerned with how to use the data effectively to communicate to business managers the need that he saw for more standardized PC platforms. Some significant holes in the data material prevented him from presenting the most convincing argument that standardizing the PCs would achieve a financial benefit to the company. In June 2000, the corporation had initiated a reorganization of PC services that centralized many of the duties associated with that function. Through this effort, Corporate infrastructure took over the responsibility for handling all desktop standards, hardware and software purchasing agreements, establishment and enforcement of procedures and mentoring. Additionally, the help desk function became centralized at Corporate, with a single phone number for help from anywhere in the world. The organization successfully installed HEAT, a call center support program from FrontRange, and the OTC Call Center was using it to handle employee computer problems in all its manufacturing operations around the world. However, when performance was evaluated in mid-2002, it was discovered that although HEAT allowed the support staff to handle calls more efficiently, the number of calls was not decreasing, the average time required to close a service request (a “ticket”) was not falling, and the number of “open tickets” remained as high as ever. In other words, the system had only a marginal effect on the staff’s effectiveness. Bart felt that the cause of these problems was that OTC did not have a standardized PC platform: The main reason we have all those calls out there is our mixed environment. We need to get to a standardization of equipment and operating system and application version. I think we’re always going to have a lot of calls until we get standardized. His gut feeling was that most of the problems were caused by old hardware and PCs running older versions of the Microsoft Windows operating system. The different platforms required PC technicians to be able to solve problems in a number of different environments. Additionally, it was difficult to establish a uniform set of procedures to solve a certain type of problem. Each PC was an individual problem center, with its own problems and set up. While some of the problems were assumed to be potential problems on other PCs with a similar configuration, tracking down those PCs was too much effort. It was easier to wait until the problem was reported and then fix it. Bart reasoned that if all PCs were standardized, any identified problems that were likely endemic could be patched, or fixed en mass. This would have the effect of reducing the number of calls, time to close tickets, and overall PC downtime. However, his suggestion to standardize was not well received. Upgrading all the PCs and maintaining a standard platform going forward would require a significant investment by the corporation. Dave Brantingham, VP of IS, and Bill Gotham, Director of Corporate Infrastructure, were not insensitive to this problem, but recognized that such an investment would be met with resistance by business departments, which pay for their PCs, but don’t get charged for the support provided by PC Services. Dave Brantingham expounds on the issue: Copyright © 2006, Idea Group Inc. 
The problem is that management views IT investment as a black hole. Money goes in, but where are the results? We need a stronger ability to justify IT expenditures…to show this is what you are getting for your investment. Or, this is what you couldn’t do if you didn’t invest in IT. Dave realized that to address this problem, IT needed to show that when PCs were not performing properly, it had a real impact on the organization. As a member of the University of Wisconsin Oshkosh MIS Advisory Board, Dave was in close contact with professors from the school. He turned to them for help, thinking that this project would be perfect for their newly formed Center for Software Excellence (CSE). The CSE is an organization created by the MIS Department at the university to support both business and research. A primary goal of CSE is to help businesses solve actual business problems by including university MIS professors on the problemsolving team. A secondary goal is to provide the professors material for researching and teaching MIS problems and practice. Three members of the university’s MIS Department (the authors) joined a team in Corporate IS at Oshkosh Truck. On June 3, 2002, Bill Gotham headed the kickoff meeting of the new team. He set the stage for the project: At the PC desktop level, we don’t have a strong set of metrics. I’m a strong believer in: if you can’t measure it …you can’t fix it. We feel we’re doing a good job. We don’t have any metrics that say we’re doing a good job. We’re not capturing complaints either. We don’t know if we’re working on the right things at the right time. I want to know, for example, what is the percentage of tickets for business-critical tickets (emergency and high priority) that are completed within five days and what is the percentage that take longer than five days? That will give us an indication of if we are doing the right thing at the right time. If the percentage completed within five days is higher than the percentage completed in over five days, then I’m a happy camper. If not, I’m an unhappy camper. The meeting resulted in the formation of the OTC Desktop Steering Committee with Bart Kautza as team leader. Other team members were Paul Rosenquist (Supervisor, Call Center), Troy Batterman (Supervisor, PC Services, Oshkosh Truck), Ryan Collier (New Technology Integrator, Corporate Desktop Services), and the authors. The committee’s charge included two related goals: (1) determine the cause and cost of the excessive number of support calls, and (2) determine how to reduce the number of calls and the amount of time needed to solve problems. It was determined that the first order of business was to gather information. The authors were charged with this task because they were outside the corporation and, it was reasoned, they could bring a more objective focus to the information collection. The team was to reconvene in a month to discuss the CSE team’s findings. During the next month, the authors met and interviewed Bart Kautza, Ryan Collier, Mary LaPine (Support Specialist, Call Center), Troy Batterman, and Paul Rosenquist. Their goal was to identify the current practice of PC support at Oshkosh Truck, the information collected during the support process, the information required for effective support, and the information needed to determine if investment in standardization was justified. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Oshkosh Truck Corporation employs a centralized approach to its PC Support. All requests for service come into the Call Support Center (CSC), which gathers information from the user and then dispatches the service ticket to an appropriate technician. The technician fixes the problem and then “closes the ticket” by entering information directly into the HEAT system. Mary LaPine explains the operation of the Call Support Center: We have a staff of four people. Most of these are interns from local colleges or technical schools. The CSC is staffed from 5 am to 9 pm on workdays, and the staff is on call after hours. We rotate on-call status weekly. On a slow day we receive about 50 calls. Our primary job is to collect as much information as possible about the problem. A large number of these calls are password related. Most employees have two different passwords and many have more than five. They forget them often. We handle this problem right here by resetting the password. Other problems are handled by other portions of the company. Who handles the problem is determined by the type of problem. We record all calls in the HEAT system. When a call comes in for a problem we don’t handle right here, we first specify in HEAT the company that produced the call. We handle calls from four different companies. After the company is selected, we then attempt to determine the call type. The call type is a broad description of the problem. After the type of call is determined, we then determine the sub-call type. This is a more specific determination of the problem. The rest of the information collected is about the user and machine. After we collect this information, our job is done and the call becomes a ticket, which is routed to the appropriate person. Although there are several different types of problems and groups of people that handle those problems, this project focused on fixing PC problems at the Oshkosh Truck company. The organization refers to these problems as “break/fix problems”. Break/fix problems with hardware and software are handled by PC Services in Oshkosh Truck. Paul Rosenquist explains the procedure used by PC Services to handle tickets: After the Call Center dispatches the ticket to a technician, the technician is supposed to acknowledge receipt of the ticket to the affected user. This is done to reduce the number of user callbacks because they don’t know the status of their service request. However, the technicians are not very good at this. After the service is performed, the technician is supposed to identify the cause. For example, was it user error or training issue, was it a virus, was a software upgrade needed, or was software needed not installed? This information is entered into the HEAT system by the technician. However, currently the field is not required to close the ticket, and it often is not included. Also at closing, the technician is required to enter the amount of time needed to fix the problem. Technicians are currently very bad at entering information at all levels. HEAT has a lot of fields that currently do not hold information. These fields could be used to keep information on the configuration of each user’s machine. The corporation
currently uses ZENworks to keep track of all PC configurations. Zen automatically goes out and reads PC configuration and stores it in a database. Zen is not totally up and functional at this point. Additionally, it does not work well with some of the older PCs in the organization. HEAT has the capability to interact with external databases, but we currently do not do that. We have to figure out how to do this so that all information about a PC is available when a support call comes in. Troy Batterman explains the process used within PC Services to handle tickets: Our responsibility is for PCs, laptops, and printers for the Oshkosh Truck campus. The campus consists of seven buildings spread over a 10-mile radius. We support approximately 800 users and 700 machines. Our job is to fix hardware and software problems on these machines, procure new equipment from pre-specified vendors, install new software, and set up new machines. When a ticket is routed to us by the CSC, we start by recognizing the ticket. This is generally done at the end of each day with an e-mail message. We notify the user that their ticket has been received and provide them with a target fix date. We use the priority levels assigned by the HEAT system to determine the target date. Emergency and High Priority tickets mean the user cannot perform any work. These we must respond to within two hours and fix by the end of the next day. Routine tickets are problems the user can work around. We respond to these tickets by the end of the business day and have a target fix in five days. The lowest priority is a project, which is getting a new machine. The target time for this is 21 days. We begin each day by meeting to discuss the open tickets. These are the tickets that have been acknowledged and scheduled. I often assign tickets to technicians, but also let them choose tickets at times. We also look at new tickets, which have come in, and schedule them if required. The assignments make up the technicians’ work load for the day. We may also discuss difficult problems at this time. We generally close 25-30 tickets per day. The centralization of PC support with one point of contact for all users has made the process smoother and more understandable. However, there still are problems. Bart Kautza: PC Services is not perceived well at all in the rest of the company. We’d like to have the number of open tickets below 20. It is now at 80-90. We recently began logging complaints into HEAT, but don’t have information on complaints further than two months ago. If I were to guess, the number one problem is customer service/satisfaction with PC Services. I also think that communication and the procedures used are big issues…the techs are supposed to do journaling as they work on a problem. They’re not very good at this. They also close tickets (in HEAT) at the end of the day. When they do this, they are supposed to enter the solution, the time to fix, and make changes to the call type if it was originally in error.
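The ticket fields and priority targets described above can be summarized in a small sketch. The fields (company, call type, sub-call type) and the response and fix targets follow the descriptions given by the Call Support Center and by Troy Batterman; the class, the function names, and the translation of "end of business day" into eight hours are illustrative assumptions, not part of the actual HEAT product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Response/fix targets as described by Troy Batterman. The response windows for
# "routine" and "project" are approximations (not stated exactly in the case).
TARGETS = {
    "emergency": (timedelta(hours=2), timedelta(days=1)),   # fix by end of next day
    "high":      (timedelta(hours=2), timedelta(days=1)),
    "routine":   (timedelta(hours=8), timedelta(days=5)),   # respond by end of day
    "project":   (timedelta(days=1),  timedelta(days=21)),  # e.g., a new machine
}

@dataclass
class Ticket:
    company: str        # Oshkosh Truck, Pierce, McNeilus, or Geesink-Norba
    call_type: str      # broad description of the problem
    sub_call_type: str  # more specific description
    priority: str
    opened: datetime

    def targets(self):
        """Return (respond-by, fix-by) datetimes for this ticket's priority."""
        respond_in, fix_in = TARGETS[self.priority]
        return self.opened + respond_in, self.opened + fix_in

t = Ticket("Oshkosh Truck", "PC", "Lockup", "routine", datetime(2002, 10, 24, 9, 0))
print(t.targets())  # acknowledgment target and five-day fix target
```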
Ryan Collier: There’s a lot of inefficiency. The ticket contains only the facility of the user. Techs spend a lot of time searching for the person. Also, they don’t group their tickets by facility. They may drive to a location for a ticket in the morning and then drive back for another ticket in the afternoon. Tickets also don’t contain detailed information on the problem. If the tech arrives on location and can’t find the user and can’t figure out the problem…it’s a wasted trip. There’s a lot of repetition in the set of problems faced by the techs. Techs should know that if they fix a problem, they should also check the PC for other known problems. For example, an older version of McAfee causes lock-ups. They should update this program even if it’s not currently causing a problem. Techs don’t have any incentive to work fast; they are hourly contract workers. They just have guidelines on the number they should close per day. They often pick and choose tickets to meet these guidelines. Techs need to be held to higher standards. They currently do not have to worry about doing it well or fast. Mary LaPine expands on some of the problems and possible solutions: We have too many problem types. We should have a smaller, broader set, which can then be specified by the tech when the problem is fixed. They can do that now, but are not very good at entering the information. With the turnover and lack of training of our interns…two people may enter two different problems for the exact same call. I personally think, with more training, we could handle more problems here and send less out to PC Services. We have a remote control tool, but it’s not used much and we’re not supposed to use it. Troy Batterman: We’re fire-fighting. We’re stuck handling emergency and high-priority tickets. Highpriority is also not just problem-based. Some users are assigned a high priority no matter what their problem is. My boss is often called directly from other higher-ups and he directs me to handle their problem right away. High-priority problems are causing lower priority problems to be closed later. I know we are missing target dates, but I don’t really have any hard data to support that. We are starting to track that. We have no standard operating system or hardware across the company. We don’t have a life cycle approach to handling PCs. A machine is used until it is not feasible to fix and then a new one is purchased. We also use trickle-down computing. If a user gets a new machine and the old one is still serviceable, that machine goes to a different user. The root cause seems to be older systems. We have Pentium 75s that are stretched to the max. They cause emergency problems because the whole machine just dies. Also, many of our machines are still running Windows 98, which causes a lot of problems. We update the operating system on machines that can handle it. But this is not always done even if it’s feasible. I can tell you this…new systems mean that support time goes way down.
The centralization of PC support also has created a wealth of information for monitoring the performance of the support organization. However, it is not clear how these reports are used or what they mean. Paul Rosenquist explains the reports: HEAT comes with about 300 generic reports. I’ve been modifying some of these, but the primary report we do, that goes out on a daily basis, we create pretty much manually. It shows the 10 oldest support tickets and the 10 oldest projects1 and who they’re assigned to. The second report we do is an assignment analysis. It shows the number of assignments (who got the ticket) and the number of tickets that were created the previous day. The third report from HEAT is weekly and shows the number of open tickets by how long they’ve been open, one to five days, six to 10, 10 to 15, and more than 15. The fourth report is monthly, and identifies the number of calls per day and the percentage that go to answering machines rather than operators. The final report we do on a regular basis is number of assignments made and closed per week. Mary LaPine: I get the reports, but I don’t have a clue what they mean. I don’t really know what management wants in them. I sometimes help create the reports, but I don’t know why I include what I include. I don’t know who gets them or what they do with them. The reports are just raw numbers. We don’t do any interpretations. The Desktop Steering Committee met again on July 9, 2002. At the meeting, the professors presented their findings and recommendations (the report is reproduced in Appendix A.6). Dr. Jakob Iversen explains: We found two areas to be deficient. The first is the data collected for decision making. We need to collect data on the cause of the problem, the time required to fix the problem, the cost of downtime to the organization, and user satisfaction. Some attempt has been made to collect this information. However, collection has been unreliable at best. Some important data has not been collected at all. The second area that we found to be deficient is the procedures used to handle tickets. We think the process used is weakly enforced and is deficient in some areas. A new procedure needs to be developed and enforced. Dr. Michael Eierman continues: The report we handed out details our recommendations. I should first note that these recommendations are aimed at achieving three goals: (1) reducing the number of calls, (2) reducing the elapsed time between receipt of a call and closing the ticket, and (3) improving customer satisfaction. Our first recommendation is that you begin collecting data from users on lost time and satisfaction with PC services any time they have a ticket closed. This information would help determine the actual cost of a PC problem and how well the service provided by PC Services is perceived by the users. The second recommendation is to link HEAT to ZENworks so that specific problems can be easily Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
linked to specific PC configurations. This will help us learn if the frequency or severity of specific problems is associated with specific hardware configurations, and may help develop an argument for standardization. The third recommendation is to have the technician close the ticket at the user's desk and require that in that closing procedure they record the problem cause and solution, working time required to fix the problem, and elapsed time spent on the problem. Closing at the user's desk when the problem is fixed will allow us to capture accurate information on these items, which may, in turn, reduce the number of calls and help figure out the actual cost of each ticket. The fourth recommendation is to increase the use of data for decision making. This includes developing reports that are designed to answer specific questions for specific decision makers. The final recommendation is a set of suggestions aimed primarily at improving the PC support process to help reduce the time required to fix a problem. These items are detailed in the report you have.

Dr. George Philip:

We believe that to address many of these recommendations, you need to develop a metrics program. First, I will briefly explain the metrics program. A metrics program should only be developed in the context of solving some organizational problem. We believe the report we have given you provides that context. Theoretically, a metrics program begins with a problem or goal that requires analysis to determine a possible cause of the problem. After this, a measurement program is implemented to collect data. The data is then analyzed to determine a possible solution. We now have determined a problem and possible causes. The next step is to develop the measurement program. In doing this, we need to keep in mind several important factors. First, we need to start simple, focusing on a few simple variables before developing complex measurements. Second, we need incentives for those who supply the data. Third, we have to make the program highly visible and make sure that people know that this data will not be used against individuals. Fourth, the data must be used in decision making.

The team went on to discuss the recommendations. Many issues were identified, and often team members learned about information currently captured or reported that they were not aware of. The team also discussed priorities for addressing recommendations. After about an hour, Bart took over and gave the group direction:

We need to go after recommendation one…because that's where we really get at reducing the number of calls…that's our number one goal. And Troy, you guys meet every day; if you like some of these things in here (the recommendations), pick from them. Say, "hey guys, these are some things you are going to do on every call. Make this checklist. Every time you go out, use the checklist because that will help reduce the number of calls".

Bart suggested a two-pronged approach for the team. The first prong focused on improving the service call process to attempt to reduce the time required to address a support call and help reduce the number of calls. Troy Batterman was assigned to take the lead on this. The second prong focused on developing and implementing a metrics
program to measure the impact of down-time (the time users were without PC use) on the cost of doing business, and to identify the factors that were causing the large number of support calls and the long time required to fix the problem. Bart assigned Paul the task of modifying the HEAT ticket closing form so data on the problem cause and solution as well as time to fix the problem were required to close the ticket. The techs were not very good at doing this when it was not required, but to get good data, the techs need to do this. The authors were assigned to develop the metrics instrument. In developing the metrics instrument, the professors decided to start simple with a limited survey that would address three important questions: (1) Are users satisfied with the service offered by the support function? (2) Are the technicians performing their job well? and (3) How much does PC downtime cost the company? To address the first question, three questions were included in the survey. Users were asked to rank their responses on a scale of 1 to 5, where 1 was Very Satisfied and 5 was Very Dissatisfied. The questions were about the courtesy and professionalism of the people who worked on their problem and their satisfaction with the overall service. In an August meeting of the OTC Desktop Steering Committee, Dr. Jakob Iversen explains: It is important to begin by determining the general level of satisfaction users have with the support function. This information can be used as a baseline for determining if any changes we make to the support process are having the desired effect. To address the second question, four questions were included in the survey. Users were asked to use the same five-point scale to answer questions on their satisfaction with the responsiveness and communication effectiveness of the technician, the expertise of the technician, and the time it took to fix the problem. Dr. Jakob Iversen explains: These questions will focus on the performance of the technicians. They will help us determine if the new procedures we are developing are having the desired impact on keeping the user informed of progress and reducing the time required to close a ticket. To address the final question, two questions were included in the survey. These two questions were designed to assess the impact of the problem on the organization. They differed markedly from the other seven questions. The first question asked the users to enter the number of production hours lost due to the problem, and the second asked the users to evaluate the type of impact that the problem had on the user’s productivity. For this question, users were asked to read two scenarios, A and B, and determine if their situation was: (1) Exactly Like A, (2) Mostly Like A, (3) Between A and B, (4) Mostly Like B, (5) Exactly Like B. The two scenarios were: Scenario A: I had virtually no loss of production as a result of this problem. I was either able to work fully productive on other issues or was able to get around it with at most a minor loss. Scenario B: This problem greatly hampered my ability to perform. It was very difficult for me to do anything productive. While I may have been “working”, the things I did weren’t necessary and/or added little value to the company. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Dr. Michael Eierman explains: The raw number of hours lost gives us an idea of how much time the user lost because of the problem. However, it does not tell us how serious the lost time was. The scenario question allows the user to indicate how important the loss of productivity was…we realize this survey is short and does not address all our concerns. We view this effort as a beginning that addresses our most important questions. As we begin to collect data and refine the operation of PC services, we can change the survey to address new concerns or collect additional data. We want to start with a small survey to increase the chance that the users will take the time to answer the questions. On August 20 the team met to approve the new ticket handling procedures, which Troy Batterman implemented the next day. The survey took until September 12 to develop. After the survey instrument was operational, it was implemented on a test basis for two weeks. The survey procedure randomly sent an e-mail to 50% of the closed break/ fix tickets. The e-mail was an invitation to rate the service the user felt he/she received. Included in the message was a URL to an intranet questionnaire (see Appendix A.1). On September 25 the team met to review the preliminary results. In the first test run of the questionnaire, nine responses were received. In reviewing the responses, it became clear that some of the respondents had misinterpreted the answer scale. This led to a redesign of the layout of the questionnaire. Full data collection started on October 24, 2002, but when the team met on October 30 to discuss the results, only 30 surveys had been completed. It was determined that more data was needed. Data collection continued until January 14, 2003. In this period, PC Services closed 850 break/fix tickets, the questionnaire was sent to 304 of those tickets, and 126 questionnaires were filled out.
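The mechanics of the survey procedure and the resulting response rate can be illustrated with a short sketch. The figures (850 closed break/fix tickets, 304 invitations, 126 completed questionnaires) are those reported in the case; the sampling function and ticket identifiers are hypothetical.

```python
import random

def select_for_survey(closed_tickets, fraction=0.5, seed=None):
    """Return the subset of closed break/fix tickets whose users get an e-mailed survey link."""
    rng = random.Random(seed)
    return [t for t in closed_tickets if rng.random() < fraction]

invited = select_for_survey(range(850), fraction=0.5, seed=1)  # hypothetical ticket IDs
print(len(invited), "invitations under a strict 50% sample")

# Figures actually reported in the case:
closed, surveyed, returned = 850, 304, 126
print(f"Surveyed {surveyed / closed:.0%} of closed tickets; "
      f"response rate {returned / surveyed:.0%}")
# Surveyed 36% of closed tickets; response rate 41%
```

The gap between the 50% sampling target and the roughly 36% of tickets actually surveyed reflects the fact that full data collection only ran for part of the period.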
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
In the middle of January 2003, Bart and the team had data available from a number of different sources (Appendix A.2, A.3). The time to make a decision on how to move forward was at hand. The survey data showed a fairly positive attitude from the respondents toward PC Services. This was a welcome change that could be the result of PC Services implementing some of the recommended changes in the PC support process. This could indicate that the need for large-scale change in the computing platform was not really warranted. On the other hand, the survey data also showed that the 125 respondents identified a total of 227 hours of lost productivity for the problems that their computers had. The team was unsure of how to interpret this productivity data. If they extrapolated this to the total number of break/fix tickets that PC Service closed during the survey period, it would mean that over 1,500 productive hours were lost. How should they put a dollar value on those hours? How accurate were those figures? Did the respondents count only the hours that they lost, or did they also include any hours lost by those impacted by their inability to do work? Ryan, for example, brought up a case where the shipping clerk’s PC was down for two hours, and during the time period many truckloads
of materials just sat idly in the lots waiting to be checked in. Moreover, did this data provide enough cost justification for standardizing the PC platform? Adding to the dilemma were the survey results that showed that respondents felt, on average, that the impact of PC down time of productivity was minimal. Respondents rated the impact as Mostly Like the scenario that showed virtually no loss of productivity from down time. Did this mean that PCs themselves had little impact on productivity? Was it possible that computers were fixed quickly enough that it did not constitute a major problem (users might be able to do other work while waiting for the computer to be fixed)? Did those respondents that indicated that there was a significant impact on productivity due to downtime really experience a significant loss or were they just frustrated? The team also examined data from the HEAT system. This data showed that PC Services improved the percentage of tickets closed per month over the past year. However they still had not attained the organizational goal of closing 80% of tickets within five days. Also, the volume of tickets has remained steady at 200-375 tickets closed per month. The team was uncertain as to how to interpret this data. The improved performance in ticket closing was noteworthy and indicated that some of the changes put in place were having the desired effect. However, since they had not attained their goal, perhaps more work on the ticket closing procedure needed to be done. Additionally, while the ticket closing performance was improving, the number of tickets closed per month had not increased, indicating a decreased number of tickets. This information raised additional questions: If their efforts were reducing the number of tickets and with it, lowered productivity losses, should they focus on further improving their procedures and forget about standardization as a way to reduce PC Services costs? On the other hand, these reductions may not have been related to the new PC Services ticket handling procedures. Finally, Bart noted that they still had not been able to connect ZENworks to HEAT. Therefore, they did not have any information on what platforms were causing the most problems. This information could have provided a clear indication that standardization would reduce the cost of PC Services. Should they continue to work to connect the two systems? Did they need that data to make the case for standardization, or would productivity loss and cost to repair be enough? Did it matter at all what platforms were causing the most problems if they could continue to reduce the total number of tickets?
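The extrapolation the team is debating is simple arithmetic, sketched below with the figures given in the case. The loaded hourly rate is an explicitly hypothetical placeholder, included only to show how a dollar figure would be derived; the case does not supply such a rate.

```python
# Scale the lost hours reported by survey respondents up to all break/fix
# tickets closed during the survey period.
reported_hours = 227      # total lost hours reported by respondents
respondents = 125
tickets_closed = 850      # break/fix tickets closed during the survey period

hours_per_ticket = reported_hours / respondents          # ~1.8 hours per ticket
estimated_hours = hours_per_ticket * tickets_closed      # ~1,540 hours

hourly_cost = 40.0        # hypothetical fully loaded cost per productive hour ($)
print(f"Estimated lost hours: {estimated_hours:,.0f}")
print(f"Estimated cost at ${hourly_cost:.0f}/hour: ${estimated_hours * hourly_cost:,.0f}")
# Estimated lost hours: 1,544
# Estimated cost at $40/hour: $61,744
```

Whether a per-ticket average can legitimately be extrapolated in this way is exactly the question the team raises, so the result is illustrative rather than a cost estimate.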
The Alternatives
Bart finally came down to deciding between three different options to present to Bill Gotham:

Option 1: Standardize the platform. While the number of calls is showing a downward trend, the team still has a problem meeting its goal of 80% of the tickets closed within five days. They now have data on the cost of these problems, and although they are unable to tie that cost to specific PC platforms, it may be enough to convince upper management to approve a platform standardization plan.

Option 2: Get more data. The major weakness in the current data set is the lack of integration between HEAT and ZENworks, which means that it is impossible to identify the platforms that cost the most in terms of number of service tickets and lost productivity. There is also an opportunity for more detailed data on the costs
by sending the customer survey to all individuals that have a break/fix problem. There might even be other sources of data that they had not yet identified, which would also help shed some light on the situation and support Bart and the team in making a decision.

Option 3: Improve procedures. Because the number of calls is beginning to show a downward trend, the cost of platform standardization may never be justifiable. The team should therefore continue with the current data collection efforts only and concentrate the effort on improving the procedures that may reduce the number of calls and the time to service individual calls.

Which option(s) should Bart present to Bill? Are there any other courses of action? Should Bart and Bill take any option to Dave Brantingham?
FURTHER READING
Iversen, J. H., & Mathiassen, L. (2003). Cultivation and engineering of a software metrics program. Information Systems Journal, 13(1), 3-20.
Niessink, F., & Vliet, H. V. (2001). Measurement program success factors revisited. Information and Software Technology, 43(10), 617-628.
Oshkosh Truck. Available at http://www.oshkoshtruck.com/
The HEAT system. Available at http://www.frontrange.com/heat/
ZENworks. Available at http://www.novell.com/products/zenworks/
ENDNOTES
1. A project includes purchase of new software, computer, and so forth, and naturally has a longer time to close than regular tickets.
2. The recommended questionnaire was largely adopted and can be seen in the Appendix.
Appendix 1. HEAT Questionnaire
Appendix 2. HEAT Questionnaire Response Data

Average responses on a scale of 1 (Very Satisfied) to 5 (Very Dissatisfied):

1. The courtesy and professionalism of the Support Desk analyst that took your call?   1.44
2. The courtesy and professionalism of the person who worked on your trouble ticket?   1.51
3. The responsiveness of the technician that worked on your trouble ticket?   1.80
4. Communication about plans to fix the problem (i.e., notification of any delays, etc.)?   2.13
5. The expertise of the technician that worked on your trouble ticket?   1.67
6. The time it took from reporting the problem to fixing the problem?   2.23
7. Your overall satisfaction with service on this trouble ticket?   1.92
9. Impact on productivity (1, none - 5, complete loss of productivity)   2.11
Heat quality survey: number of responses per question (126 responses per question)

Response                  Q1   Q2   Q3   Q4   Q5   Q6   Q7   Q9
Very dissatisfied          2    4    6   13    4   12    7    8
Somewhat dissatisfied      0    3    6    8    5   12   10    9
Neutral                   11    9   15   23   17   24   15   20
Somewhat satisfied        25   21   29   21   19   23   28   41
Very satisfied            88   89   70   61   81   55   66   48
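The average scores in the first table follow directly from these response counts. As a check, the short computation below recomputes the averages for a few of the questions using the counts above and the 1-to-5 weighting (for Question 9, 1 corresponds to no productivity impact and 5 to a complete loss).

```python
# Recompute the reported averages from the raw response counts above.
counts = {  # question: (very satisfied, somewhat satisfied, neutral,
            #            somewhat dissatisfied, very dissatisfied)
    "Q1": (88, 25, 11, 0, 2),
    "Q7": (66, 28, 15, 10, 7),
    "Q9": (48, 41, 20, 9, 8),
}

def weighted_average(c):
    """Weight responses 1 (most favorable) through 5 (least favorable)."""
    return sum(weight * n for weight, n in zip(range(1, 6), c)) / sum(c)

for q, c in counts.items():
    print(q, round(weighted_average(c), 2))
# Q1 1.44   Q7 1.92   Q9 2.11  (matching the reported averages)
```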
Question 8: How many productive hours were lost due to this incident?

Hours Lost   # Responses   Total Hours
0            76            0
1            23            23
2            12            24
3            3             9
4            2             8
5-10         5             25
11-20        0             12
21-30        2             46
31-40        2             80
Total        125           227
Appendix 3. HEAT Ticket Data
The graph that follows shows the percentage of tickets that are closed within five days and after five days of being opened. The organizational goal is that 80% of all tickets should be closed within five days.
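The graph itself is not reproduced here, but the metric it plots can be computed directly from ticket open and close dates. The sketch below shows that computation against the 80% goal; the sample ticket records and field layout are hypothetical examples, not HEAT data.

```python
from datetime import date

GOAL = 0.80

def within_five_days_rate(tickets):
    """tickets: list of (opened, closed) date pairs for closed tickets."""
    closed_fast = sum((closed - opened).days <= 5 for opened, closed in tickets)
    return closed_fast / len(tickets)

sample = [(date(2002, 10, 1), date(2002, 10, 3)),
          (date(2002, 10, 2), date(2002, 10, 10)),
          (date(2002, 10, 5), date(2002, 10, 8))]
rate = within_five_days_rate(sample)
print(f"{rate:.0%} closed within five days; goal met: {rate >= GOAL}")
# 67% closed within five days; goal met: False
```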
This graph shows the total number of tickets closed by month. This data is the sum of the data included in the previous graph.

Month   Total Closed
Jan     263
Feb     264
Mar     327
Apr     258
May     379
Jun     284
Jul     374
Aug     339
Sep     259
Oct     362
Nov     282
Dec     200
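For reference, the yearly total and monthly average implied by the table can be computed directly from these figures; the range (200 to 379 per month) is roughly what the case describes as a steady monthly volume.

```python
# Summary of the monthly closure counts in the table above.
monthly_closed = [263, 264, 327, 258, 379, 284, 374, 339, 259, 362, 282, 200]

total = sum(monthly_closed)            # 3,591 tickets for the year
average = total / len(monthly_closed)  # ~299 per month
print(total, round(average), min(monthly_closed), max(monthly_closed))
# 3591 299 200 379
```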
Appendix 4. New Call Handling Procedures

OTC PC Services Call Handling Procedures

Call Update Procedures

Acknowledge Call
• In the Assignment details, be sure to acknowledge the call immediately after the user has been notified that we have the call and have scheduled to work on it. This notification can be in the form of a voicemail, e-mail, PC visit card, or direct communication.

Journal Entry Details
• Add any communications, such as e-mails and phone calls to the customer, as a new journal entry. This will help show what has been communicated to the customer.

Call Closing Procedures

All calls must be closed on-site as soon as the call is completed. Use iheat from any web browser. The iheat address to connect is http://10.xxx.xxx.xxx/iheat. Then click on the HEAT Call Logging icon. ALL of the following steps must be completed to properly close a call.

Call Log Details
1. Update the Description field so that it properly identifies the problem that you worked on.
2. Update the Solution field with a detailed description of how the problem was resolved.
3. Be sure Company is set to Oshkosh Truck.
4. Verify/Update Call Type matches the problem of the call.
5. Verify/Update the Sub Call Type detail if applicable.
6. Select a Cause that matches the root cause of the problem you resolved.

Assignment Details
7. Be sure that your name is in the Technician field.
8. Acknowledge the call if you haven't already done so.
9. Enter Time Spent in Hours. This field must be filled in accurately before the call can be closed.
10. Resolve your assignment.
11. Close the call.
Appendix 5. Financial Statement

OSHKOSH TRUCK CORPORATION
Condensed Consolidating Statement of Income
Fiscal Year Ended September 30, 2002

Net sales                                            $ 1,743,592
Cost of sales                                          1,483,126
Gross income                                             260,466
Operating expenses:
  Selling, general, and administrative                   143,330
  Amortization of goodwill and other intangibles           6,018
Total operating expenses                                 149,348
Operating income                                         111,118
Other income (expense):
  Interest expense                                       (21,266)
  Interest income                                          1,160
  Miscellaneous, net                                      (1,555)
Total other income (expense)                             (21,661)
Income before items noted below                           89,457
Provision for income taxes                                32,285
Income after taxes                                        57,172
Equity in earnings of subsidiaries and
  unconsolidated partnership, net of income taxes          2,426
Net income                                           $    59,598
All numbers are in $1,000.
Appendix 6. Recommendations
This appendix reproduces the Recommendations Document that the professors presented to Oshkosh Truck on July 9, 2002. The only editing performed has been to format it to fit into this space. The recommendations proposed here are aimed at achieving three goals:

1. Reduce the number of calls.
2. Reduce the elapsed time between receiving a call and closing the ticket.
3. Improve customer satisfaction.
Recommendation 1
Collect data from users on lost time and user satisfaction after a ticket has been closed. Also keep track of the percentage of users who provide the data.

Rationale: There are two reasons for collecting this data: One, it can be used to determine the actual cost of a PC problem (in addition to the tech's time), which also may help determine priority levels. Two, it can be used to determine the general user satisfaction with PC Services.

Process: When a "break and fix" type of ticket is closed, the user should receive an e-mail with the questionnaire as shown next²:
Recommendation 2
Link the HEAT system to ZENworks so that the problem specified in a ticket can be linked to the user's system: hardware, operating system, and application software.

Rationale: Linking the problem to the hardware/software would help determine the relationship between hardware/software configuration and the frequency/severity of problems. This would potentially help reduce the number of calls.
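A minimal sketch of the kind of cross-reference such a link would enable, assuming the two systems share an asset identifier; the data source, table and column names used here (PCServices, HeatTicket, ZenInventory, AssetTag, Model) are illustrative only, not the actual HEAT or ZENworks schemas:

import java.sql.*;

public class TicketByPlatformReport {
    public static void main(String[] args) throws Exception {
        // Hypothetical ODBC data source holding the HEAT ticket table and an
        // extract of the ZENworks inventory; names are illustrative only.
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        Connection con = DriverManager.getConnection("jdbc:odbc:PCServices");
        Statement stmt = con.createStatement();
        // Count closed tickets per hardware model to relate call volume
        // to platform configuration (Recommendation 2).
        ResultSet rs = stmt.executeQuery(
            "SELECT inv.Model, COUNT(*) AS Tickets " +
            "FROM HeatTicket t, ZenInventory inv " +
            "WHERE t.AssetTag = inv.AssetTag AND t.Status = 'Closed' " +
            "GROUP BY inv.Model");
        while (rs.next()) {
            System.out.println(rs.getString("Model") + ": " + rs.getInt("Tickets"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}

A report along these lines would show whether particular hardware models generate a disproportionate share of tickets, which is the relationship between configuration and call frequency that the recommendation is after.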
Recommendation 3
Immediately after a problem is fixed, have the technician enter the root problem, the method of solution, and the time spent fixing the problem into HEAT in a standardized way, as specified in 3.1-3.5:

3.1. Within HEAT, provide a list of high-level methods for fixing problems and have the technician select all methods that were used (e.g., reinstall application, apply patch, reimage PC, change settings, etc.).
Rationale: This would help to search for different ways of fixing a problem in order to reduce the time required to fix a problem.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
366 Iversen, Eierman, & Philip
3.2. Within HEAT, provide a list of high-level causes and, if applicable, another level of sub-causes (this list could be identical to the list used by Support Desk to enter the problem as reported by the user). Have the technician enter the problem by selecting from the list(s). This is in addition to the descriptive comments currently entered by technicians.
Rationale: While users may report symptoms, they may not be aware of the root problem. Identifying the real problem would help to better associate problems with solutions. This in turn could reduce the time to fix problems.

3.3. Have the technician enter the start and end date and time for the task of fixing the problem.
Rationale: This would help to estimate the time to fix recurring problems and help with future scheduling.

3.4. Have the technician enter the number of hours spent working on the problem.
Rationale: This would help to figure out the cost of each ticket.

3.5. Have the technician close the ticket at the user's site immediately after the problem is fixed.
Rationale: This would help to improve the quality of data entered by the technician. It is more difficult to remember activities related to a ticket at a later time.
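One way to picture the standardized entry that items 3.1-3.5 describe is as a closure record with constrained fields; the class, field and value names below are illustrative, not part of HEAT:

import java.util.Date;

// Illustrative closure record; names and value lists are hypothetical.
public class TicketClosure {
    // 3.1: high-level fix methods the technician selects from
    public static final String[] FIX_METHODS = {
        "Reinstall application", "Apply patch", "Reimage PC", "Change settings"
    };
    // 3.2: high-level root causes, selected from a list rather than free text
    public static final String[] ROOT_CAUSES = {
        "Hardware failure", "Software defect", "Configuration error", "User error"
    };

    String fixMethod;     // one of FIX_METHODS
    String rootCause;     // one of ROOT_CAUSES
    Date workStarted;     // 3.3: start of the fix task
    Date workEnded;       // 3.3: end of the fix task
    double hoursSpent;    // 3.4: technician time, used to cost the ticket
    boolean closedOnSite; // 3.5: true when closed at the user's desk
}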
Recommendation 4
Increase the use of data for decision making:
• Receivers of reports identify their goals and information needs.
• Producers of reports make sure that each report is aimed at meeting specific goals of the receiver and that information is presented concisely (1-page reports) at appropriate frequency.
• The reports should include the purpose of the report, the control bounds, and a description.
Recommendation 5
Suggestions that would reduce the time to fix problems:

5.1. Combine calls within the same building into a single trip.
5.2. Call users ahead of time to make sure that they are available when the technician goes to the user's site to fix a problem.
5.3. Technicians journal all the activities related to a ticket.
5.4. Document procedures for fixing recurring problems/tasks. Create a checklist of things to check every time a technician visits a computer (apply latest patches, antivirus definitions, etc.).
5.5. Explore far more extensive use of remote control by PC Services, and also by Support Desk personnel, to fix additional low-level problems.
5.6. Improve communication between PC Services and Support Desk.
5.7. Align the goals of PC Services with those who do the support work: Explore ways to provide incentives for contractors. Explore having OTC employees do PC support.
Jakob Holden Iversen holds an MSc in software engineering and a PhD in computer science, both from Aalborg University. He is currently an assistant professor of MIS at the University of Wisconsin Oshkosh, USA. He has worked with software process improvement and measurement for six years.

Michael A. Eierman holds an MS in management information systems from the University of Wisconsin Madison and a PhD in management information systems from the University of Minnesota. He is currently an associate professor of MIS at the University of Wisconsin Oshkosh, USA. His current research focus is in object-oriented technologies.

George C. Philip holds a PhD in operations research from the University of Iowa. He is a professor and team leader of MIS at the University of Wisconsin Oshkosh, USA. His areas of research and teaching include software design and development, database development, and developing Web applications.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 330-351, © 2004.
Chapter XXIII
LXS Ltd. Meets Tight System Development Deadlines via the St. Lucia Connection

Geoffrey S. Howard, Kent State University, USA
EXECUTIVE SUMMARY
LXS Ltd., a Toronto software house, has identified high market demand for their proposed new product called Estitherm, a Web-based software tool that supports heat loss calculations for architectural engineers designing structures. Estitherm’s development requires sophisticated Java programming skills, however, and the project stalls when LXS is unable to hire enough additional programmers to be able to meet the development deadlines dictated by competition. Through lucky coincidence, LXS’ chief scientist stumbles onto a pool of Java talent while vacationing on the Caribbean island of St. Lucia. Negotiations follow, a contract is signed and the project is quickly brought to successful completion with the aid of Caribbean programmers, working via the Internet. Similar contract arrangements hold the promise for improved economic conditions in Caribbean nations and can reduce software backlogs for companies in developed nations, but better mechanisms are needed to bring together buyers and sellers of IT services.
BACKGROUND
Operating in Toronto since 1986, LXS Ltd. was founded by Lane Bartlett and David Whitsell, two programmers previously employed by CN Railway. At CN, they had been working on a C-language implementation of a freight tracking system that relied on bar code technology.
Figure 1. LXS organization (organization chart). Positions shown: President Lane Bartlett; Controller Karen Shafer; Counsel Les Forneaux; Chief Scientist David Whitsell; Development Manager Leif Essen; Marketing Dir. Davis Pineau; Trucking Sys. Team Lead Ann Owens; Rail Systems Team Lead Ed Darnell; Web Systems Team Lead Trevor Leeson; Testing Mgr. Leslie Gooden; Production Croft McIntire.
That project bogged down in overruns and was eventually cancelled, but the system's concepts and algorithms had considerable promise, so LXS was founded to produce and market a version of the rail freight system, which was completed successfully in 1988. The package sold well internationally, and LXS grew rapidly. By 1996 the firm employed about 75 programmers and another 12 people on the support staff, was generating about $26M (Canadian) annually and had successful product offerings in the railway, trucking and warehouse inventory control application areas. Five years later, sales had reached $47M, but the programming staff had only grown to 90 because of the difficulty of finding trained talent in the highly competitive job market. There were an estimated 950,000 unfilled IT jobs in the U.S., and Canada was experiencing similar skilled labor shortages. LXS had added a handful of Web-based applications to its product portfolio, and had organized as shown in Figure 1.

As Figure 1 shows, product development was organized by application areas, with the bulk of the work residing in Ann Owens' Trucking Systems and Ed Darnell's Rail Systems groups. Each group consisted of about 40 programmers, most of whom worked on supporting the successful C-language software packages that accounted for the overwhelming bulk of LXS' revenue. A few of the luckier ones in each group were assigned to designing extensions and refinements for future releases of their packages. In 1997, the "Web Systems" group was formed to explore Web technology and to develop some small-scale product prototypes. LXS had been slow to recognize the
potential of HTML technology because Chief Scientist (and LXS cofounder) David Whitsell was skeptical that the Internet would be able to provide the needed bandwidth cost effectively. By early 1997 it was apparent that Whitsell had been too pessimistic, and LXS found itself trying to catch up with the rest of the industry. As part of this catch-up strategy, Trevor Leeson was hired to head the Web group. Leeson had previously been senior programming manager with the Canadian Broadcasting Corporation, which had gone live with one of the first (and best-rated) online Web-based programming guides in the industry. His Web experience included in-depth knowledge of CGI interfaces, PERL and Java, and he was an enthusiastic and visionary cheerleader for Web technology. Since Leeson was brought into LXS as an outsider, he initially was received coldly, understandably, by the programmers who had been assigned to his new group. Quickly, though, his Scot accent, roaring laugh and sense of humor, and almost nutty enthusiasm for the future of the Web won him respect and cooperation within the group. In addition, he proved himself to be a technical wizard, able to write Java code apparently off the top of his head, with no design support, and make it work right the first time. Nobody else in the group was close to this level of Java ability, so Leeson quickly became a respected leader.

Initially, the new Web group spent all their time attending courses and seminars in order to "tool up" with HTML and Java. They also were sent to an in-depth Windows NT course to understand the architecture, configuration and support of the Microsoft Internet Information Server, as this was the target systems platform for the server-based Internet applications that they proposed to explore for product development.

During this time, Dave Ott, one of Trevor's senior programmers, played a round of golf with Jason Marks, a neighbor and friend. During the round, Marks talked of his problems at work, where he is an architectural engineer for a large engineering design firm. One of the many steps in designing and then obtaining construction permits for commercial buildings requires careful calculation of the thermal properties of the structure. The outside climate, seasonal variation, room dimensions, wall thicknesses and materials characteristics all have a bearing on the heat loss and gain calculations. Each interior space must be carefully studied and complex calculations performed to assure that adequate BTUs and airflow will be available in both heating and cooling seasons. This process is fairly straightforward, since the thermodynamics involved are well-understood, but the calculations and analysis are very time consuming. Marks was doubly frustrated because there were at the time, surprisingly, no good PC-based software packages available that automated this design function. Ott, of course, immediately saw this as an obvious software development opportunity. He arranged to meet Marks the next day for dinner, and they talked further.

Marks explained that in the engineering design and construction industries, large design firms such as his compete for contracts to design (but not build) commercial structures. These firms provide a complete array of design services, including design aesthetics, functionality and fitness for purpose, structural loading, survivability, code compliance, electrical and plumbing design, permitting, inspection and HVAC (heating, ventilating and air conditioning) design.
The customers of these design firms are contracting companies that actually perform the construction, working under the design guidance of the large engineering firms. The customer contractors range in size from the very small (approximately five employees) to very large firms that are not quite large enough, however, to possess their own in-house engineering design functions.

In the course of the conversation, both Marks and Ott realized that the real marketing advantage of a thermal design software package would derive not from its use in-house by the large design firm. Instead, Marks proposed to almost literally give away the package to the contractors who purchase design services from his firm. This would enable many of these contractors to do initial heat loss estimating on their own, providing their customers with better cost estimates, faster and more accurately. This gesture would serve as a goodwill mechanism that might bring large-scale design business to Marks's company in the same way that giving away pharmaceutical samples to medical offices bootstraps business relationships and contracts. The more they talked, the better the idea sounded, particularly given the void of PC-based thermal estimating software presently available to the engineering design industry.

Dave Ott then prepared a three-page synopsis of this product opportunity and presented it to his boss Leeson, who immediately passed it up the line. Bartlett and Whitsell quickly saw the potential. After only a two-week market-potential study performed for LXS by KPMG, it was clear that the proposed software product was a winner. The proposed package, which Leeson suggested should be called "Estitherm", was quickly approved and funded in November of 1997 as a major product development project for LXS. Further, both Marks and the KPMG consultants suggested that the package should be made available as a Web-based application. Contractors would be able to log onto the site and follow a dialog, entering design specifications for their buildings using sophisticated drag-and-drop graphics, and be able to immediately receive a complete HVAC specification set. In return, the firms making Estitherm available would be building customer goodwill and obtaining contact information for all of the contracting firms that used the Estitherm site.
SETTING THE STAGE
In Toronto, frustrated managers at a software house bite their nails because they have a winning product, plenty of funding, but not enough Java programmers to finish the product and beat out their competitors. Two thousand five hundred miles south, on the Caribbean island of St. Lucia, frustrated managers at a small, new contract programming firm bite their nails because they will soon be laying off much of their young Java programming staff for lack of work. What to do? The solution is obvious, but achieving the needed connection between domestic buyers and overseas sellers of software services is anything but easy.

What can be done to eliminate the information technology (IT) skills shortage? The inability of companies in the developed nations to find enough programmers to complete their projects is rapidly becoming a strategic emergency (Blumenthal, 1998; PITAC, 1998). This skills shortfall is so severe that it is said (PITAC, 1998) to be constraining the overall growth of the U.S. macroeconomy. Other developed nations such as Canada are experiencing the same shortfalls. Expanded sources of IT expertise must be tapped. LXS Ltd. is no exception. One of their key projects, Estitherm, runs on the Web and enables quick and accurate estimated calculations of the size and type of heating and cooling
equipment needed to satisfy a contractor's requirements. Demand for Estitherm is high, but the project is nearly a year behind its original development timeline. What to do?

Meanwhile, economic and development ministers in the small island nations of the Caribbean are struggling to develop stable and growing economies. They must find a way to break away from the rapidly declining plantation-based agriculture of the last century. They must decrease their reliance on unstable tourist income and reverse the brain drain as their best educated and most talented youth flee to Europe and North America for the lucrative jobs that their island homes cannot provide. What to do?

Some Caribbean nations have taken tangible first steps to develop offshore IT business. The St. Lucia National Development Commission has built a 20,000-square-foot IT incubator facility to house programmers and provide advanced software development tools and high-speed Internet access, but most of the facility sits idle, wanting for contracts. How can more business be generated?
CASE DESCRIPTION

The Product Team
The Estitherm product development team was formed rapidly. Since it was to be a Web-based product, the team became part of Trevor Leeson’s group. Six programmers were assigned into various roles, as shown in Figure 2.
Figure 2. Organization of the Web Systems team (organization chart). Positions shown: Web Systems Team Lead Trevor Leeson; Lead Java Programmer Sharon Difford; Java Programmer Vijay Rai; Java Programmer Vicky Johanssen; Testing and QC Leslie LeMieux; Lead HTML Programmer Steve Knessen; HTML Programmer Higechi Wong; J++ Toolsmith Cynthia Davis.
Estitherm Architecture
Leeson initiated the Estitherm project with a series of informal team brainstorming sessions soon after the last programmers had returned from their training courses. Initial discussions centered on overall architecture, and the team decided quickly that most of Estitherm would be written as a Java applet. This meant that most of the Estitherm program code would be transferred from the server to the client’s machine when the application was initially invoked on the Web. After the transfer, the program would run on the client’s machine, freeing busy central servers to attend to other tasks. Java was especially appropriate to this task because it is “platform-neutral”, meaning that Java applets would run on almost any computer running almost any browser software. During execution of Estitherm on the distant client machine, the program will make data requests via the Internet back to the server to obtain materials properties and climatological data from the Estitherm support database. This rough overall system architecture appears below.
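A minimal sketch of that client-side callback in 1998-era Java follows; the lookup path and parameter names are illustrative assumptions rather than Estitherm's actual interface:

import java.applet.Applet;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch only: the applet runs on the client and calls back to the server
// it was loaded from for materials and climate data.
public class EstithermApplet extends Applet {
    public void init() {
        try {
            // getCodeBase() points at the server that delivered the applet,
            // so data requests go back over the same Internet connection.
            URL query = new URL(getCodeBase(), "lookup?material=brick&city=Toronto");
            BufferedReader in = new BufferedReader(
                new InputStreamReader(query.openStream()));
            String rValue = in.readLine();   // e.g., a thermal resistance value
            in.close();
            showStatus("Materials data loaded: " + rValue);
        } catch (Exception e) {
            showStatus("Estitherm data lookup failed: " + e.getMessage());
        }
    }
}

Because the applet calls back to the host it was served from, the server handles only small data requests while the calculation work stays on the client, which is the division of labor the team was counting on.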
Timetable and Project Management Scheme
As 1998 began and the rugged Canadian winter hit its worst, Team Leader Leeson found himself in a software development manager’s “dream” situation. He had a clear project objective, a product that they were confident would be successful in the marketplace, strong executive support from Bartlett and Whitsell and a well-trained team
Figure 3. Overall Estitherm architecture. The Java applet code runs on the client computer and communicates across the Internet with the server computer, sending Estitherm database queries and receiving responses from the Estitherm materials and climatological database.
of technically current programmers. Even better, he had the full attention of the team because the rugged January weather eliminated most recreational distractions. They could focus intensively. He drafted an overall project plan, and then modified it in consultation with his staff. The plan called for a working prototype of Estitherm to be ready by September 1998, with all testing and QC complete by late November, and production release at the end of December 1998. The overall plan was submitted to Essen and then to Whitsell, who approved it with minor modifications and authorized the budget as requested. This project management outline appears as Figure 4. The project plan above reflects labor costs only, and allows for 36% fringe benefits costs above the equivalent hourly rate of the technical personnel involved. KPMG’s certification fee was an estimate, and included projected variable expenses above their flat fee for small system certification. In addition to these costs above, another $28,700 was estimated for hardware needed to support the system, and $18,200 for the first-year software licenses. Team Leader Leeson produced a set of basic project management tracking tools, including a Gantt Chart and CPM diagram. The Critical Path Method graphic was not really necessary because it was obvious to everyone at the start that the first-release coding of the Java applet would be the constraining milestone upon which the entire project depended. Leeson and his boss Leif Essen agreed to hold 30-minute progress briefings each Thursday afternoon for Bartlett and Whitsell. Key technical staff were also to be included in these meetings when their expertise was needed. This simple project management methodology was expected to enable immediate detection of slips in the planned development schedule.
Funding
The funding levels shown in the project plan were presented to Development Manager Leif Essen in mid-December of 1997 and quickly approved by him and President Lane Bartlett. Salary costs would be allocated to the project based on actual hours reported each week. Disbursements for hardware and software would be timed as requested by Team Leader Leeson, with the only requirement being a three-day lead time notification to Controller Karen Shafer.
The Project Begins

Internal Architecture
Work on Estitherm proceeded rapidly and on schedule. Leeson assigned Java programmers Difford and LeMieux to work with HTML guru Steve Knessen to work out the details of the Estitherm architecture. The resulting scheme was as described earlier. All of the climatological data needed to support heat loss and gain calculations would be obtained from U.S. NOAA international databases and formatted and loaded into a Microsoft SQL Server database that would reside on an Estitherm server. This climatological data included worldwide temperatures, humidity, wind and insolation, in all of their seasonal variations. The Server was planned to run Windows NT release 4.0, with Microsoft IIS actually providing Web and database hosting services. SQL Server would use industry standard ODBC (Open Database Connectivity) protocols to support intercommunication with the HTML applications on the clients. The database would also
LXS Ltd. Meets Tight System Development Deadlines 375
Figure 4. Estitherm project plan (budgeted amounts are CDN, labor only)

Detailed Architecture Complete (360 person-hours; target 22 January 1998; budgeted $16,200): Design the Estitherm database record layout, the Java applet object structure, and the intercommunication routines.
Interface Design Complete (600 person-hours; target 5 March 1998; budgeted $28,800): Layout and prototype client screens in the thermal design dialog, then prototype and tune the dialog with user involvement.
Engineering Algorithms Complete (180 person-hours; target 18 February 1998; budgeted $15,300): Engage licensed professional engineer to assist in algorithm design and validation for all thermodynamics module calculations.
Program Design Complete (340 person-hours; target 20 March 1998; budgeted $17,000): Design the internal structure for the Java applet routines.
Build Server to be Test Bed (160 person-hours; target 17 March 1998; budgeted $7,200): Install and test and configure Microsoft IIS on server machine to host testing.
Applet First Release Coding Complete (1,280 person-hours; target 17 July 1998; budgeted $70,400): Write all first release Java code for the applet and run it in test mode in local browser.
Server Database Support Coding Complete (410 person-hours; target 24 April 1998; budgeted $26,650): Code all routines necessary to support database lookups on server.
Support Complete (260 person-hours; target 8 September 1998; budgeted $10,400): Design and test backups, recoveries, database maintenance routines, logging and audit routines.
Testing Complete (600 person-hours; target 16 October 1998; budgeted $24,000): Build 500 test cases for various building designs; validate all cases.
Beta Test (40 person-hours; target 17 December 1998; budgeted $1,800): Release to selected beta testers for comments and corrections (8 weeks in duration).
Certification (40 person-hours; target 22 December 1998; budgeted $16,500): KPMG Certification Testing and liability validation certification.
Production Release (25 person-hours; target 29 December 1998; budgeted $1,250): Place in production.

TOTAL budgeted: $235,500.
host extensive tables of the engineering properties of materials used in construction. The thermal transmissivity of, for example, an 8 cm-thick layer of brick is considerably higher than that of a comparable thickness of pine. Heat loss through concrete floors is much more rapid than if that floor is underlain with a thickness of sand. All database access events would be logged using the audit and journal capabilities built into Windows NT 4.0, thus allowing troubleshooting and bug fixing to proceed rapidly.

On the client side, Estitherm would initially execute by loading HTML code from the server that would present a series of forms to the user. These would be implemented with support from FrontPage, with extensions on the IIS server, and would walk users through a series of input panels requesting initial descriptive data about the proposed building design. After that data had been obtained and validated by editing routines on the server, the Estitherm Java applet would be invoked. This highly sophisticated routine would present the user at the client computer with a blank drawing pad and a set of symbols indicating different types of floor, wall and ceiling compositions. The user would then use the mouse to draw out the floor plan of each room in the proposed structure, dragging and dropping materials symbols to each surface after its dimensions had been specified. Microsoft OpenGL standards and tools would be used to build and support these sophisticated graphics routines. As the user specified individual rooms and spaces, those rooms would be shown in a thumbnail graphic at the bottom left of the screen, showing the room-by-room synthesis of the entire structure, level by level. Once the graphic depiction of the structure was complete, the thermal calculations would be executed. The engineering algorithms would be implemented as Java code within the applet on the client's machine, but several queries to the server database would be needed during this process. These queries would provide the needed climatological and materials properties data for the specified structure and its location. This process was anticipated to require about 30 seconds under normal Internet loading conditions, so the design specified provision of a sliding progress bar to keep the user informed.

This architecture design was projected for completion on January 22, but was actually finished on Tuesday the 20th. Leeson reported that happy event in Thursday's update meeting to management, with smiles all around.
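The thermal calculations themselves rest on the standard steady-state conduction relationship: heat loss through a surface equals its area times the indoor/outdoor temperature difference divided by the total thermal resistance of the assembly. A minimal Java sketch of that relationship, using placeholder resistance values rather than Estitherm's validated engineering data:

// Sketch of a basic steady-state conduction calculation.
// The numeric R-values are placeholders, not Estitherm's validated data.
public class HeatLoss {

    // Heat loss in watts through one surface: area in square metres,
    // temperature difference in degrees C, rTotal in m^2*K/W
    // (the sum of the resistances of the layers in the assembly).
    static double surfaceLoss(double areaM2, double deltaT, double rTotal) {
        return areaM2 * deltaT / rTotal;
    }

    public static void main(String[] args) {
        // Example wall: 10 m^2, 30-degree indoor/outdoor difference,
        // a brick layer plus insulation (placeholder resistances).
        double rBrick = 0.1, rInsulation = 2.0;
        double watts = surfaceLoss(10.0, 30.0, rBrick + rInsulation);
        System.out.println("Estimated loss: " + watts + " W");
    }
}

Summing this figure over every wall, floor and ceiling in every room, for both the heating and cooling seasons, is exactly the repetitive work the product was meant to automate.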
On Time, on Budget!
Similar successes were experienced with the Interface Design, Engineering Algorithm Design and Server Build phases of the project. All three of these project subcomponents met their time and cost targets within 5%. Vijay Rai led the interface design. He started with a series of two-hour meetings with contractor and architectural design personnel in Marks's architectural design firm. They had agreed to participate in the design of the product in exchange for a free perpetual license to the finished product. After determining together what the initial design screen should look like, Rai used Visual Basic to quickly build semi-working prototypes of the screens for user reaction, comment and redesign. This process quickly netted a usable, slick interface dialog with which all the users were well-pleased.

Leeson himself took the lead in getting the algorithm design complete because his undergraduate background was in mechanical engineering. He worked for two days with an HVAC (Heating Ventilation and Air Conditioning) consultant from Black and Veatch Ltd. to be sure that he understood the basic heat transfer equations, and finished the project working with Vicky Johanssen over a period of about three weeks. The result was
a complete set of validated thermal properties relationships that could then be included into the Java applet. Meanwhile, Cynthia Davis, who was not yet needed on the Java portion of the project, ordered two high-end PCs, loaded and configured Windows NT 4.0 Server on both machines, and then configured IIS to support the client development and testing. Two parallel systems were created for reliability. By mid-March of ’98, work on Estitherm was proceeding on track, and it appeared that LXS had a winner. Cynthia Davis, the J++ toolsmith, had installed the Java development tool on all four of the Java programmers’ PCs, and they had successfully completed several small Java test projects from the technical training course they had recently taken. Coding work on the Estitherm applet began in earnest in late March.
Java R (Not) Us
Java programming started out beautifully. Initially, the team focused on writing and testing code to extract user input from the forms. Next, the database queries to the server were coded and tested, and all went smoothly thanks to the easy interfaceability of the ODBC routines. Trouble came suddenly, however, with the graphics routines. The goal was to initially present the users with a room design panel that started as an empty rectangle. The user could then drag the rectangle's lines in any way necessary to specify the desired shape of a room, and the dimensions would move alongside each line dynamically. After the layout for a room was complete, the user would use simple mouse manipulations to specify the construction characteristics of all surfaces. Programming these graphics routines in Java proved much more difficult than had been expected. The programmers' learning curve for the difficult vector graphics programming techniques was quite steep. Once the programmers had developed good proficiency, though, the programming process was still very slow because of the inherent complexity of what they were trying to do in the application, and the large quantity of program code necessary to do so.

In the April 9 progress meeting, Leeson mentioned that the graphics work was difficult, but expressed confidence that learning effects would enable them to catch up. The following week, in the April 16 meeting, he decided to come clean and confess to Essen and the President that they were two weeks behind at only the third week of graphics programming. No dramatic improvement was anticipated. All concerned had seriously underestimated the difficulty of the graphics programming.

As a possible fix, the entire project team dropped what they were doing and met for two hours on the morning of April 17 to explore whether some different, less graphics-intensive user interface might be employed. This possibility was dispatched quickly, though, because it would require users to enter room dimensions numerically, manually, and made it nearly impossible to account for irregularly shaped rooms. Since Estitherm's goal was to attract customers and win goodwill for the firms who provided it for use on the Web, it was decided that a clunky user interface that angered users would not be acceptable.

Searching for a solution, Essen and Leeson met with all four Java programmers the following Monday. They quickly decided that the only way to finish anywhere close to schedule was to hire more help. The programming task and technology were well understood; they simply needed more hands to get it done fast enough. Since coding productivity was running about one-third of that planned, the three Java programmers needed to become nine programmers. Rather than the projected 1,280 hours needed for this phase of the project, roughly 3,840 hours were needed, an increase of some 2,560
hours. At a labor rate of $40/hour, this would add $101,600 to the projected cost of Estitherm, a 42% cost overrun. These numbers were presented to Bartlett and Whitsell who, to everyone’s incredible surprise, agreed immediately. They later confided that they secretly double-budget almost all projects, based on long and rugged experience with software project estimating failures, and were very confident that Estitherm would be a success in the marketplace. To them, a 42% overrun was a small one.
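The vector-graphics work that caused the slip centered on editable room outlines whose wall dimensions update as corners are dragged. A stripped-down sketch of just that underlying geometry, with no rendering or mouse handling and with illustrative names only, might look like this:

import java.awt.Point;

// Geometry only: a room outline held as an ordered list of corner points.
// In the real applet, dragging a corner would call moveVertex and then
// redraw each wall with its recomputed length.
public class RoomOutline {
    private Point[] corners;

    public RoomOutline(Point[] corners) {
        this.corners = corners;
    }

    public void moveVertex(int index, int newX, int newY) {
        corners[index] = new Point(newX, newY);
    }

    // Length of the wall from corner i to the next corner, in drawing units.
    public double wallLength(int i) {
        Point a = corners[i];
        Point b = corners[(i + 1) % corners.length];
        double dx = b.x - a.x;
        double dy = b.y - a.y;
        return Math.sqrt(dx * dx + dy * dy);
    }
}

Even this much covers only the data model; the hit-testing, redrawing and irregular-room handling that sit on top of it are the parts the team found so time consuming.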
Into the Marketplace
Essen, then, had received authorization to “add capacity” immediately. He contacted several Toronto consulting firms looking for Java programmers who were also familiar with graphics and OpenGL programming. The skills existed, but the lowest consulting billing rate began at $140/hour CDN, clearly an unacceptable number, and even at that rate, it would be at least a three-month wait until six people could be available. Calls to placement firms followed, with the grim news that Java programmers were simply nonexistent in the marketplace. While the staff of three slogged forward on the graphics programming, making steady but slow progress, Essen continued trying to hire more talent. In desperation, he expanded his search to include newspaper ads in major U.S. and Canadian cities, visits to recruiting fairs at McGill University in Montreal (a strong source of computer scientists) and even discussions with colleges about hiring Java programming interns. Obviously, LXS had run squarely against the IT skills shortage — this problem they had been hearing about in the media was real, quite tangible and was directly frustrating their company’s strategic product development plan.
Vacation Time
In mid-April, Chief Scientist David Whitsell left with his wife for their annual escape to warm weather. By this time of year in Toronto, the snow has been on the ground for six continuous months and patience is at an end. Whitsell was worried about leaving in the midst of the Estitherm project crisis, but knew that all that could be done was being done. This year, instead of their usual destination of Key Largo, Florida, Whitsell’s wife had booked a week in the Caribbean on the British Commonwealth Island nation of St. Lucia. They had booked to stay at Le Sport resort on the north end of the island, so their arrival at the Hewanorra Airport on the south end afforded them an interesting taxi tour from one end of St. Lucia to the other. The early part of the cab ride met their expectations of a slow, sleepy, palm-covered island paradise — hilly roads that later gave way to lush and incredibly green banana plantations in the central part of the island. Driving through the main city of Castries was also stereotypical, and the slow traffic and street vendors were no surprise. Emerging from the north side of town, though, David was amazed to see what appeared to be the early stages of a developing technological industry. There were commercial computer contractors, networking vendors, and they even passed one secure bunker-like building ringed with barbed wire and peppered with satellite antennas.
St. Lucia Development Initiatives
What Whitsell and his wife viewed on their ride to Le Sport was far from their expected stereotype of a stagnant, backward tropical nation. Table 1 lists some of the computer-related organizations actually operating in St. Lucia in 1998.
Table 1. Real-world IT-related businesses operating in St. Lucia, 1998

ISIS World, Wm Peter Boulevard, Box 1000, Castries, St Lucia. 451-6608
Nicholas Institute of Computer Literacy, Cadet St, Castries. 453-7754 (also at Louisville and Vieux Fort, 454-7757)
Caribbean Computer Literacy Institute, John Compton Highway, Box 3097, La Clery, Castries. 451-3030
Institute of Self Improvement Systems Ltd, Gablewoods Mall, Sunny Acres, Castries. 452-1300
CES, San Souci, Box 1865, Castries. 453-1444; 453-555, Fax 452-1558
Micoud Computer Learning Center, 32 Lady Micoud St, Micoud. 454-0556
University of the West Indies, Morne Fortune. 452-6290, 452-3866
University of the West Indies, School of Continuing Studies (UWIDITE), Box 306, Castries. 452-4080
Computer Centre Ltd, Hill Twenty, Babonneau, Box 1092, Castries. 453-5560, Fax 450-6199
MainLAN Ltd (network administration, consultation/documentation), PO Box 346, Castries, St Lucia. [email protected]
Business & Technical Services Ltd (GBTS Ltd), 49 Mary Ann St, Box 1829, Castries. 452-4564, Fax 453-1727
The "bunker" Whitsell passed is part of a technology incubator project jointly sponsored by The World Bank and the St. Lucia National Development Commission aimed at attracting information technology services business to the country (SLNDC, 2000). The following specific initiatives have been taken on St. Lucia to attract IT:

• Construction of a 20,000 sq. ft. facility specially designated as an information processing center. The facility is air-conditioned and can be modified to specification. The structure is divided into four sections, with telecommunication lines all the way up to the doorsteps.
• Negotiations with Cable & Wireless, the St. Lucia telecommunications carrier, to reduce telecommunications rates specific to this industry, resulting in an agreement that rates will be consistent with the more competitive rates in the region.
• Identification of schools interested in training personnel in the applications needed for the IT industry. In addition, St. Lucia has established a government-subsidized training center and maintains a database of potential employees so that they can be easily identified.
• New legislation has been passed to facilitate easy set-up of information services-related businesses.
• SLNDC has performed identification of local individuals and companies interested in joint venture partnerships with potential investors.
This project was partly funded by $6 million of financing targeted at telecommunications infrastructure improvement in the Eastern Caribbean (Schware & Hume, 1998; The World Bank Group, 1998). The grant project (Schware & Hume, 1998) included funding for vouchers to partially fund training of selected qualified students. It is clear from this project that St. Lucia understands fully the potential of offshore programming (O.P.), has elected to invest significantly in creating attractors to industry and has chosen to attempt to capitalize on the seed investment from this grant. St. Lucia's project is part of a larger multi-nation effort to attract investment to the Eastern Caribbean that has been spearheaded by the Eastern Caribbean Investment Development Service (2000). Headquartered in Washington DC, this agency of the Organization of Eastern Caribbean States promotes offshore business, and information processing specifically, for Anguilla, Antigua and Barbuda, British Virgin Islands, Commonwealth of Dominica, Grenada, Montserrat, St. Kitts and Nevis, St. Lucia, and St. Vincent and the Grenadines. ECIPS emphasizes political stability, quality labor force, English-speaking tradition, proximity to the U.S. and alignment with U.S. time zones, offering a range of incentives including tax holidays and duty-free entry of equipment and raw materials.

Whitsell and his wife enjoyed a couple of days of doing nothing on the beach. But as a high-energy CEO-type, he quickly grew bored and rented a car from the resort so he could try to satisfy his curiosity about the IT activities he saw in town. One thing led to another, and he found himself the next day meeting with a government vice-minister of technology, who explained the incubator project, with a focus on St. Lucia's desire to cultivate an offshore programming industry.

The minister explained that the overall economic picture in the Caribbean is one of stagnation or of very slow growth. Most of the countries exhibit "economic dualism", where a modern economy is superimposed upon a less advanced system held over from plantation days. Oil and sugar prices are low, the cost of maintaining those oil and sugar infrastructures is high, tourism is capricious and not fundamental to economic growth, urbanization is imposing ever-increasing social costs and major investments in manufacturing are not occurring. As a result, nearly all of the educated, ambitious youth of the region are leaving to pursue the superior professional employment opportunities in other parts of the world, most especially in England, Canada and the U.S. For example, in relation to the resident population, the overseas population at the end of the 1980s stood at 40% for both Jamaica and Guyana, 23% for Puerto Rico, 21% for Trinidad and Tobago, 15% for Haiti and 10%
for Cuba (Girvan, 1997). There is an almost desperate need to find a way to stimulate economic growth in the Caribbean if the brain drain and downward spiral of these economies is to be arrested. Good IT technology training is available in St. Lucia and other Caribbean nations, but the students can’t find work and, understandably, depart to developed nations. Whitsell was then given a tour of the incubator facility. He was surprised to find very high-speed Internet access, hardware equivalent to that at LXS, student programmers with excellent advanced technology skills (including Java (!)), immediately available programmers and a wage structure less than one-third of that in Toronto. He ended his vacation visit to St. Lucia with a promise to return soon to try to construct a Java programming contract arrangement.
Striking a Contract
Immediately upon returning to Toronto, Whitsell called a meeting of the president, the LXS Development Manager Essen and several members of the Estitherm technical project team. They responded enthusiastically to the prospect of getting assistance from St. Lucian programmers, and literally drew straws to determine who would be lucky enough to accompany Whitsell and Essen on a trip to St. Lucia. The following week, Whitsell, Essen and straw-winner Sharon Difford, Estitherm specifications in hand, met with several managers and programmers of a small software contracting firm in the incubator on St. Lucia. Contract terms were agreed to and papers signed for a "time-and-materials" arrangement at a rate of $16/hour (CDN). Most coordination would be accomplished via the Internet, and the Java code itself could be sent to LXS electronically. Difford stayed behind one more week to coordinate, as seven St. Lucian programmers began coding work on the graphics portion of Estitherm. One of the programmers, Ernest Millston, was a 15-year-old high school student, and Difford was particularly amazed at his skills and energy.
A Clash of Cultures?
LXS programmers were initially concerned about possible cultural differences between their approach to work and that of the St. Lucians. Their stereotype was that people in “that” part of the world are lazy and move slowly, consistent with the universal “No problem” epithet. Most of the stereotype proved, fortunately, to be incorrect. The St. Lucians were agreeable and responsive to inputs from the Toronto-based programmers. After the first week there was, however, a clear indication that the pace of everything on the island is much slower than in the North, and the Canadians had trouble communicating and sustaining a sense of project urgency to the St. Lucians. After a bit of trial and error, Essen found that frequent reminders seemed to work. After each prod, the St. Lucians would accelerate, then on about the third day, again begin to lag. Another prod would yield another surge followed by deceleration. While somewhat frustrating, this arrangement worked and “kept the Caribbean programmers going”. In the project postmortem meetings, a key item in Leeson’s “lessons learned” list was to put one Canadian in place on the island for the project duration in order to keep the work pace high on a daily basis.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
382 Howard
Project Completion
The arrangement worked. Toronto programmers and the St. Lucians divided up the work on the Java graphics modules, working out rough spots via e-mail, Internet chat and an occasional phone call. To the regret of all the Canadian programmers, there was no need to make another coordination trip to the Caribbean — the Internet-based communication was adequate. Java programming was complete on August 7, only about three weeks behind the original schedule, and at a very attractive total cost. Rather than the expected $101,600 overrun associated with using Canadian Java programmers (who were nonexistent in the marketplace), the Java portion was completed with only a $40,960 direct labor cost overrun, thanks to the St. Lucia connection. The balance of the Estitherm project milestones were achieved close to targets, and the system was converted to production only about a month late.
CURRENT CHALLENGES/PROBLEMS
Offshore programming is not a new practice. Indeed, arrangements wherein U.S. firms contract with software developers and technicians in India, Ireland and Pakistan have been in place since about 1985 (Heeks, 1995). This arrangement is mutually beneficial because it provides much-needed employment in the "offshore" nations, improves their balance of foreign credits and aids customer firms in completing stalled or behind-schedule software projects at attractive labor rates (King, 1999). This "offshore programming" activity is now greatly facilitated by the Internet because the product itself is information, which can move about the world with no delay and at no cost. Software specifications can be sent to contractors, and the resulting software products sent back to purchasers with complete ease. For example, Levi-Strauss, based in San Francisco, contracts for programming services with Cadland Infotech Pvt Ltd., Bangalore, India (Cadland Ltd., 2000). Offshore programming is place-displacement work at its best. Table 2 summarizes offshore programming activity worldwide.

In the Caribbean, however, the offshore programming industry is small, and is struggling for recognition and a way to build business volume. The most visible offshore programming effort in the Caribbean is centered in Montego Bay, Jamaica. Furman University, Greenville, SC, U.S., is involved with the Caribbean Institute of Technology (CIT), training teachers and leading curriculum design with the objective of building a training infrastructure to produce information technologists in Jamaica (Tracy, 1999). This effort, coordinated via HEART, a Jamaican government agency for technology training, recognizes the potential of offshore programming to stimulate the Jamaican economy, and is pursuing that opportunity aggressively. Per capita income for the 2.6 million residents of Jamaica is only $6,000, and the 45% unemployment rate (Davidson, 1999) characterizes the desperateness with which the economy needs opportunities such as O.P. INDUSA Offshore, an Indian software company with offices in Atlanta, is seeking programming customers in the U.S., and an initial contract has been arranged with Realm Information Technologies (Atlanta). Realm is currently contracted for thousands of hours of programming work with Indian software firms, and will shift much of that business to Jamaica as soon as the training is complete and the first crop of Jamaican programmers comes online. Edward's Fine Foods, the Southern Company (an electric utility) and Centris Insurance are also carefully studying signing on as customers
Table 2. Dollar value of offshore programming exports by nations (Heeks, 1995)

Country        Year of Data   Exports (USD, $M)   Growth Rate
Ireland        1990           185                 38%
India          1990           120                 34%
Singapore      1990           89                  43%
Israel         1990           79                  39%
Philippines    1990           51                  32%
Mexico         1990           38                  30%
Hungary        1990           37                  53%
Russia         1993           30                  N/A
China          1990           18                  43%
South Korea    1990           15                  40%
Taiwan         1987           11                  48%
Egypt          1994           5                   N/A
Argentina      1990           4                   N/A
Chile          1990           2                   98%
Cuba           1993           1                   40%
(Davidson, 1999). There are other early efforts at building offshore programming activity in Barbados, Trinidad and Tobago, Cuba, Antigua and St. Lucia. The St. Lucian effort, described earlier, has resulted in construction of a sophisticated telecommunications facility to serve as a nucleus for offshore programming on that island. It is in this facility that work on the fictional LXS programming contract began.

The contact between LXS and St. Lucia occurred through serendipity. Had not Whitsell's wife booked a vacation to St. Lucia, this arrangement would not have come about, to LXS', Estitherm's and St. Lucia's mutual loss. Much is to be gained by finding ways to bring together companies like LXS, who desperately need additional IT skills, with nations of the Caribbean, who possess the skills and badly need the revenues. What is needed to promote these mutually beneficial offshore programming arrangements? Both LXS and St. Lucia were faced with significant challenges. LXS' strategic product development plans were being frustrated by a scarcity of Java programmers. St. Lucia, and the Caribbean generally, is frustrated by the loss of college-trained technicians to other nations because there is no work for them at home.

Clearly, the main driver of O.P. is a supply-demand imbalance on the world IT skills market. One nation, usually a more developed one, has a shortfall of available IT skills, and another nation, usually a Third World one, has a skills surplus. Thus, from a macro perspective, O.P. activity can be viewed as a classic economic market-clearing operation, matching demand and supply at some agreed price. If this supply-demand imbalance ever disappears, the entire economic rationale for O.P. is also likely to disappear.
Figure 5. Proposed supply/demand model for offshore programming

Demand catalysts: awareness of the possible availability of offshore programming services; executive support for offshore programming; backlog of programming projects; price advantage of offshore programming; availability of non-mission-critical programming projects; packagability of programming projects; existence of specification packagers; company engaged in active searching.

Market-clearing channels: brokers; placement firms; the Internet; international consulting firms.

Supply catalysts: seats available in formal programming training programs; number of graduates of formal programming training programs; number of industry certifications in the country; skillsets available; depth of expertise in each skillset; availability of Internet infrastructure; number of IS consulting firms in existence; existence of capability packagers; programming providers engaged in active searching; proactiveness of government support.

The catalysts feed, respectively, the demand for and the supply of offshore programming services, which are matched through the market-clearing channels.
What factors influence the demand for offshore programming services such as those needed by LXS? What prompts a nation such as St. Lucia to try to establish a supply of offshore programming services? How do these buyers and sellers find each other? How will companies all over the globe benefit from an understanding of the kinds of offshore programming services available in the Caribbean and elsewhere? What systematic methods should be put in place to help customers and providers strike arrangements such as that at LXS?

Figure 5 shows a proposed model of factors and effects that may influence the origination of offshore programming arrangements such as the one that benefited LXS. This is a hypothetical model, deduced from real-world experiences, and is offered to spark discussion about all aspects of O.P. activity.

Demand for programming services must exist in excess of the domestic supply. As mentioned earlier, it is this imbalance that drives O.P. activity. Demand requires a backlog of IT work in the services-purchasing nation, a price differential between domestic and offshore programming services, and strong executive support for the concept in corporations in the purchasing nation. Because many executives are uncomfortable entrusting mission-critical projects to the as yet only partially developed O.P. industry, a supply of projects that are important but not survival-critical can be expected to heighten the demand for O.P. Finally, projects that are well-specified, tightly bounded in scope and cleanly packaged are much more likely candidates for being contracted overseas. In situations where either there are few well-packaged projects, or where managers simply do not have time to "shop" the world for O.P. contractors, specification packager intermediary firms can catalyze O.P. activity. These firms, which can be domestic or overseas, intervene in the early stages of an O.P. activity, working with the buyer to clearly package and bound a project and then to find an appropriate overseas contractor.
Only a few "specification packaging" organizations exist at present, and as more evolve, O.P. activity can be expected to grow. At the center of the model are the market-clearing mechanisms that might unite buyers and sellers. At present, much of this is done by large consulting firms, such as KPMG, Accenture and others that are building practice areas in O.P. In addition, a small but growing number of brokerage houses are coming into existence for the sole purpose of pairing buyers of O.P. service together with suppliers. These firms also often assist in the construction of contract terms, the monitoring of project activity and provide troubleshooting support as needed. On the supply side, one can also envision a small catalyst company that functions as a "capability packager", fronting for one or more offshore programming firms, or for entire nations. These packagers would have current information on the technical specializations, supply depth and performance records of O.P. firms in supplier nations and thus lubricate the process of locating buyers for services. No such companies have yet been identified, but as O.P. practice grows, the birth of "capability packagers" is hypothesized by the model. Finally, at the right-hand side of the model, supply of O.P. services is obviously a function of the number, quality and productivity of educational programs in IT, the alignment between skillsets needed and skillsets available, connectivity to a particular nation and the attitudes of the host nation governments. Governmental imposition of tariffs on transnational data flows, for example, could greatly inhibit O.P. activity in a particular nation. Critique this model. What factors in the model seem to be credible? Which do not make sense? What would you add to this model? The LXS experience was a successful one. What can go wrong in offshore programming activities? List and discuss the pitfalls for both buyers and sellers of O.P. Are these risks strategic, in that they are potential showstoppers for O.P. as a global activity? Are there legal and public policy steps that can be taken to minimize these risks? What agencies, national and global, should take an interest in assuring the success of O.P.? Research can answer many of the open questions about offshore programming. Work is needed to characterize the kinds of conditions within a software development company that are most likely to prompt a search for an offshore programming solution. More knowledge is needed about the various contracting structures that are employed between O.P. buyers and sellers, and which are most successful. Research on the market-clearing mechanisms is needed — are brokers required, and do brokerage arrangements lead to successful contracts? Are there privacy, tariff, tax or public-policy issues for nations wishing to build O.P. activity? There is no shortage of unanswered questions.
CONCLUSION
LXS' story is typical. Organizations all over North America are experiencing acute shortages of trained information technologists. More than just an inconvenience, these shortages are preventing the timely completion of projects that are often central to strategic organizational goals. In some of these situations, affordable overseas pools of IT talent can be a viable solution, but only if the companies know to look overseas for
help. LXS got lucky — they found a source of help in St. Lucia by accident, and were then clever enough to take advantage of that happy accident to complete their project. This is a key message — there is help for beleaguered managers of software projects, but the provider nations must make their availability much more widely known. IT activity is rapidly “going-global”. Today, offshore programming and contract activities such as this LXS project are isolated exceptions. Unless unexpected showstoppers appear, there is every likelihood that systems project teams will routinely and predominantly span national boundaries in the near future. IT managers need to prepare now for the cultural, language, security, economic and quality issues that will accompany the global development teams of the future.
REFERENCES
Blumenthal, H. (1998). Ready to get your degree in IS? Netscape Enterprise Developer. Retrieved from http://www.ne-dev.com/ned-01-1998/ned-01-enterprise.t.html
Davidson, P. (1999). Jamaica's silicon beach? USA Today. Retrieved from http://usatoday.com/life/cyber/tech/ctg112.htm
Eastern Caribbean Investment Development Service. (2000). Retrieved from http://www.ecips.com/descript.htm
Girvan, N. (1997, February). Societies at risk? The Caribbean and global change. Paper presented at the Caribbean Regional Consultation on the Management of Social Transformations (MOST) Program of UNESCO, Kingston, Jamaica. Retrieved from http://mirror-japan.unesco.org/most/girvan.htm
Heeks, R. (Ed.). (1995). Technology and developing countries: Practical applications, theoretical issues. London: Frank Cass & Co. Ltd.
King, J. (1999). Exporting jobs saves IT money. Computerworld. Retrieved from http://www.computerworld.com/home/print.nsf/all/990315968E
President's Information Technology Advisory Committee (PITAC). (1998). Interim report to the President. National Coordination Office for Computing, Information and Communications. Retrieved from http://www.ccic.gov/ac/interim
Schware, R., & Hume, S. (1998). Organization of Eastern Caribbean States Telecommunications Reform Project (Project ID 60PA35730). The World Bank Group. Retrieved from http://www.worldbank.org/pics/pid/oecs35730.txt
St. Lucia National Development Commission (SLNDC). (2000). Information services industry. Retrieved from http://www.stluciandc.com/info.htm
The World Bank Group. (1998). World Bank finances telecommunications reform in the Eastern Caribbean (News Release No. 98/1798/LAC). Retrieved June 4, from http://www.worldbank.org/html/extdr/extme/1799.htm
Tracy, A. (1999). Pushing to put Jamaica on the high-tech map. Business Week Online. Retrieved from http://www.businessweek.com/bwdaily/dnflash/july1999/nf90719a.htm
FURTHER READING
Caribbean Group for Cooperation in Economic Development (CGCED). (1998). Workers and labor markets in the Caribbean. The World Bank Group. Available at http:// wbln0018.worldbank.org/external/lac/lac.nsf/c3473659f307761e852567ec0054ee1b/ a7df291a495614df852567f20063b23f?OpenDocument Davidson, P. (1999). Jamaica’s silicon beach? USA Today. Available at http:// usatoday.com/life/cyber/tech/ctg112.htm Girvan, N. (1997, February). Societies at risk? The Caribbean and global change. Paper presented at Caribbean Regional Consultation on the Management of Social Transformations (MOST) Program of UNESCO, Kingston, Jamaica. Retrieved from http://mirror-japan.unesco.org/most/girvan.htm Heeks, R. (Ed.). (1995). Technology and developing countries: Practical applications, theoretical issues. London: Frank Cass & Co. Ltd. Heeks, R. (1996). India’s software industry: State policy, liberalization and industrial development. India: Sage Publications. McKee, D., & Tisdell, C. (1990). Developmental issues in small island economies. New York: Praeger Publishers. Schware, R., & Hume, S. (1998). Organization of Eastern Caribbean States Telecommunications Reform Project (Project ID 60PA35730). The World Bank Group. Available at http://www.worldbank.org/pics/pid/oecs35730.txt
Geoffrey S. Howard studies offshore programming in small island nations, telecommuting, computer anxiety and the diffusion of innovation. He accumulated 15 years of industry experience in electrical engineering before coming to Kent, and has published in Decision Sciences, Communications of the ACM, The Computer Journal, and other journals. He was selected twice as one of the top 10 teaching professors at Kent State University, and has been awarded numerous Mortar Board prizes for teaching excellence. He was winner of the Paul Pfeiffer Award for Creative Excellence in Teaching. Dr. Howard is a registered professional engineer in the State of Ohio.
This case was previously published in F. Tan (Ed.), Cases on Global IT Applications and Management: Successes and Pitfalls, pp. 31-55, © 2002.
Chapter XXIV
IT in Improvement of Public Administration Jerzy Kisielnicki Warsaw University, Poland
EXECUTIVE SUMMARY
Bialystok City Hall is an organ of public administration. The city of Bialystok has 280,000 inhabitants. As a result of the political transformation in Poland, the new authorities inherited a bureaucratic and inefficient management system as well as outdated IT. In the electoral programme for 2000-2004, the following objectives were set for the City Hall: to significantly improve the quality of operations and, in particular, to reduce the time needed to handle citizens' affairs; to provide comprehensive and professional customer service; and to improve the management of assets. In order to improve the City Hall management system, re-engineering and TQM rules have been applied. The new management system has been based on new IT solutions, including an extranet network and an integrated database. As a consequence of those changes, some significant results have been achieved, for example, an improvement in the quality of customer service and the possibility to monitor the City Hall's operational procedures. The vital result, however, was a reduction of decision-making time by an average of 30% and a reduction of routine affairs handling time by an average of 25%.
BACKGROUND INFORMATION ON THE PROBLEM
The case regards the role of IT and its application in improving the quality of operations of the Bialystok City Hall, which serves one of the biggest cities in Poland and the regional capital of the Podlasie region. It is based on experiences gained during the development of the IT system (MIS) for public administration purposes.
The basic objectives of the presented case, besides training purposes, are:
• To prove that the improvement of the public administration management system can be achieved only through IT.
• To show that the application of IT allows, for the sake of improvement of the management process, the use of such advanced organisational methods as re-engineering and TQM.
Most of the existing analyses relate to re-engineering and TQM application in business organisations. Here, our objective is to prove that they can be successfully applied to improve public administration operations. Within the Polish public administration, there is a three-level system of management, that is, a voivodship level (Poland is divided into 16 voivodships), a county level and a gmina level. Bialystok is the capital of Podlaskie voivodship. It is located in the Northeast part of the country. It has about 280 thousand inhabitants. The Bialystok City Hall is in charge of, among others, public finances, public health care, public security, as well as public education and transport. The organisational structure of the City Hall before the organisational transformation is presented in Appendix 1. In 2001 (according to the plan), the Bialystok City Hall will have at its disposal a revenue of 488,676 thousand zloty, while the projected expenses amount to 543,051 thousand zloty (1 USD = 4.02 PLN, according to the National Bank of Poland exchange rate of April 18, 2001). The analysis of the Bialystok City Hall management system conducted in 1998 exposed the following: the IT system in use is very much outdated, there are numerous gaps to be filled and the existing IT resources are not being used appropriately. At the time of the analysis, all the data had been traditionally gathered on paper or on independent computers not connected into a network. This situation complicated the City Hall's operations and made them very difficult. IT in the form of a PC had only been used as a tool to write letters and regulations. It was also used to access very simple databases. In consequence, there was no integrated IT system to service the Bialystok City Hall. Thus, the analysis concluded that such an integrated IT system was vital in ensuring an efficient flow of data and documents between the City Hall's organisational units and is also of utmost importance for overall citizen (customer) services. There had been no unification of data in the field of a diversified environment of information protection either. The analysis of the City Hall organisational system showed enormous diversity in the management system as such; 12 people or organisational units reported directly to the City President, while there were only two or three people reporting directly to some members of the City Board. (The literature on the subject recommends five to seven people or units as an optimum for those managerial levels.) The city inhabitants had been grossly dissatisfied with the City Hall's work. Their dissatisfaction was documented by:
• Numerous complaints on the length of time spent to handle various affairs;
• Long queues in front of individual desks;
• Critical articles in the local press on the City Hall's work as well as on individual departments and the people responsible for an efficient working system;
• The fear of the party coalition in power as to the results of the coming elections (the coalition took part in the previous elections under the banner promising to improve the existing management system in the city).
SETTING THE STAGE
In 1998, the newly appointed local authorities, in order to improve the Bialystok City Hall operations, began their work to change the existing management system. The statement made by one of the party leaders, "If we do not improve the City Hall operations, we may not survive until the next elections", best illustrates the importance of the problem. On the basis of the users' needs analysis, which included the City Hall authorities, clerks and Bialystok inhabitants, it was concluded that a new management system should be based on the options provided by IT and should meet the following criteria:
• Improvement of the City Hall organisational structure and management methods in the aspect of an integrated IT system for the entire City Hall with clearly defined hierarchy and links between all the organisational units;
• Efficient flow of information in the City Hall within the newly defined organisational structure;
• Diversified quality and safety of servicing the institutions in which the IT system is being installed;
• Easy adaptation and an increase of the service functions of the IT system to meet increasing needs and requirements;
• Fulfilling open system requirements — the X/Open standard — which guarantee system compatibility of the existing and future hardware and software.
In order to improve the management system and in order to develop a new IT system, the following methodology was applied:
1. Re-engineering attitude supported by TQM methods. This attitude recommends sudden and significant changes. In order to introduce those changes, the management system is being analysed in terms of the following criteria:
• An increase of co-operation between individual organisational units of the City Hall;
• A reduction of intermediate stages in the task realisation process, that is, maximum elimination of indirect links;
• An integration of those organisational units which perform similar functions.
Thus, a typically processor-type attitude has been applied in this case. It focused on the improvement of the management system process. In order to significantly improve the quality of the citizen service, the re-engineering method was supplemented with the rules applied in the TQM method. The application of TQM methods results from the objective to ensure that the inhabitants receive well-justified decisions. It aimed at the reduction of the number of appeals.
2. Integration of computer systems with the IT methods. Before the choice was made, several variants of IT solutions had been considered. The basic variants were as follows:
• Improvement of the existing computer system, that is, an extension and modernisation of the existing PCs and linking local databases through a local area network.
• Construction of an extranet-type network connected to the Internet and winding up local databases in favour of one major database.
The analysis of cost and results of individual variants was extremely difficult, mainly due to problems with estimating the results of the IT application in public administration. It was also difficult because of the existing regulations regarding cost registration, which are still not adjusted to the management accounting system requirements. On the basis of the existing and available data and estimates of both cost and results, it was concluded that the first solution requires about 60% less investment resources than the second one. However, the conducted SWOT analysis showed clearly that the second solution offered more prospects for the future and could ensure more feasible realisation of the electoral postulates. This attitude was supported by the recommendation to create, subject to available financial resources, data warehouses. A data warehouse is treated as a complete repository of data created on the basis of the transaction systems already in existence in the City Hall and on the basis of the outside IT systems such as banks, the statistical office and public records. The choice of solutions which make it possible to benefit from the data warehouse is justified by the fact that it ensures immediate access to the information required by the user. (A simplified sketch of such a data warehouse load is given after this list.)
3. Creation of the Function Centres within the City Hall organisational structures. This attitude is similar to the methods applied in business organisations where Profit Centres have been created. Mintzberg, who talks about creating the so-called Hubs, also recommends a similar attitude.
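The data warehouse idea in point 2 above can be made more concrete with a small example. The sketch below is only an illustration of the kind of load-and-query job such a repository implies; the table name, columns and sample rows are invented for the example and are not taken from the Bialystok project documentation.

```python
import sqlite3

# Illustrative only: schema and data are invented; the case does not document
# the actual warehouse structure used by the Bialystok City Hall.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fact_case (
                  case_id TEXT, centre TEXT, case_type TEXT, days_to_decision INTEGER)""")

def load_extract(centre: str, rows: list[dict]) -> None:
    """Load one extract from a transaction system into the warehouse fact table."""
    conn.executemany(
        "INSERT INTO fact_case VALUES (:case_id, :centre, :case_type, :days)",
        [dict(r, centre=centre) for r in rows])

# In practice each extract would come from a source system (City Hall registers,
# banks, the statistical office, public records); here two rows stand in for it.
load_extract("Centre for Social Affairs", [
    {"case_id": "SA-101", "case_type": "passport", "days": 12},
    {"case_id": "SA-102", "case_type": "driving licence", "days": 9},
])

# The pay-off of the warehouse: immediate answers to monitoring questions,
# e.g. average decision time per centre, without touching the source systems.
for centre, avg_days in conn.execute(
        "SELECT centre, AVG(days_to_decision) FROM fact_case GROUP BY centre"):
    print(centre, round(avg_days, 1))
```

The point of the sketch is the separation it illustrates: the operational systems keep doing their work, while monitoring questions are answered from the copied, integrated data.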
CASE DESCRIPTION
The Procedure of Introducing Changes (Basic Phases)
The changes which aim at the improvement of the Bialystok City Hall management system have been developed in the following phases:
1. Defining the problem.
2. Analysing information needs of the City Hall.
3. A project of the new management system for the Bialystok City Hall.
4. A project of the IT system to support the improved management system.
5. Implementing changes and change evaluation.
Description of the Individual Phases of the Procedure
The issue of the improvement of the Bialystok City Hall management system presented in "Setting the Stage" was determined on the basis of co-operation between the project designers and the City Hall employees. The analysis of the City Hall's IT needs was an iterative process prepared on the basis of the following sets of documents:
1. Organisational documentation prepared by the Bialystok City Hall employees. The documentation covered, among others, analysis of the character of acts and resolutions and also the City Hall regulations.
2. Reports prepared by individual organisational units of the City Hall on the links of the specific organisational unit with other units of the City Hall and also on the citizen service system. The objective of the prepared materials was to identify and evaluate the level and strength of connections between the City Hall's individual organisational units. These materials served as the basis of the developed data flow diagrams (DFD).
The written materials were supported by interviews and discussions with the City Hall managers and the representatives of the individual City Hall units.
The analysis of needs, as previously stated, was conducted in an iterative way. In the first stage, the working hypothesis on information needs was developed and project tasks were determined on the basis of source materials delivered by the City Hall. Then, a number of interviews and discussions with the appropriate representatives of the organisational units were conducted. The data required to answer two vital questions was obtained:
• What type of information do you pass to other City Hall units?
• What type of information do you need to obtain from other City Hall organisational units in order to operate properly?
On the basis of this data, taking into consideration all the materials gathered previously, the appropriate conceptual models regarding individual operating procedures of the City Hall have been developed.
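As an illustration of how the answers to the two questions above can be turned into a first-cut data flow listing, the short sketch below groups declared "we pass X to unit Y" statements into an edge list, the raw material of a DFD. The unit names and flows are invented for the example; the actual diagrams of the project are those referred to in Appendix 4.

```python
# Illustrative only: units and flows are invented, not taken from the project files.
declared_flows = [
    ("Department of Citizen Affairs", "Finance Department", "fee payments"),
    ("Registry Office", "Department of Citizen Affairs", "civil status records"),
    ("Finance Department", "City Treasurer", "budget execution reports"),
]

# Group the declarations by (source, target) pair to obtain one DFD edge per pair.
edges: dict[tuple[str, str], list[str]] = {}
for source, target, info in declared_flows:
    edges.setdefault((source, target), []).append(info)

for (source, target), infos in edges.items():
    print(f"{source} -> {target}: {', '.join(infos)}")
```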
The Project of the New Management System for the Bialystok City Hall
The conducted analysis served as the basis to suggest a new organisational structure of the City Hall based on the functional centres. On the basis of the conducted analysis and the applied methodology, the project of an improved City Hall organisation takes the following shape:
• Centre for Securing the City Hall Operations
• Centre for Finance and Administration
• Centre for Social Affairs
• Centre for Social Infrastructure
• Centre for Technical Infrastructure and its Management
• Security Centre
Two other centres were proposed to be created in the future (2002-2004):
• Information Centre
• Centre for the City Development Strategy
I would like to stress that social problems are the main concern of two centres, namely the Centre for Social Affairs and the Centre for Social Infrastructure. It is a direct result of the fact that social issues are treated very seriously in Bialystok and also that the handling of current social problems is not connected to the issue of managing the resources allocated to this field. The list of organisational units within the new organisational structure is presented in Appendix 2. Appendix 3 presents mutual connections of the Centres. The new organisational structure approved by the appropriate City authorities has the following advantages:
• Concentration by the similarity of the performed functions allows close linking of the appropriate organisation. In consequence, it allowed development of an overall policy of the City Hall authorities and also ensured much quicker and more efficient citizen and organisation services.
• Even allocation of tasks, which ensures more effective monitoring of the citizen service system than the one existing so far.
Within the applied methodology, the changes are to be introduced constantly and the improvement of the management system is to be a constant objective. Thus, the recommendations for the future directions (for the period of 2002-2004) of the Bialystok City Hall organisational improvement have now been determined. These include:
• In order to meet the present and future City Hall information needs, it would be advisable to create a special organisational unit in charge of the overall introduction of information technology in the Bialystok City Hall. I would suggest creating, within the Centre for Securing the City Hall Operations, a department of the City Hall Information Services, which should later transform into the City Hall Information Centre in charge of the overall information flow within the City Hall, between the City Hall and the citizen, as well as between the City Hall and outside units including the Council and Gmina organisations. Improving the effectiveness of the local authorities will require much stronger assistance than the one provided at present.
• I also suggest creating a Centre for the City Development Strategy in the future. This suggestion results from the need to separate operational and tactical management issues from the strategic issues. The Centre will focus on the future model of the City of Bialystok through development of overall forecasts connected to such issues as public transport, education, health care, and so forth.
The new organisational structure is a very modest one. This is to be considered an advantage as no additional organisational units are being created except for those absolutely necessary in order to fulfil tasks undertaken by the City Hall.
A Project of the IT System to Support the Improved Management System
On the basis of the new organisation, the IT solutions suggested will support the new management system. The developed information system takes into account a new functional division of organisational units. It was based on the functional modularity of the system: each of the seven centres has been allocated an information system module marked with the same number. It means that such a module creates a unified group of functions supported by the computer processing and electronic exchange of data (EDI). The life span of the newly created IT system, due to fast ageing of the IT, is about eight years. The basic assumption in creating the system was an integration of data at the logical level. It ensures access to the unified data in all the utilised applications. As a result, the requirement of common hardware and software platform had to be fulfilled even in the environment of diversified data safety. The system also allows for a certain leeway for system modification and further development in line with new needs and requirements arising during the use of the IT system. The basic development tool for the analysis of the data flow diagrams in the system at the level of the introduced centres are DFDs developed in the Upper CASE IT tools style. These diagrams, presented in Appendix 4, made it possible to create a unified IT system.
Implementing Changes and Change Evaluation
The improvement of the existing City Hall management system along with the supporting IT system has been implemented in phases. At present, the basic IT modules have been implemented. These modules service individual centres, which are linked together through a MAN-type computer network called BIMAN. This network is connected to the Internet. There are extranet-type networks operating in the City Hall. The Steering Committee, headed by the City President's Attorney, is monitoring the implementation and development of the IT system in the City Hall. It can be assumed that the system, in its basic shape, has already been implemented and that from mid-2000 it has been operating. It is currently being developed and modified in accordance with the re-engineering rules. The vital rule to be followed within the implemented IT system was the requirement of a common hardware and software platform, where the software project adjusted to the proposed organisational structure and data flow must precede the computer hardware solutions regarding servers, workstations, structural wiring, and so forth.
Problems Facing the Organisation — Remarks on Project Realisation — Introducing Organisational Changes within the City Hall Organisation and Development of the IT System
Realisation of the project required close co-operation of many project teams. It is very difficult to determine the return on investment period, that is, ROI. In the public administration organisations, the most important results are those visible on the outside, that is, shortening of the customer servicing time. Those results have been estimated on the basis of specially designed questionnaires. There are no such categories as, for
example, profit or share value, in the organisation under analysis. Those categories exist only in business organisations. The investment outlays for the IT will be compensated by shortened decision-making time, ease of monitoring the activities and fast creation of work teams for complex problem solving with a parallel lack of arguments on competence and authority. The City Hall Management, after the first year of the system's utilisation, listed the following results as the most significant:
• Shortening of decision-making time regarding citizen issues, such as, for example, issuing a driving licence or a passport, probate matters, permits to build houses (the estimated time for consideration of those issues was shortened by up to 30%);
• Ease in monitoring the individual employee and team activities, which resulted in the reduction of claims by 20% in comparison with the previous period;
• Fast creation of work teams for complex problem solving with a parallel reduction of significant arguments on competence and authority.
It is believed that the success of project implementation also depended on:
• Training of the IT system users, which ensured correct usage;
• Work of the Steering Committee, which headed the project and directly monitored the works in progress at individual Centres and Departments (the role of a Steering Committee was played by the Computer Technology Department of the Bialystok City Hall).
The British experts from Cranfield School of Management estimated that, in the first half of the 1990s, more than 70% of the attempts to re-organise institutions by reengineering in Great Britain ended in failure. Why then has the project presented by us been a success? I think, it results from the fact that our project designers co-operated closely with the City Hall employees. However, we shall be able to talk about full success only when the IT project is fully implemented and tested. We can talk about such full implementation and testing not sooner than 2002-2004.
REFERENCES
Grochowski, L., & Kisielnicki, J. (1999). Reengineering in upgrading of public administration: Modelling and design. International Journal of Services Technology and Management, 1(4), 331-339.
Hammer, M., & Champy, J. (1993). Reengineering the corporation: A manifesto for business revolution. New York: Harper Business.
Hammer, M., & Stanton, S. A. (1995). The reengineering revolution. New York: HarperBusiness.
Kisielnicki, J. (1999). Reengineering: Problems with theory and practical application. In BIS'99: 3rd International Conference on Business Information Systems (pp. 191-202), Poznan, Poland. London/Berlin/Heidelberg: Springer-Verlag.
Kisielnicki, J., & Sroka, H. (1999). Systemy informacyjne biznesu [Business information systems]. Warszawa: Placet.
Mintzberg, H., & Van der Heyden, L. (1999, September-October). Organigraphs: Drawing how companies really work. Harvard Business Review, 87-94.
Yourdon, E. (1996). Współczesna analiza strukturalna [Modern structured analysis]. Warszawa: WNT.
FURTHER READING
Andrews, D. C. (1995). Enterprise reengineering, the electronic college of process innovation. Available at http://www.c3i.osd.mil/bpr/ Caudle, S. L.. (1995). Reengineering for results: Key to success from government experience. Washington, DC: National Academy of Public Administration. Carr, D. K., & Johansson, H. J. (1995). Best practices in reengineering. New York: McGraw-Hill. Davenport, T. H. (1993). Process innovation, reengineering work through information technology. Boston: Harvard Business School Press. Hammer, M., & Champy, J. (1993). Reengineering the corporation, A manifesto for business revolution. New York: HarperBusiness. Laudon, K. C., & Laudon, J. P. (1999). Management information systems, organization & technology in the network enterprise. New Jersey: Prentice Hall. Laudon, K. C., & Laudon, J. P. (2000-2001). Essential of management information systems. New Jersey: Prentice Hall. ProSci Study Report. (1999). Future role of IT in reengineering. Available at http:// www.prosci.com/IT99.htm Senn, A. J.( 1995). Information technology in business — Principles, practices, and opportunities. New Jersey: Prentice Hall. Stair, R. M. (1992). Principles of information systems, A managerial approach. Boston: Boyd&Fraser Publishing Company.
Appendix 1. Previous organisational structure of the City Hall
The City President: City Council Office (functionally dependent — in the field of human resources it reports to the President, but substantially it reports to the President of the City Council), Spokesman, the Team of Legal Advisors, Department of Geodesy Land Management and Agriculture, Municipal Inspectorate of Civil Defence.
I Vice President
Department of Physical Education, Department of Health, Department of Culture, Registry Office.
II Vice President
Department of Architecture, Spatial Management and Environment Protection, Department of Communal Management, Municipal Guard, Department of Computer Technology.
III Vice President
Department of Social and Economic Policy, Department of Constructions and Investment, Department of Public Transport.
1st Member of the Board
Department of Housing Policy, the Board of Communal Property.
2nd Member of the Board
City Board Attorney for Public Commission, Municipal Centre for Social Assistance, Daily Social Assistance House, Social Assistance House.
City Secretary
Organisational Department, Administrative and Economic Department, Department for Citizen Affairs.
City Treasurer
Finance Department, Department of Books and Accounts.
Appendix 2. Present organisational structure of the City Hall
• Centre for Securing the City Hall Operations
City Council Office, Organisational Department, City Hall Information Department (after transformation of the Department of Computer Technology), Team of Legal Advisors, Spokesman.
• Financial and Administrative Centre
Department of Finance, Department of Books and Accounts, Administrative and Economic Department, City Board Attorney for Public Commission.
• Centre for Social Affairs
Department of Citizen Affairs, Department of Social and Economic Policy, Registry Office.
• Centre for Social Infrastructure
Department of Culture, Department of Physical Education, Department of Health, to which the following would report: Municipal Centre for Social Assistance, Daily Social Assistance House, Social Assistance House.
• Centre for Technical Infrastructure and its Management
Department of Architecture, Spatial Management and Environment Protection, Department of Geodesy Land Management and Agriculture, Department of Constructions and Investment, the Board of Communal Property, Department of Public Transport, Department of Housing Policy.
• Security Centre
Municipal Inspectorate of Civil Defence, Municipal Guard.
Appendix 3. Information exchange based on Functional Centres at Municipal Offices.
Jerzy Kisielnicki is a professor and the head of the Department of Management Information Systems at the Faculty of Management of Warsaw University. He specialises in organisation and management and, in particular, in systems analysis, management information systems (IT), process innovation (re-engineering), strategic management, and the organisation and management of transition systems in a market economy. He is the author of numerous projects developed for the government and various companies. He is also a member of the Institute for Operations Research and the Management Sciences (TIMS/ORSA) and IRMA (representative for Poland).
This case was previously published in the Annals of Cases on Information Technology, Volume 4/2002, pp. 131-140, © 2002.
Chapter XXV
Library Networking of the Universidad de Oriente: A Case Study of Introduction of Information Technology Abul K. Bashirullah Universidad de Oriente, Venezuela
EXECUTIVE SUMMARY
The Universidad de Oriente was founded in 1958 and structured in five campuses, located in five different states in the south northeastern region of Venezuela, with a current total enrollment of 43,000 students and 200 teachers. A total of 20 libraries of different kinds manually served these students and professors until 1999. To introduce new information technologies to the libraries and all laboratories of the university, the intranet of the university — with 32 networking systems — was introduced for all campuses using Frame Relay technology. Automation services of libraries were introduced with Alejandria, a locally produced software package, in operation since 2001. The challenging job is to create consciousness about information literacy. Creation of university digital databases and digitalization of valuable documents are in progress.
ORGANIZATION BACKGROUND
The south northeastern region of the country, belonging to five states, which comprise over 40% of the national territory (Figure 1), did not have any higher educational institutions to offer professional courses. The people were mostly fishermen in the coastal areas, small farmers in the central region, or miners in the south. Most younger generations with or without primary or secondary education used to follow the parental profession to earn their livings. Few exceptional younger people from well-to-do families pursued higher education in Caracas, the capital of the country. Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Figure 1. Map of Venezuela, with the U.D.O. region marked
Immediately after installing a provisional democratic government on November 21, 1958, the government decreed to create the Universidad de Oriente (UDO), one university for five states, to promote and develop economic, educational, and cultural progress in each of these states. The newly appointed rector introduced the centralized campus system (or Nucleo), a new experimental educational system in the country. Each campus initiated the academic activities with a specialized faculty, in accordance with the characteristics of the land and culture of the region. The first inaugural class started on October 12, 1959, in Cumana, the headquarters of the new university, with 120 students and nine teaching staff. As of 2002, the university had an enrollment of more than 43,000 students, over 2,500 teaching staff, and over 5,000 administrative and supporting personnel. The five initial campuses had grown by another five sub-campuses by mid-2002 to meet the demand of the local students. The university offers graduate, undergraduate, and some diploma courses in all branches of science, technology, and humanities, except Law. An evaluation committee reported in 2001 that the UDO achieved the original objectives in bringing cultural changes in the region. The university is completely financed by the Central Government. Students pay less than US$1 per semester for inscription, and receive a well-balanced lunch and dinner on campus for less than US$.01 per meal. The UDO could not keep pace with the outside world in modernizing laboratories and libraries with the new information technology due to the devaluation of 37.2% of local currency in 20 years; also, the financing from the Central Government did not match the devaluation. The university is centrally organized, headed by the rector and three vice rectors (academic, administrative, and secretary). These posts are elected every four years and
Figure 2. Flow chart of the university administration: Junta Superior, Consejo, Comisión, Rector, Vicerrectorados, Secretaría, Tribunal, Contraloría, Fundación, and the Decanatos of the Núcleos (Anzoátegui, Bolívar, and the other campuses)
are supported by a series of administrative divisional heads, whose functions are to carry out the university policy and electoral mandate of the authorities. The five principal campuses are headed by an elected dean, who carries out the central directives and the normal academic and administrative functioning of his campus. Gradual decentralization of administration is occurring. Figure 2 shows the flow chart of the administration of the university.
SETTING THE STAGE
The campuses offer a total of 53 professional courses, mostly at undergraduate and some at graduate levels, with total enrollment over 43,000 as of 2002. These campuses maintain 20 classical libraries of different sizes, of which six were mostly for undergraduate teaching and the remaining were research-oriented libraries. Until 1999, all libraries were functioning manually, as the new information technology of the 21st century did not reach into these libraries. Only a few libraries scarcely had one or two desktop computers, used mostly for writing memos. On the basis of this situation, the author at the Central Coordination Office of the libraries in 1999 introduced the project of modernizing the libraries to the academic vice rector, who authorized introduction and consolidation of advanced information technology to all libraries and laboratories of the university, to guarantee effective information services to all communities of the university by 2002. The new technological developments and their implementation in the country were very slow during the last decades due to the effect of devaluation of local currency. The country did not have fax service 15 years ago, cell phone 10 years ago, and Internet service seven years ago. Now, of course, public cyber cafés, with all facilities of the information technologies, are abundant in every shopping mall of every city of the country. The UDO is very sparsely located in Venezuela (Figure 1) and could not invest enough to establish infrastructure for telecommunication at all campuses, to introduce new information technology to faculties, librarians, students, and administrators. This
is partly a function of a dramatic decline since 1983 in public funding for higher education due to progressive devaluation of local currency, especially in the construction of new buildings to accommodate new laboratories or libraries, or to acquire new technologies. To meet the high demand of local school graduates, the university is currently developing a project to introduce courses on distance learning. This project demands a better platform of telecommunication in the university, as well as better library facilities. The Internet and the World Wide Web represent significant advances for the retrieval and dissemination of scientific and other literature. Using computer networks, people capture information and collaborate in order to generate increasingly sophisticated outputs for higher education. The university population was missing all this new information, as well as access to the new information technology and communication. The objective of this project was to introduce new information technology to all libraries and establish an intranet on each campus by the year 2002, which was achieved during the stipulated time period. The creation of a university digital database was initiated in 2001 and remains in progress. The project of installing the intranet and automation services in the libraries of all campuses of the university was initiated by the author, which led to the creation of a wider Academic Networks Committee of the UDO, presided over by the academic vice rector and composed of coordinators of tele-information, graduate studies, research, and library. The coordinator of tele-information was assigned to coordinate the installation of an intranet in each campus of the university. A fund was created pooling money from each participating department, and more than $500,000 was invested in the project and 80% connectivity was available by mid-2002. In addition, a series of Sun Microsystems servers and 150 Dell workstations were purchased at a cost of about $400,000 to improve the networks and accessibility. This equipment was not installed as of mid-2003, but was expected to be functioning before the end of the same year. An estimated $200,000 — mostly for upgrading the equipment — is required to finish the project by the end of 2003. At press time, however, this money was not yet available due to control of foreign currency introduced early in 2003, as well as over 30% devaluation this year alone and 18% reduction of the current budget by the government. There is almost no possibility of getting this money this year to finish the project.
CASE DESCRIPTION
Bush (1945) suggested that individuals would plant links from one piece of knowledge to another, which he called "trails" of information. These trails are the precursor of today's hypertext and the Web (Lesk, 1997). The new information technology had transformed communication systems into wide area networks (WANs) to transmit information over long distances. These changes were generated in the institutions of higher education to provide these tools for their professors and students. These tools were introduced in American universities during the '60s, while they were only initiated with limited access in Venezuelan universities during the '90s. The university, without formal planning, attempted to introduce automation service in 1968 with two IBM-1130s on two campuses for both academics and administration, which ended up totally occupied by the administrative services. The academia did not have a chance to get involved seriously, except through occasional analysis of mathematical data.
In 1978, Control Data installed a Cyber-171-4 for the central office, three Cyber 18-30s in three distant campuses, and two terminal Cyber 18-05s for closer campuses for interconnecting the campuses, which never succeeded due to lack of expertise and services. This equipment ceased to function in 1987. The university in 1990 began to establish LANs (local area networks) on the campus of Cumana and the central administrative building through the initiative of a couple of professors of the recently created Department of Computer Science. The CONICIT (National Council of Science and Technology) in 1994 launched a national project, REACCIUN (National Academic Networks for the Research Institutions and Universities), which presently changed to CNTI (National Council of Tele-Information), to introduce Internet facilities to all universities. The project was financed by the Central Government. The respective university was supposed to develop its intranet, and the UDO signed the agreement to establish the intranet of the university. This initiated the program of networking, and the UDO started to extend cable to different buildings with UTP5 and fiber optics. The university contracted a professor to organize the network, and formed a coordination office known as RAUDO (Academic-Administrative Networks of the UDO). The university signed a service contract with the CANTV (National Company of Telephone of Venezuela) for TDI for networking the five campuses, which later changed to the newly introduced Frame Relay, on the recommendation of ALCATEL, with a speed of 256 kbps. This speed was upgraded to 512 kbps in 2000 and was expected to reach 1,024 kbps in 2003. The university rented a satellite link from a private company to communicate with the far-away campuses, as the installation of fiber optic cable was very costly. This service was later cancelled due to high recurring expenses. Frame Relay was selected for its newer technology and cheaper service costs. There are 32 networks with 254 IPs in each campus (Figure 3). The following is a description of the network components: the Intergraph Server 320 was selected for availability of services in the country, to work with a Linux operating system; in 2002 Sun Microsystems servers 230 and 250 with Solaris were also selected. The company trained technical personnel to operate on Unix. The Frame Relay network is connected to the different campuses with routers and switches from 3Com and Cisco. Each campus can provide mail and Web services with its own server. Clients at different points at the five campuses are connected, and workstations comprise Dell, Compaq, HP, and clone systems. Earlier the purchasing department purchased clones for a cheaper price, but the policy was changed to ensure purchasing only of original equipment for reliability. Services for e-mail and data are provided by separate servers, and the networks run several POP, SMTP, MIME, and LDAP services. The networks were functioning without security until a firewall was installed in 2002. The networking of all campuses of the university was essential for better communication, and introduction of automation service in libraries accelerated the development of the intranet in each faculty.
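The figure of 32 networks with 254 usable addresses each corresponds to /24 subnets carved out of one larger per-campus block (a /19 holds exactly 32 of them). The short sketch below, using only the Python standard library, reproduces the arithmetic; the address block shown is purely illustrative, since the case does not disclose the university's actual addressing plan.

```python
import ipaddress

# Illustrative only: the case reports 32 networks of 254 usable addresses per
# campus but does not give the real address plan, so a private block is used here.
campus_block = ipaddress.ip_network("10.20.0.0/19")   # one /19 splits into 32 /24s

subnets = list(campus_block.subnets(new_prefix=24))
print(f"{len(subnets)} subnets of {campus_block}")     # -> 32 subnets

for net in subnets[:3]:                                # show the first few
    usable = net.num_addresses - 2                     # minus network and broadcast
    print(f"{net}  usable hosts: {usable}")            # -> 254 each
```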
CURRENT CHALLENGES/PROBLEMS
The university started the networks without any technical studies or technical know-how, which led to initial failure. Today, the university has three departments of
Figure 3. Academic-administrative network of the Universidad de Oriente (Red Académico-Administrativa): the Rectorado in Cumaná connects through a Frame Relay cloud to REACCIUN and to campus sites in Anzoátegui, Bolívar, Sucre, Monagas and Nueva Esparta (Decanatos, Ciencias, Ciencias de la Tierra, Esc. Medicina, Puerto Ordaz, Juanico, Los Guaritos, Guatamare, Boca del Río, Guayacán and Carúpano), over access links of 64 to 768 kbps
information and computer science on three campuses with trained and qualified teaching staff. The technical services are provided by vendor-trained technicians, who are in short supply at the university. This shortage of technicians at different campuses, along with unreliable power supply, produces interruptions in the telecommunications. The power supply is an extra-university problem, and extra-potent batteries and protectors are used to protect the servers. The challenge is to develop enough technical know-how within the university staff to provide better services, as well as to assign responsibilities for specific tasks. The university invests money in training the technical staff, but the pay scale is lower than in private enterprises. This difference in economic benefits provokes technicians to leave the university. The other challenge is to provide better accessibility to the Internet by introducing wider bandwidth and, at the same time, introducing appropriate management of Internet services by installing proper software. Many users keep the line occupied with non-academic uses. Control of this abuse may improve the accessibility. Viruses sometimes create very serious problems, and it is a challenging job to "disinfect" the servers and workstations without immediate purchase of anti-virus software. Reduction of budget and restriction of foreign currency make the services inefficient. The local telephone company (CANTV) provides telephone and Internet services to the university. The Internet connection was increased to 512 kbps in 2001 and was expected to reach 1,024 kbps by mid-2003. The expectation is to increase to 2 Mbps access on each campus and make their access direct to the Internet. This will avoid traffic jams
in the central bandwidth. Recently, the government opened the telecommunications market to competition. Presently, one cell phone company, a cable TV company, and CANTV offer wider bandwidth service for Internet connectivity, but the services are very costly. The university is looking for a better deal from these service providers. Many professors are obtaining wider band connections privately with these companies, and obviously enjoying the fruits of better and faster connectivity. The university community is optimistic about wider band connectivity, as the university is planning to introduce virtual undergraduate courses shortly. The coordinator of the library office is introducing information literacy courses for the university community, as the great majority of the community does not use the Internet facilities and resists change. The librarians of this university are older graduates not trained in new information technology. The challenge is to expose them to new technology by offering workshops and seminars and, at the same time, hiring information-literate technicians. The university hired professionals of new information technology for each library to work in cooperation with the librarian. This is a slow, but challenging process.
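One way to act on the observation that a few users keep the line occupied with non-academic traffic is to report per-user consumption from the proxy or gateway logs. The sketch below assumes a hypothetical CSV log with user and bytes columns; the case names no specific proxy product or log format, so the file name and columns are invented for illustration.

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per gateway/proxy log entry with user and bytes columns.
# The case does not specify the university's actual traffic-management software.
def top_consumers(log_path: str, top_n: int = 10) -> None:
    totals = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):          # expects columns: user, bytes
            totals[row["user"]] += int(row["bytes"])
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    grand = sum(totals.values()) or 1
    for user, used in ranked[:top_n]:
        print(f"{user:20s} {used / 1e6:8.1f} MB  ({100 * used / grand:4.1f}% of traffic)")

if __name__ == "__main__":
    try:
        top_consumers("proxy_usage.csv")        # illustrative file name
    except FileNotFoundError:
        print("Point this at the real per-user traffic log.")
```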
ACKNOWLEDGMENTS
The author is grateful to Ing. Pablo Caraballo and Lic. Rommel Contrera for their critical comments on the manuscript.
REFERENCES
Bush, V. (1945). As we may think. Atlantic Monthly, 176(1), 101-108.
Caraballo, P. (2001). Red academica de la UDO. La Academia Hoy, (5). Retrieved from http://www.udo.edu.ve/vrac/laacademiahoy
Lesk, M. (1997). Practical digital libraries: Books, bytes & bucks. San Francisco: Morgan Kaufmann.
Abul K. Bashirullah is a professor and Coordinator General of Libraries, Universidad de Oriente, Venezuela. He holds MSc and PhD degrees, and has authored more than 80 scientific publications. He is the recipient of the Order Andres Bello (First Class) from the Presidency of the Republic of Venezuela; PPI, nivel III (Programa de Promoción de Investigadores), Ministry of Science; and PEI (Programa de Estimulo al Investigador), Universidad de Oriente. Dr. Bashirullah is a fellow of the Zoological Society of Bangladesh, and a member of the American Association for the Advancement of Science, the Venezuelan Association of Science, the National Association of Directors of Libraries of Higher Education, and the International Federation of Library Associations.
This case was previously published in the Annals of Cases on Information Technology, Volume 6/2004, pp. 561-567, © 2004.
About the Editor
Mehdi Khosrow-Pour, D.B.A., is executive director of the Information Resources Management Association (IRMA) and senior academic technology editor for Idea Group Inc. Previously, he served on the faculty of the Pennsylvania State University as a professor of information systems for 20 years. He has written or edited more than 30 books in information technology management. Dr. Khosrow-Pour is also editor-in-chief of the Information Resources Management Journal, Journal of Electronic Commerce in Organizations, Journal of Cases on Information Technology, and International Journal of Cases on Electronic Commerce.
Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
Index
A
AC drives 166 adaptive design strategy 140 advanced information technology 56 Advanced Medical Priority Despatch System 259 Ahmanson project 3 air cargo 177 air cargo market 182 Air New Zealand 188 AIX4 operating system 337 alternative trading systems 268 Ambulance Employees Australia 259 Ambulance Services Act of 1986 248 AMERIREAL Corporation 100 Ansett Airlines 187 Apple Computer 5 application service provider (ASP) 223 application support group (ASG) 172 Army housing management 137 Association of Research Libraries (ARL) 2 ATLAS/GIS software 140 attribute data 143 Australasian Food Products Co-op 110 Auto Network Exchange (ANX) 292 automation 399 automobile industry 282
B
Bartlett and Ghoshal framework 118 base closures 145 Bialystok City Hall 388 British Airways 188 BRS search engine 15 budget allocation 35 budgeting and accounting control mechanisms 31 building sectoral decision support centers 242 bulletin board system 269 Bush, George 89 business development 300 business information 181 business investments 154 business model 150 business planning 124 business policy 58 business relationship 149 business structure 59 business transactions 153 business-to-business 151 business-to-business e-commerce 149, 155
C Cabinet of Egypt 234 CAIB Report 78 call center 346 Caribbean Institute of Technology (CIT) 382 Center for Scholarly Technology (CST) 3 Center for Software Excellence (CSE) 349 central data reporting 222 central government 300 centralized governance structure 197 centralization of PC support 353 CEO 250 Challenger 79 charge-back policy 334 chargeback system 27 chief executive office (CEO) 188 CIO 203 Cisco Connection Online 152, 159 Cisco International 149 Cisco Systems Incorporation 149 citizen (customer) services 389 CNTI (National Council of Tele-Information) 403 co-operative 111 Columbia 77 Columbia Accident Investigation Board (CAIB) 78 commercial off the shelf (COTS) 81 commodity-type products 115 Common Agricultural Policy (CAP) 116 communication infrastructure 289 communications 149 competitive advantage 283 complicated process of project management 298 components 286 computer aided design (CAD) 67, 288 computer aided manufacturing (CAM) 288 computer networks 149 computer numerical control (CNC) 67 computer reservations systems (CRS) 193 computer-aided production planning (CAPP) 68 computing facilities 330 computing services 332, 341
CONICIT (National Council of Science and Technology) 403 consensual cultures 10 consumer market 112 consumer product 113 corporate desktop services 347 corporate IT infrastructure 199 cost of ownership 345 CPM diagram 374 creating new knowledge 286 Criminal History Repository (CHS) 29, 38 CRS 183 cultural differences 189, 381 cultural role 238 culture 10, 34, 48, 81 customer relationship management (CRM) system 221 customer satisfaction 153 customer service 102, 149, 211
D data analysis 221, 234 data and information deficiencies 145 data flow diagrams (DFD) 392 data processing 300, 332 data-supported decision making 221 database management system 143 decentralization process 334 decision support system (DSS) 140, 231, 241 decision support systems for strategic issues 241 decision-making environment 234 decision-making process 234 deficit forecasts 144 demand for computing services 332 demand variability 65 Department of Defense 137 Department of Human Services 249 Department of Public Safety 30, 33 Department of the Army 136 despatch environment 247 developed nations 371 developing countries 231 developmental cultures 10 direct numerical control (DNC) 67
distribution of goods 177 doctoral student database (DSD) 50 drag-and-drop graphics 371 dyad 10
E e-commerce 151, 271 e-commerce business plan 275 e-commerce initiative 271 Eastern Caribbean Investment Development Service 380 econometric 140 econometric model 142 economic conditions 368 economic development 308 economic feasibility 5 economies of scale 285 Egypt 230 Eisenhower, Dwight D. 99 electronic communications networks (ECNs) 268 electronic data interchange (EDI) 68 electronic exchange of data (EDI) 394 electronic information system 181 electronic messages 183 electronic point of sales (EPOS) 68 emergency despatch and communication system 247 energy management systems (EMSs) 165 ENGINECOMP 205 enrollment data 47 equipment variability 65 ERP application 106 ERP systems 204 Estitherm 371 ETC model 127 Ethernet 6, 14 executive information system 140 external suppliers 290 extranet 391
F financial services 285 five FIST principles (see also food information systems and technology) 124 flexibility of production 61
flexibility tactics 67 flexible manufacturing systems (FMS) 67 flow of information 120 food information systems and technology (FIST) (see also five FIST principles) 119 food service market 112 force reductions 145 Ford (see also Model T and Mondeo project) 281 Ford Motor Company 284 Fortune 1,000 companies 220
G Gantt chart 374 gateways 16 geographic information system (GIS) 140 geographically remote 289 GHI 220 global applications project 120 global business strategy 110 global capitalism 293 global companies 282 global development teams 386 global distribution system 127 global image 286 global information system 110 global integration 281, 293 global networked business 151 global networked business model 150 global request for proposal (RFP) 123 global-capable suppliers 287 globalization 122, 282 going global 386 Gold Chip 315 goodwill trust 158 gopher 17 governing body 191 Government Accounting Office (GAO) 137 government agencies 31, 81 government IT manager 31 graphical user interface (GUI) 3, 87, 271 graphics programming 377 gross military deficit 137 group consensus 45
H hard money 31 HEAT 348 helpdesk 345 heuristic programming models 140 high speed Internet access 381 higher education 45 Highway Patrol (HP) 29, 39 Hong Kong SAR 177 housing analysis decision technology system (HADTS) 136 housing analysis system 140 housing deficit reduction policies 139 housing market area 137 human resource management strategies 238 HyperCard 5, 14
I IBM 337 IDSC’s Pyramid Technology Valley program 243 implementation process 180, 299 importance of organizational and social factors 298 individual personality 308 informal organization 3 Information and Decision Support Center (IDSC) 231 information cultures 45 information flows 178, 181 information infrastructure development 243 information management 45 information processing center 379 information resource management 237 information server 370 information services 380 information sharing 45 information system migration 110 information systems (IS) 187, 288 Information Systems (IS) Department 205, 326 information systems design 237 information systems development 81 information technology (IT) 57, 187, 231, 399
information technology (IT) adoption 241 information technology (IT) skills 371 information technology and communication 402 Information Technology Department 265 information technology diffusion 241 information technology services 379 information-related change 52 ingredients market 112 integrated library system 6 inter-organizational information systems (IOS) 177 inter-organizational-relationships 155 interdepartmental cooperation 70 Internal Health Department 255 internal organization 286 International Air Transport Association (IATA) 177 international information systems (IIS) 110 internationalization 282, 285 Internet 149, 183, 292 Internet service providers 150 investment decisions 98 IS staff 223 IS/IT governance 187 IS/IT performance 191 IT consultant 223 IT cost allocation 32 IT deployment 102 IT division 27 IT investment 102, 345 IT liaison 37 IT manager 195 IT personnel 272 IT project management 101 IT system 388 IT-related business functions 193
J J.D. Edwards OneWorld 347 Java 370 Java applet 376 just-in-time (JIT) 68, 287
K Kendell Airlines 187, 188
key drivers 282 key enabler 290
L labor rates 382 labor shortages 369 LAN (see local area network) leveraging 16 libraries 399 Library Automation Development Unit (LAD) 3 life cycle waterfall model 299 links between technical and non-technical aspects 298 local area network (LAN) 391, 403 Logistics Department 206, 210 long-distance communication 288 LXS Ltd. 368
M main frame relay 399, 403 mainframe-based computing services 334 major Army commands (MACOMs) 140 management and technological development 242 management information system (MIS) 232 manufacturing 112 manufacturing strategy 58 MaPPAR 206 maps 145 market share 284 market sophistication 114 market-clearing 139 marketing center 282 marketing manager 168 Marketing Support Group (MSG) 172 masculine organization 10 Maximus Report 32 mechanical engineering 376 metrics program 346 Metropolitan Ambulance Service (MAS) 247 Microsoft Office 347 Microsoft SQL Server database 374 military preparedness 146 MIS professionals 101
MIS system 203 Model T (see also Ford) 283 Mondeo project (see also Ford) 281 Mosaic™ browser 270 Multi-Technology Automated Reader Card (MARC) 312 multinational corporations 281
N NASA 76 NASDAQ 268 NDPS culture 34 needs assessment 48 NetRez 105 Netscape 272 network implementation 299 network print management system (NPMS) 312 network printers 315 networked computers 288 networked PCs in a local government office 298 networking 149, 399 networking facilities 332 networks 149 Nevada Department of Public Safety (NDPS) 27 NexTrade™ 268 non-technical considerations 330 non-technical roles 237 numerical control (NC) 67 numerically intensive computing service (NICS) 330
O ODBC (open database connectivity) protocols 374 offsetting 137 offshore programming 380 offshore programming services 384 online ordering application 152 online public access catalog 2, 4 open-access computer labs 315 order-to-delivery cycle 113 organization cultures 10 organization theory 58
organizational decision making 232 organizational performance 98 organizational structure 236 organizing logic 191 Oshkosh Truck Corporation (OTC) 346 outside suppliers 287
P Parole and Probation Division 29 pay-per-print systems 315 PC services 345 PERL 370 Perth Development Agency 299 planned system development process (PSDP) 268 platform standardization 345 point of sale (POS) 312 political considerations 330 power redistribution 45 PowerPoint 86, 226 prepaid direct print design 315 price-setting process 182 process variability 65 product development 369 product support group (PSG) 172 product variability 65 production technology 283 productivity 98 programmable controllers (PLCs) 165 programmers 373 programming 377 project management 299, 373 Project Planning Group (PPG) 8 public administration 230, 388 Public Bodies Review Committee 253 Public Safety Technology Division (PSTD) 29, 30 Purchasing Division 211
Q quote to cash (Q2C) initiative 171 quote to quote (Q2Q) 171
R R&D 282, 285 rational cultures 10
raw material 113 reactive approach 286 Reagan, Ronald 77 Real Estate Investment Trust Tax Provision 99 Real Estate Investment Trusts (REIT) 99 Regional Express (REX) 200 regulation ATS 268 relational database 51 request for proposal (RFP) 336, 347 requirements engineering 204 Reuters 181 RISC technology 334 RISC-based computers 333 robustness 15 Rogers Commission 79 Rogers Report 79 Royal Canadian University (RCU) 330
S SAS system for information delivery 140 Seaboard Stock Exchange 264 Securities and Exchange Commission (SEC) 268 segmented housing market analysis 137 Shared Service Center (SSC) 101 sharing data 222 Singapore Airlines 188 Skills Agency 308 small- to medium-sized organization 155 SmartCard 312 socio-economic development 230 socio-economic variables 141 soft money 31 software development manager 373 soldier morale 146 sourcing network 288 South-Eastern Regional Strategy (SERS) 189 Space Shuttle Program 76 space-capacity principle 181 spatial data 143 specification packaging organizations 385 Sputnik 83 St. Lucia National Development Commission 379
stagnant sales 165 StockScene.com 272 strategic alignment theory 58 strategic management 308 strategic marketing 161 strategic product development 378 stress overload 13 structured design 14 student information system (SIS) 50 student payment system (SPS) 50 supercomputers 336 supply variability 65 support specialist 346 synergies 16 systems designers 180
T tariff 385 tax 385 TCP/IP 292 Technical Affairs Committee (TAC) 37 technical roles 237 Technology Resources Committee (TRC) 312 telecommunications 292, 331, 379 Telnet 16 The Netherlands 178 The World Bank 379 tiering 287 top-down IT governance 199 TQM methods 390 trading partner 154 trading partner relationships 155 trading volume 265 training 380 Traxon Initiative 180 true value trust (TVT) 100
U
U.S. Census Bureau TIGER 142 U.S. macroeconomy 371 U.S. market share 283 uniform software platform 222 Universidad de Oriente (UDO) 400 University Computing Services (UCS) 3 University of Southern California 2 Unix 16 UNIX services 343 USCInfo 2
V vertical integration 111 Victoria State Government 250
W warehousing 112 Web developer 270 Web Initiative Committee (WIC) 273 Web Marketing Group (WMG) 276 Web Systems Group 369 Web-based application 371 Web-based database 49, 52 Web-based software 368 wide area network (WAN) 152, 402 Windows 1.0 87 Windows 3.0 87 Windsor Inc. 105 workforce variability 65 workstations 336 world car (see also Ford) 282 World Trade Organization (WTO) 116 World Wide Web (WWW) 17 WWW technology 183
X X-consultants 250 X/Open standard 390 xcommands 15
Z
Z39.50 (ANSI) 16 ZENworks 351
Experience the latest full-text research in the fields of Information Science, Technology & Management
InfoSci-Online
InfoSci-Online is available to libraries to help keep students, faculty, and researchers up-to-date with the latest research in the ever-growing field of information science, technology, and management. The InfoSci-Online collection includes:
• Scholarly and scientific book chapters
• Peer-reviewed journal articles
• Comprehensive teaching cases
• Conference proceeding papers
All entries have abstracts and citation information. The full text of every entry is downloadable in .pdf format.
InfoSci-Online features:
• Easy-to-use
• 6,000+ full-text entries
• Aggregated
• Multi-user access
Some topics covered:
• Business Management
• Computer Science
• Education Technologies
• Electronic Commerce
• Environmental IS
• Healthcare Information Systems
• Information Systems
• Library Science
• Multimedia Information Systems
• Public Information Systems
• Social Science and Technologies
“…The theoretical bent of many of the titles covered, and the ease of adding chapters to reading lists, makes it particularly good for institutions with strong information science curricula.” — Issues in Science and Technology Librarianship
To receive your free 30-day trial access subscription, contact: Andrew Bundy
Email: [email protected] • Phone: 717/533-8845 x29 • Web Address: www.infosci-online.com
A product of Idea Group Inc., publishers of Idea Group Publishing, Information Science Publishing, CyberTech Publishing, and IRM Press
infosci-online.com
Introducing
IGI Teaching Case Collection
The new IGI Teaching Case Collection is a full-text database containing hundreds of teaching cases related to the fields of information science, technology, and management.
Key Features
• Project background information
• Searches by keywords and categories
• Abstracts and citation information
• Full-text copies available for each case
• All cases are available in PDF format
• Cases are written by IT educators, researchers, and professionals worldwide
View each case in full-text, PDF form. Hundreds of cases provide a real-world edge in information technology classes or research!
The Benefits of the IGI Teaching Case Collection
• Frequent updates as new cases are available
• Instant access to all full-text articles saves research time
• No longer necessary to purchase individual cases
For More Information Visit
www.igi-online.com
Recommend to your librarian today!
A product of Idea Group Inc.
Single Journal Articles and Case Studies Are Now Right at Your Fingertips!
Purchase any single journal article or teaching case for only $18.00! Idea Group Publishing offers an extensive collection of research articles and teaching cases that are available for electronic purchase by visiting www.idea-group.com/articles. You will find over 980 journal articles and over 275 case studies from over 20 journals, available for only $18.00 each. The website also offers a new capability of searching journal articles and case studies by category. To take advantage of this new feature, please use the link above to search within these available categories:
• Business Process Reengineering
• Distance Learning
• Emerging and Innovative Technologies
• Healthcare
• Information Resource Management
• IS/IT Planning
• IT Management
• Organization Politics and Culture
• Systems Planning
• Telecommunication and Networking
• Client Server Technology
• Data and Database Management
• E-commerce
• End User Computing
• Human Side of IT
• Internet-Based Technologies
• IT Education
• Knowledge Management
• Software Engineering Tools
• Decision Support Systems
• Virtual Offices
• Strategic Information Systems
• Design, Implementation
You can now view the table of contents for each journal, so it is easier to locate and purchase one specific article from the journal of your choice. Case studies are also available through XanEdu; to start building your perfect coursepack, please visit www.xanedu.com. For more information, contact
[email protected] or 717-533-8845 ext. 10.
www.idea-group.com