Cases on Information Technology Lessons Learned Volume 7
Table of Contents

Preface .......... viii

Chapter I
An Experiential Case Study in IT Project Management Planning: The Petroleum Engineering Economics Evaluation Software Imperative .......... 1
Charles K. Davis, University of St. Thomas, USA

Chapter II
The Algos Center: Information Systems in a Small Non-Profit Organization .......... 21
Susan J. Chinn, University of Southern Maine, USA
Charlotte A. Pryor, University of Southern Maine, USA
John J. Voyer, University of Southern Maine, USA

Chapter III
Social Construction of Information Technology Supporting Work .......... 36
Isabel Ramos, Universidade do Minho, Portugal
Daniel M. Berry, University of Waterloo, Canada

Chapter IV
CRM Systems in German Hospitals: Illustrations of Issues & Trends .......... 53
Mahesh S. Raisinghani, Texas Woman’s University, USA
E-Lin Tan, Purdue University, German International School of Management & Administration, Germany
Jose Antonio Untama, Purdue University, German International School of Management & Administration, Germany
Heidi Weiershaus, Purdue University, German International School of Management & Administration, Germany
Thomas Levermann, Purdue University, German International School of Management & Administration, Germany
Natalie Verdeflor, Purdue University, German International School of Management & Administration, Germany
Chapter V
The Selection of the IT Platform: Enterprise System Implementation in the NZ Health Board .......... 78
Maha Shakir, Zayed University, UAE
Dennis Viehland, Massey University, New Zealand

Chapter VI
Automotive Industry Information Systems: From Mass Production to Build-to-Order .......... 89
Mickey Howard, University of Bath, UK
Philip Powell, University of Bath, UK
Richard Vidgen, University of Bath, UK

Chapter VII
Power Conflict, Commitment & the Development of Sales & Marketing IS/IT Infrastructures at Digital Devices, Inc. .......... 103
Tom Butler, University College Cork, Ireland

Chapter VIII
Development of an Information Kiosk for a Large Transport Company: Lessons Learned .......... 122
Pieter Blignaut, University of the Free State, South Africa
Iann Cruywagen, Interstate Bus Lines (Pty) Ltd., Bloemfontein, South Africa

Chapter IX
A Case of an IT-Enabled Organizational Change Intervention: The Missing Pieces .......... 140
Bing Wang, Utah State University, USA
David Paper, Utah State University, USA

Chapter X
Up in Smoke: Rebuilding After an IT Disaster .......... 159
Steven C. Ross, Western Washington University, USA
Craig K. Tyran, Western Washington University, USA
David J. Auer, Western Washington University, USA
Jon M. Junell, Western Washington University, USA
Terrell G. Williams, Western Washington University, USA

Chapter XI
From Principles to Practice: Analyzing a Student Learning Outcomes Assessment System .......... 177
Dennis Drinka, University of Alaska Anchorage, USA
Kathleen Voge, University of Alaska Anchorage, USA
Minnie Yi-Miin Yen, University of Alaska Anchorage, USA
Chapter XII
Challenges of Complex Information Technology Projects: The MAC Initiative .......... 196
Teta Stamati, University of Athens, Greece
Panagiotis Kanellis, University of Athens, Greece
Drakoulis Martakos, University of Athens, Greece

Chapter XIII
Beyond Knowledge Management: Introducing Learning Management Systems .......... 213
Audrey Grace, University College Cork, Ireland
Tom Butler, University College Cork, Ireland

Chapter XIV
A Case of Information Systems Pre-Implementation Failure: Pitfalls of Overlooking the Key Stakeholders’ Interests .......... 231
Christoph Schneider, Washington State University, USA
Suprateek Sarker, Washington State University, USA

Chapter XV
The Columbia Disaster: Culture, Communication, & Change .......... 251
Ruth Guthrie, California Polytechnic University, Pomona, USA
Conrad Shayo, California State University, San Bernardino, USA

Chapter XVI
New Forms of Collaboration & Information Sharing in Grocery Retailing: The PCSO Pilot at Veropoulos .......... 272
Katerina Pramatari, Athens University of Economics & Business, Greece
Georgios I. Doukidis, Athens University of Economics & Business, Greece

Chapter XVII
Information Technology in the Practice of Law Enforcement .......... 287
Susan Rebstock Williams, Georgia Southern University, USA
Cheryl Aasheim, Georgia Southern University, USA

Chapter XVIII
End-User System Development: Lessons from a Case Study of IT Usage in an Engineering Organization .......... 309
Murray E. Jennex, San Diego State University, USA

Chapter XIX
LIBNET: A Case Study in Information Ownership & Tariff Incentives in a Collaborative Library Database .......... 324
A.S.C. Hooper, Victoria University of Wellington, New Zealand

Chapter XX
Information System for a Volunteer Center: System Design for Not-for-Profit Organizations with Limited Resources .......... 345
Suresh Chalasani, University of Wisconsin, Parkside, USA
Dirk Baldwin, University of Wisconsin, Parkside, USA
Jayavel Souderpandian, University of Wisconsin, Parkside, USA
Chapter XXI
Siemens: Expanding the Knowledge Management System ShareNet to Research & Development .......... 370
Hauke Heier, European Business School, Germany
Hans P. Borgman, Universiteit Leiden, The Netherlands
Andreas Manuth, Siemens Information & Communication Networks, Germany

Chapter XXII
Enterprise System Development in Higher Education .......... 388
Bongsug Chae, Kansas State University, USA
Marshall Scott Poole, Texas A&M University, USA

Chapter XXIII
ERP Implementation for Production Planning at EA Cakes Ltd. .......... 407
Victor Portougal, The University of Auckland, New Zealand

Chapter XXIV
MACROS: Case Study of Knowledge Sharing System Development within New York State Government Agencies .......... 419
Jing Zhang, Clark University, USA
Theresa A. Pardo, University at Albany, SUNY, USA
Joseph Sarkis, Clark University, USA

Chapter XXV
Adoption & Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India .......... 440
Monideepa Tarafdar, University of Toledo, USA
Sanjiv D. Vaidya, Indian Institute of Management, Calcutta, India

Chapter XXVI
IT-Business Strategic Alignment Maturity: A Case Study .......... 465
Deb Sledgianowski, Hofstra University, USA
Jerry Luftman, Stevens Institute of Technology, USA

Chapter XXVII
Experiences from Using the CORAS Methodology to Analyze a Web Application .......... 483
Folker den Braber, Norway
Arne Bjørn Mildal, NetCom, Norway
Jone Nes, NetCom, Norway
Ketil Stølen, SINTEF, Norway
Fredrik Vraalsen, SINTEF, Norway

Chapter XXVIII
Infosys Technologies Limited: Unleashing CIMBA .......... 502
Debabroto Chatterjee, The University of Georgia, USA
Rick Watson, The University of Georgia, USA
Chapter XXIX
Development of KABISA: A Computer-Based Training Program for Clinical Diagnosis in Developing Countries .......... 518
Jef Van den Ende, Institute of Tropical Medicine, Belgium
Stefano Laganà, Sacro Cuore Hospital at Negrar, Italy
Koenraad Blot, Pfizer Canada Inc., Canada
Zeno Bisoffi, Sacro Cuore Hospital at Negrar, Italy
Erwin Van den Enden, Institute of Tropical Medicine, Belgium
Louis Vermeulen, Institute of Tropical Medicine, Belgium
Luc Kestens, University of Antwerp, Belgium

Chapter XXX
Cross-Cultural Implementation of Information System .......... 527
Wai K. Law, University of Guam, Guam
Karri Perez, University of Guam, Guam

Chapter XXXI
Change Management of People & Technology in an ERP Implementation .......... 537
Helen M. Edwards, University of Sunderland, UK
Lynne P. Humphries, University of Sunderland, UK

About the Authors .......... 554
Index .......... 569
Insight into successful IT implementation assists researchers and professionals in determining factors of success, as well as causes of failure, in information technology applications and projects. Case studies have proven to be excellent sources of lessons learned for information technology users and designers. In the 31 real-life case studies included in this book, Cases on Information Technology: Lessons Learned, Volume 7, IT researchers and professionals from around the world recount their valuable, real-life experiences in IT utilization and management in modern organizations and universities. Each case includes integral information about organizations working with IT, including the key individuals involved, the steps taken or overlooked, and the final project outcomes. The cases cover a variety of IT initiatives, including enterprise systems, wireless technologies, rebuilding IT operations after a disaster, and implementation within non-profit organizations. IT managers and researchers will find this volume useful for its descriptions of both IT implementation successes and unfortunate downfalls. Professors and students will benefit as well, using the real-life situations as facilitators for classroom discussion. Following are summaries of the cases contained in this volume.

The first case study, “An Experiential Case Study in IT Project Management Planning: The Petroleum Engineering Economics Evaluation Software Imperative,” by Charles K. Davis, describes an organization’s need for the internal development of a complex software package. This operational and functional situation provides a framework for developing a set of project plans for a software development project to address these needs. The goal of the case is to develop a detailed project plan, including schedules, staffing, deliverables by task, and cost estimates.
The second case study, “The Algos Center: Information Systems in a Small Non-Profit Organization,” by Susan J. Chinn, Charlotte A. Pryor, and John J. Voyer, describes an analysis of information systems conducted for a small non-profit organization. The case highlights many of the problems facing small non-profits and allows readers to propose possible courses of action. In addition, it provides an opportunity to evaluate how a consulting engagement was handled.

“Social Construction of Information Technology Supporting Work,” by Isabel Ramos and Daniel M. Berry, describes the social dynamics shaping the implementation and deployment of the MIS systems supporting a company’s production processes and the procurement of resources for those processes. The case study describes, in particular, the stakeholders and the organizational relationships between them; the stakeholders’ formal and informal roles; the information needed from and provided to the systems; the stakeholders’ views of the systems’ usefulness, quality, and flexibility; the stakeholders’ career paths; and the stakeholders’ perceptions of their own and others’ worth to the company.

German public hospitals face governmental and regulatory pressures to implement efficiency and effectiveness metrics, such as the Diagnosis Related Groups (DRG) classification system, by the year 2005. “CRM Systems in German Hospitals: Illustrations of Issues & Trends,” by Mahesh S. Raisinghani, E-Lin Tan, Jose Antonio Untama, Heidi Weiershaus, Thomas Levermann, and Natalie Verdeflor, describes customer relationship management technology and the challenges of data sharing and data security in hospitals. Finally, the benefits accruing to the hospitals are identified, along with strategies focused on efficiency and customer satisfaction in a very competitive market.

Discussing the challenges facing the Health Board enterprise system implementation team in dealing with a necessary IT platform change before the system go-live, “The Selection of the IT Platform: Enterprise System Implementation in the NZ Health Board,” by Maha Shakir and Dennis Viehland, presents issues pertaining to the initial choice of IT platform, the failure of the platform to meet contractual specifications, and the challenges the project team faced in resolving the problem.

Building cars to customer order has been the goal of vehicle manufacturers since the birth of mass production. Despite recent advances in information technology offering total visibility and real-time information flow, transforming a high-volume manufacturing industry to adopt customer responsiveness and build-to-order represents a significant step.
“Automotive Industry Information Systems: From Mass Production to Build-to-Order,” by Mickey Howard, Philip Powell, and Richard Vidgen, explores the barriers to change within and between stakeholders at all levels of the supply chain.

Exploring the political relationships, power asymmetries, and conflicts surrounding the development, deployment, and governance of IT-enabled sales and marketing information systems (IS) at Digital Devices Inc. allows this case to provide rare insights into the reality of IS development and IT infrastructure deployment. “Power Conflict, Commitment, & the Development of Sales & Marketing IS/IT Infrastructures at Digital Devices, Inc.,” by Tom Butler, focuses on an in-depth description of the positive and negative influences on these processes and their outcomes.

“Development of an Information Kiosk for a Large Transport Company: Lessons Learned,” by Pieter Blignaut and Iann Cruywagen, discusses the development of an information kiosk system for a large public transport company to provide African commuters with limited educational background with up-to-date information on schedules and ticket prices in a graphically attractive way. The challenges regarding liaison with passengers are highlighted, and the use of a touchscreen kiosk to supplement current liaison media is justified.

“A Case of an IT-Enabled Organizational Change Intervention: The Missing Pieces,” by Bing Wang and David Paper, documents an organizational change intervention concerning the implementation of a novel information technology (IT) at a university-owned research institute. It describes the disparate experiences of key actors during the intervention, marking a mismatch between a new paradigm and the existing IT culture. In particular, resistance from in-house IT specialists was observed as the strongest force obstructing the novel IT implementation.
A college of business computer server room was completely destroyed by a fire. “Up in Smoke: Rebuilding After an IT Disaster,” by Steven C. Ross, Craig K. Tyran, David J. Auer, Jon M. Junell, and Terrell G. Williams, discusses the issues the college faced as it planned to rebuild its information technology operations. The reader is challenged to learn from this experience and develop an IT architecture that will meet operational requirements while taking into account potential threats to the system.

“From Principles to Practice: Analyzing a Student Learning Outcomes Assessment System,” by Dennis Drinka, Kathleen Voge, and Minnie Yi-Miin Yen, describes the challenges facing a system project manager. The system supports the assessment of student learning and documents and tracks course content. To move into the design phase, the business objectives must be identified, a system development life cycle approach and development platform selected, project risks identified and assessed, and resources secured.

“Challenges of Complex Information Technology Projects: The MAC Initiative,” by Teta Stamati, Panagiotis Kanellis, and Drakoulis Martakos, provides a detailed account of the ill-fated Management and Administrative Computing (MAC) initiative, which aimed, through the homogenization of requirements, to centrally procure an integrated applications suite for a number of British higher education institutions. In this context, the case illustrates the level of complexity that unpredictable change can bring to an information technology project that seeks to realize the impossible dream of the ‘organizationally generic,’ and the destabilizing effects such change may have on the network of the project’s stakeholders, jeopardizing the project’s success in an irreversible manner.

Many world-class organizations are now employing a new breed of systems known as Learning Management Systems (LMS) to foster and manage learning within their organizations.
“Beyond Knowledge Management: Introducing Learning Management Systems,” by Audrey Grace and Tom Butler, reports on the deployment of an LMS by a major U.S. multinational and proposes a framework for understanding learning in organizations, which highlights the roles that an LMS can play in today’s knowledge-intensive organizations.

“A Case of Information Systems Pre-Implementation Failure: Pitfalls of Overlooking the Key Stakeholders’ Interests,” by Christoph Schneider and Suprateek Sarker, focuses on the software vendor selection process at a large public university, which failed even before implementation could get under way because the managers in charge of the project overlooked procedures outlined in the RFP and the roles of relevant but “hidden” decision makers during the pre-implementation stage of the project.

The National Aeronautics and Space Administration has had unprecedented success with historic space missions, including the development of the Space Shuttle. When the Challenger exploded, investigations called for NASA to examine its culture. Twenty years later, with the Columbia shuttle disaster, some of the same questions are being asked. “The Columbia Disaster: Culture, Communication, & Change,” by Ruth Guthrie and Conrad Shayo, discusses the problem of relaying complex engineering information to management in an environment driven by schedule and budget pressure.

“New Forms of Collaboration & Information Sharing in Grocery Retailing: The PCSO Pilot at Veropoulos,” by Katerina Pramatari and Georgios I. Doukidis, describes a pilot implementation that uses Internet technology to enable collaboration and daily sharing of POS data between grocery retail stores and suppliers, with the objective of streamlining the store replenishment process. The effort produced significant business results but also revealed several pitfalls and technical challenges, as described in the case.

Describing the development and implementation of a knowledge-based policing system, “Information Technology in the Practice of Law Enforcement,” by Susan Rebstock Williams and Cheryl Aasheim, discusses how the system enables information regarding incident reports, arrests, and investigations to be collected, distributed, and managed in a paperless, wireless environment. The challenges faced in merging wireless, wired, database, and application technologies while satisfying the user requirements of the police department are detailed in the report.

“End-User System Development: Lessons from a Case Study of IT Usage in an Engineering Organization,” by Murray E. Jennex, looks at a study of end-user computing within the engineering organizations of an electric utility undergoing deregulation. The case was initiated when management perceived that too much engineering time was being spent on IS functions. The case found significant effort being expended on system development, support, and ad hoc use, and identified several issues affecting system development, including the use of programming standards, documentation, infrastructure integration, and system support.

The value of information depends on its quality, when it is required, and the purpose to which it will be put. “LIBNET: A Case Study in Information Ownership & Tariff Incentives in a Collaborative Library Database,” by A.S.C. Hooper, investigates the principles driving pricing issues and information ownership in a cooperative database. The perspective of selected members provides insights into the strategic outcomes for the organization.
“Information System for a Volunteer Center: System Design for Not-For-Profit Organizations with Limited Resources,” by Suresh Chalasani, Dirk Baldwin, and Jayavel Souderpandian, focuses on the development of information systems for the Volunteer Center of Racine (VCR). The case targets the analysis and design phase of the project using the Unified Modeling Language (UML) methodology. It also discusses project management, team dynamics, project risks, and solution alternatives in detail.

The issues surrounding the pending expansion of Siemens’ community-based knowledge management system ShareNet to the research and development function are described in “Siemens: Expanding the Knowledge Management System ShareNet to Research & Development,” by Hauke Heier, Hans P. Borgman, and Andreas Manuth. Information systems implementation issues, as well as change management interventions, are discussed, with particular emphasis on motivation factors for end users, user champions, and top management.

“Enterprise System Development in Higher Education,” by Bongsug Chae and Marshall Scott Poole, describes a major U.S. university system’s experience in the development of an in-house enterprise system. The case indicates that ES design and implementation in higher education is challenging and complex due to factors unique to the public sector. It offers opportunities to discuss issues, challenges, and potential solutions for the deployment of ES in the public arena, particularly in higher education.

Changing a company’s sales and production strategy from “make-to-order” to “make-to-stock” required a complete redesign of the planning system, which was an integral part of an ERP system based on SAP software. “ERP Implementation for Production Planning at EA Cakes Ltd.,” by Victor Portougal, describes the organization, its management practices, and the specific problems solved by the consulting team. Finally, it identifies the enhancements management obtained as a result of the ERP implementation.

“MACROS: Case Study of Knowledge Sharing System Development within New York State Government Agencies,” by Jing Zhang, Theresa A. Pardo, and Joseph Sarkis, reports on the development of a system that fosters knowledge sharing across divisions and levels of government in a New York State agency. It describes in detail the project management tools and models used at various stages to aid the analysis and development of the project. Finally, ongoing challenges and barriers are outlined.

“Adoption & Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India,” by Monideepa Tarafdar and Sanjiv D. Vaidya, describes IT adoption issues at two large public sector organizations in India. In addition to illustrating the significance of top management drive and end-user buy-in, it particularly highlights the role of middle management in managing the IT adoption process at different levels in these large organizations.

“IT-Business Strategic Alignment Maturity: A Case Study,” by Deb Sledgianowski and Jerry Luftman, describes the use of an assessment tool that can help promote long-term IT-business strategic alignment. The Strategic Alignment Maturity (SAM) assessment is used as a framework to demonstrate how an international specialty chemicals manufacturer improved its IT-business alignment practices to achieve its corporate goals. Major insights from this experience and SAM best practices are highlighted.

“Experiences from Using the CORAS Methodology to Analyze a Web Application,” by Folker den Braber, Arne Bjørn Mildal, Jone Nes, Ketil Stølen, and Fredrik Vraalsen, describes the process and results of a model-based risk analysis carried out on a web application for customers of a mobile phone company.
UML-inspired diagrams and models were used both to specify the input to the analysis and to express the results. The main diagrams and models used are explained.

Infosys Technologies Ltd. implemented a customer relationship management (CRM) system called CIMBA — Customer Information Management By All. “Infosys Technologies Limited: Unleashing CIMBA,” by Debabroto Chatterjee and Rick Watson, provides insights into the factors that triggered the need for developing such an integrated CRM solution and into how the company went about developing and launching the system. It also brings to light the various challenges associated with implementing a CRM system.

In the next case, “Development of KABISA: A Computer-Based Training Program for Clinical Diagnosis in Developing Countries,” by Jef Van den Ende, Stefano Laganà, Koenraad Blot, Zeno Bisoffi, Erwin Van den Enden, Louis Vermeulen, and Luc Kestens, the built-in tutor follows the student’s input with complex logical algorithms and mathematical computations, gives comments and support, and accepts the final diagnosis if sufficient evidence has been built up. Several problems arose during development. First, the evolution of the teaching of clinical logic is always ahead of the program, so regular updating of the computer logic is necessary. Second, the choice of MS Access as the development platform provoked stability problems, especially with the installation of the MS Access runtime. Third, and most importantly, scholars want proof of the added value of computer programs over classical teaching. Moreover, the concept of a pedagogical “game” is often regarded as childish. Finally, the planning and financing of an “open-ended” pedagogical project is questioned by decision makers, as is the case with all operational research.

In “Cross-Cultural Implementation of Information System,” by Wai K. Law and Karri Perez, an international service conglomerate recently developed a strategic information system to enhance its service delivery and strategic adaptation. A routine implementation of an information subsystem at a newly acquired subsidiary ended in shocking failure: cultural ignorance doomed an information system project delivered by a seasoned system development team.

PowerIT, an engineering company of about 200 staff, adopted an enterprise resource planning (ERP) system. After eighteen months, the performance of the system came under scrutiny, and the resulting investigation identified problems with the acquisition and implementation process. “Change Management of People & Technology in an ERP Implementation,” by Helen M. Edwards and Lynne P. Humphries, highlights the difficulties encountered in tailoring the ERP system to the existing business practices.

Every situation involving information technology differs from the next, providing IT managers and students alike with a wide range of examples of IT implementation successes and catastrophes. It is our hope that the cases included in this volume will assist IT researchers, professionals, policy makers, teachers, and students in their own IT adoption situations and studies. Your feedback and comments, as always, will be greatly appreciated.

Mehdi Khosrow-Pour, D.B.A.
Editor-in-Chief
Cases on Information Technology: Lessons Learned, Volume 7
An Experiential Case Study in IT Project Management Planning: The Petroleum Engineering Economics Evaluation Software Imperative Charles K. Davis, University of St. Thomas, USA
wells. Or, if one company wanted to sell an oil field to another, both would need a firm like Hopkins to determine a fair price for the property. With offices in Houston, Denver, Calgary, and Tulsa, Hopkins & Associates is actually a small firm of fewer than 1,000 people, about half of them engineers, but this is the norm for oil and gas consulting firms of this type. In fact, Hopkins is at this time the largest petroleum consulting company in the world and is very highly regarded for its integrity and the quality of its engineering work. The firm was founded by old Mr. Hopkins in the 1930s. As a young man with a newly minted engineering degree from the University of Tulsa, he participated in the founding of “petroleum engineering” as a professional discipline. In fact, some say he invented the idea of using engineering as a tool for understanding petroleum reservoirs. A man of great integrity and honesty, he was for several decades a central figure in the oil industry in Oklahoma, Texas, Venezuela, and the Middle East. In addition to the consulting arm of his company, he also owned several producing oil fields and a small but highly profitable gas pipeline. He also owned a drilling company that did exploration, wildcatting mostly in Texas.
supervisor manages the daily activities of a couple dozen engineers. Jack is one of the “old timers” who sees computers as a necessary evil and longs for the good old days when the slide rule was king. Still, he is adaptable and a very capable manager based out of the Tulsa office. Em was a college roommate of Hop at the University of Oklahoma. Em loves to tell the story, “When me and Hop were sophomores, we got drunk and somehow Hop drove his Cadillac convertible into the middle of the OU football stadium smashing things as he went. It was 3:00 AM on a cool, starry night in October. We lounged in the back seat drinking Wild Turkey out of the bottle until it was gone while talking about living an exciting life prospecting for oil. Then we staggered back to our dorm arm-in-arm and went to sleep. Unfortunately, Hop left his car in the middle of the stadium! We were expelled by the end of the next week!” (Upsetting the football establishment at OU is unwise. A former president of OU was once quoted as saying, “We try to maintain a university here that our football team can be proud of!”) That is how Hoppy became a proud graduate of the University of Texas. “Hook’em horns!”
The rapid analysis and use of information is a key innovation (Eppinger, 2001). Therefore, the engineers at Hopkins use a lot of engineering and geology software tools to model and analyze the characteristics of oil and gas reservoirs. They also use financial analysis software tools to forecast the economics for each individual oil or gas well being evaluated, using discounted cash flow analysis. Because of the importance of computers in the core work of the firm, the Board of Directors established a separate company, a wholly owned subsidiary of the engineering company called Hopkins Computer Services, Inc. (or HCSI for short), to better provide the computer services needed by the firm. The computer company operates a large distributed network with a modern computer center in Houston and numerous interconnected engineering workstations at the various Hopkins offices. HCSI employs nearly a hundred people (analysts, programmers, computer operators, network specialists, managers, and support staff). The President of HCSI is a brilliant young petroleum engineer from Louisiana named Ken Summers. Ken was an All-American golfer at LSU, which is a good skill to have in the golf-crazy Hopkins companies. He has Hop’s trust and complete support. Ken is a good manager and an oil man through and through, but he knows little about computers or software development. Nevertheless, he has a vision. He wants to automate the entire process of generating reservoir economics for an oil field. He calls it the Petroleum Reserves Economics System, or PREsys. Let’s look at what this might mean. Essentially, Ken is talking about building several generalized computer models and linking them together into a monster integrated decision support system for petroleum engineers (Arsanjani, 2002). Ken recently met with Hop, Larry, and the rest of the Board and summarized his ideas: “Y’all, I think we need to link together three fancy computer models into one overall model to do this.
Some of you know most of these ideas; some of you know some of them; and some of you are less familiar with the economics evaluation process. So, I’ll review the steps involved in doing an evaluation in terms of these three sets of activities.
The FIRST MODEL would be that for an individual reservoir under an oil field. This model would include information about the dimensions of the reservoir (from seismic data and the like). The idea is to develop a three-dimensional map of the reservoir, its geology and its chemistry. The model would include information about the porosity of the rock making up the reservoir and the type of capstone rock that overlays it. We also need to know how many wells have been punched through the capstone into the reservoir. Also important would be information about the chemical composition of the oil in the reservoir and the viscosity of that oil, as well as temperature and pressure readings associated with the wells. Other parameters would include information such as the presence of other fluids in the reservoir (such as salt water) and the amount of pressure exerted by any pumping at the wellheads. By using the Monte Carlo simulation technique for mathematical modeling and some sophisticated reservoir engineering analysis with these kinds of parameters, a model can be constructed to mimic the behavior of the well as oil & gas migrate through the reservoir to the perforations in the well casing and up to the surface. This kind of simulation should be able to predict the flow of petroleum products from the reservoir over time and it should be able to tell us when the well will cease to produce. Production at the wellhead is generally tracked and reported on a monthly basis and the simulation would generate monthly production estimates. This is the first model and it is the only part of this that actually requires complex engineering and geology.

The SECOND MODEL deals with forecasting economics for each individual well. By using discounted cash flow analysis, it is possible to estimate how much money a producing well will generate for its owners over the lifetime of that well. This analysis is generally done quarterly.
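[As a rough illustration for readers unfamiliar with the technique — this sketch is not part of the original case — the Monte Carlo production forecast described for the first model might look something like the following. The exponential decline model, the function name, and all parameter values are hypothetical stand-ins for the reservoir engineering the case alludes to.]

```python
import random

def simulate_monthly_production(initial_rate, decline_mean, decline_sd,
                                economic_limit, n_trials=1000, max_months=360):
    """Monte Carlo sketch of a single-well production forecast.

    Each trial draws a monthly decline factor from a normal distribution
    (a crude stand-in for the reservoir parameters the case lists:
    porosity, viscosity, pressure, and so on) and applies exponential
    decline until the rate falls below the economic limit. The result is
    the mean forecast, month by month, across all trials.
    """
    totals = [0.0] * max_months
    for _ in range(n_trials):
        rate = initial_rate
        for month in range(max_months):
            rate *= 1.0 - max(0.0, random.gauss(decline_mean, decline_sd))
            if rate < economic_limit:
                break  # the well ceases to produce in this trial
            totals[month] += rate
    return [t / n_trials for t in totals]

# Hypothetical well: 3,000 bbl/month initially, ~2% average monthly decline
random.seed(42)
forecast = simulate_monthly_production(3000.0, 0.02, 0.005, 50.0)
```

[The averaged trials give both the monthly production estimates the engineers track and, where the forecast falls to zero, an estimate of when the well will cease to produce.]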
The idea is to construct a spreadsheet to net the negative cash flows from the positive ones quarter by quarter over the life of an oil or gas well until its reserves are fully depleted. [The oil industry gets a special tax break based upon the fact that petroleum reserves are depleted over time. It is called the ‘oil & gas depletion allowance’ and is a minimum of 15% of gross income from production activities. The rationale for this tax break is that it helps the oil industry to drill for more oil, which is critical as existing wells are depleted over time. There has been some discussion of including a tax modeling component in PREsys, as well.] Generally, an oil company will lose money early when the costs of drilling are incurred and begin to make money later as the steady income of a producing well overcomes the earlier high expenses. To begin analyzing cash flows for a specific well, we need two basic sets of numbers:

• The amount of crude oil and/or natural gas that the well will produce by quarter over its lifetime. Initially, this will be a high number, gradually decreasing over time as the pressure in the well is dissipated in the reservoir below by the ongoing production. This is the set of estimates that comes from the first model.

• The price of crude oil and/or natural gas that will be in effect from quarter to quarter as the oil is brought to market.

The product of the amount of petroleum produced in a given quarter and the price at that time is the amount of money
and forecast revenues and profits automatically by well or by oil field for his reservoir engineers! What could be better?!!” Larry offered a word of caution. He had read that big software projects are risky and too often fail (De Meyer, Loch, & Pich, 2002; Keil & Montealegre, 2000; Matta & Ashkenas, 2003). He noted that “the credibility of Hopkins & Associates in the marketplace could be eroded badly if the PREsys computer software was not absolutely accurate and reliable”. Ken agreed, saying that he believed that accurate and reliable software could be created; and, if it could, it would provide a great competitive advantage for the firm. Tommy Smith, who was also at the Board meeting for a presentation, was excited. “You know,” he said, “oil wells change hands all the time. And sometimes, a client will use the same wells as collateral several times. We have folks coming to us to re-evaluate the same wells that we evaluated just a few years ago. We can just dust off the old evaluations, update them and resell them. That is really the most profitable part of our business! If we could electronically file and organize completed evaluations, indexing and cataloging them on the computer, we could make the process of updating and reselling old evaluations a lot easier to manage. It would also help us make sure that our evaluations of wells stayed consistent!” Hop was wide-eyed. He could see how this kind of system could really streamline the task of doing oil and gas reservoir economics analysis and increase his profit margins all across that part of the business. He ended the meeting on his way to give another interview to The Oil Journal by asking, “When can I have it and what will it cost?”
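[For concreteness — again purely as an illustration, not part of the case — the quarterly discounted cash flow analysis Ken describes for the second model might be sketched as follows. The discount rate, operating costs, and tax rate are assumed for the example; the 15% depletion allowance is the figure given in the case.]

```python
def well_npv(production, price, opex_per_bbl, drilling_cost,
             annual_discount_rate=0.10, depletion_rate=0.15):
    """Illustrative quarterly discounted cash flow for one well.

    production: quarterly volumes (bbl); price: quarterly prices ($/bbl).
    These are the two basic sets of numbers the case calls for: the volume
    forecast from the first model and a price forecast. The 15% oil & gas
    depletion allowance is modeled naively as a deduction against gross
    income, at an assumed 30% tax rate.
    """
    tax_rate = 0.30  # hypothetical
    quarterly_rate = (1.0 + annual_discount_rate) ** 0.25 - 1.0
    npv = -drilling_cost  # the early losses: drilling costs come first
    for quarter, (volume, p) in enumerate(zip(production, price), start=1):
        gross = volume * p
        operating = volume * opex_per_bbl
        taxable = max(0.0, gross - operating - depletion_rate * gross)
        net = gross - operating - tax_rate * taxable
        npv += net / (1.0 + quarterly_rate) ** quarter
    return round(npv, 2)

# Hypothetical well: declining production sold at a flat $70/bbl price
npv = well_npv([9000, 8500, 8000, 7600], [70.0] * 4, 12.0, 1_500_000)
```

[Netting the discounted quarterly cash flows against the up-front drilling cost is exactly the quarter-by-quarter spreadsheet logic the case describes.]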
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
An Experiential Case Study in IT Project Management Planning 9
a natural leader and had picked up a lot about reservoir engineering in her 18 years with Hopkins, but not being an engineer herself (or even degreed), she would have a hard time maintaining credibility with the engineering staff during requirements analysis (Howard, 2001). She may not even understand what they are talking about with some of the key down-hole engineering simulation formulas and mathematics that would govern the migration of petroleum through a reservoir. Another of Ken’s trusted lieutenants is Jerry Bye, who runs the applications development and maintenance functions for the computer company. Jerry is a great guy from Indianapolis, a fanatic fan of NASCAR and the Indianapolis 500, which he attended every year growing up as a kid. He is well schooled and experienced in software development (Kezsborn & Edward, 2001; Marchewka, 2003) and an expert in procedures for project cost estimation (Armour, 2002), but he too is not an engineer. Therefore, Ken is hesitant even about Jerry. He is sure that the PREsys development project will be expensive and will consume a lot of engineering time around the firm. In a company owned and operated by engineers, Ken is hesitant to put a non-engineer in charge of such a high visibility project. Hop will probably let him do it, but if the project turned out badly, it would be a politically difficult decision to justify to the rest of the engineers in the firm. There would be a lot of second-guessing, and Ken finally decides he is just not interested in taking that risk. Ken is also concerned that whoever he chooses to plan the project should be in a position to become Project Manager for the project once it is approved and funded by Hop and the Board. HCSI is a small company and all of his most capable people are critical (even indispensable) to the ongoing operations of HCSI in their current roles.
Ken reluctantly decides that no one on his current staff can take on this mission-critical project for Hopkins & Associates (Hayashi, 2004; Klein, Jiang, & Tesch, 2002). Fortunately, Ken has an “ace in the hole”. He has a friend from his days working for Shell Oil prior to his joining Hopkins. His friend, David M. Gardner, was educated as a petroleum engineer at SMU. He had been a crackerjack programmer at Shell for 10 years, where he learned to develop state-of-the-art software following generally accepted industry standards (Rada & Craparo, 2000). Gardner is now a computer consultant to the petroleum industry with his own firm, David M. Gardner & Associates, Inc., based out of Dallas. Ken calls David, who is on the next plane to Houston. David is shrewd. He agrees to develop the plan and serve as Project Manager for the PREsys project if and when it is approved by Hop and the Board. He agrees to do all of this for minimal pay provided that David M. Gardner & Associates shares equally in the rights to the PREsys system and with the stipulation that The Hopkins Companies and its subsidiaries would be barred from selling the PREsys software itself. Hopkins can utilize PREsys internally to support its reservoir engineering consulting business, but PREsys will be a software product of the tiny, struggling Gardner company. In addition, David requests a royalty of $7 per case from Hopkins for every well evaluation done by Hopkins and Associates that makes use of PREsys. Ken and Hop discuss this arrangement. Ken confides that he has great confidence in David even though his company is small and has few resources outside of David’s programming expertise. Hop, who really wants the PREsys software, agrees to the deal. The Board rubberstamps the contract and provides the initial funding. David rents an
apartment in the same block as the Hopkins Building in Greenbrier Plaza and moves to Houston for the duration. Ken and David meet at Ken’s office to begin planning the project. “Where do we start?” Ken asks David, smiling coyly but also deadly serious. The current problems facing this organization are clear. Oil and gas consulting is a highly competitive business. If H&A were to become technically obsolete, it would certainly lose its competitive edge. PREsys is a strategic, potentially mission-critical system. Hop, Larry, Ken, and David all understand this reality. Senior management sees a fundamental threat to the business if this technology is not developed soon. For them, there is a competitive imperative driving the demand for this system. So, whether it is realistic or not, Hop and the rest of senior management are pushing hard to get this new system developed and online in record time. However, software development is not really the strong suit of any of The Hopkins Companies. None of the executive management has engaged in such a project before, and the systems staff is inexperienced as well. It is questionable whether they have the collective expertise to oversee the development of PREsys. There is a lot riding on David’s expertise, his judgment and ethics, and his very small company. There is potentially a lot of risk in a project configured like this. The challenge is to develop a realistic project plan that will help mitigate the risk and give the developers a reasonable chance of building a first-rate system that will meet the needs identified while standing up to the time pressures effectively. This is a classic problem, one that is common today, and one that will always be with the information systems management community.
And the problem is this: how to balance the demand for rapid delivery of final products by executives with great organizational power against the critical need to make sure that everything is properly analyzed, designed, constructed, tested and implemented. There are two kinds of serious risks here, each diametrically opposed to the other. The first risk is that the system is not developed in time and The Hopkins Companies loses competitive position in the oil & gas consulting business. The second risk is that the company will rush to market with the PREsys software too soon, thinking that it is excellent, only to find out too late that there are serious flaws in the software that lead to inaccurate reserves economics forecasts and, eventually, major lawsuits. Fundamentally, the project planning for the PREsys software must balance these two risks in the face of real technical complexity and overly optimistic executive time pressures.
Armour, P. (2002). Ten unmyths of project estimation. Communications of the ACM, 45(11), 15-19.
Arsanjani, A. (2002). Developing and integrating enterprise components and services. Communications of the ACM, 45(10), 30-34.
De Meyer, A., Loch, C.H., & Pich, M.T. (2002). Managing project uncertainty: From variation to chaos. MIT Sloan Management Review, 43(2), 60-67.
Eppinger, S.D. (2001). Innovation at the speed of information. Harvard Business Review, 79(1), 149-158.
Gido, J., & Clements, J.P. (2003). Successful Project Management (2nd ed.). Mason, OH: South-Western, Thomson Learning.
Hayashi, A.M. (2004). Building better teams. MIT Sloan Management Review, 45(2), 5.
Howard, A. (2001). Software engineering project management. Communications of the ACM, 44(5), 23-25.
Keil, M., & Montealegre, R. (2000). Cutting your losses: Extricating your organization when a big project goes awry. MIT Sloan Management Review, 41(3), 55-69.
Kerzner, H. (2003). Project Management: A Systems Approach to Planning, Scheduling, and Controlling (8th ed.). New York: John Wiley & Sons.
Kezsborn, D.S., & Edward, K.A. (2001). The New Dynamic Project Management (2nd ed.). New York: John Wiley & Sons.
Klein, G., Jiang, J., & Tesch, D. (2002). Wanted: Project teams with a blend of IS professional orientations. Communications of the ACM, 45(6), 81-86.
Marchewka, J.T. (2003). Information Technology Project Management—Providing Measurable Organizational Value. Danvers, MA: John Wiley & Sons.
Matta, N.E., & Ashkenas, R.N. (2003). Why good projects fail anyway. Harvard Business Review, 81(9), 109-114.
Rada, R., & Craparo, J. (2000). Standardizing software projects. Communications of the ACM, 43(12), 21-24.
Schwalbe, K. (2002). Information Technology Project Management (2nd ed.). Boston, MA: Course Technology, Thomson Learning.
APPENDIX Role Playing For Interviews
Divide the class into five teams of interviewers to gather information about requirements. Each person in the class is either a role player or a lead interviewer for a team, and each person is on perhaps two interview teams. The following five individuals will be interviewed by one of the five teams of analysts. This exercise is done in an effort to begin the process of developing a project plan for the systems development effort in this case and to officially kick off the project:
1. Hop Hopkins, President of Hopkins & Associates
2. Rocky Ridge, Manager of Geophysical Analysis
3. Ken Summers, President of HCSI
4. Jerry Bye, Director of Applications Systems Development
5. Mary Nunn, Manager of Technical & User Support
Sample questions for each interview are listed below. Please be sure that you get at least the information indicated, plus any other information that you feel is relevant as well.
Sample Interview Questions for Hop Hopkins
• Tell me a bit about Hopkins & Associates and how you see the company in its marketplace.
• What do you hope to get out of this project short term and long term?
• What do you think would be a reasonable timeframe for this effort?
• What do you think are the major risks with this project?

Sample Interview Questions for Rocky Ridge
• What is involved in reservoir engineering modeling?
• Do you have time to help with the modeling?
• How long will it take to build a generalized reservoir software model?

Sample Interview Questions for Ken Summers
• What is your vision for the system?
• Why do you want to bring in someone from outside to lead the development work?
• How do you expect to forecast the price of oil and gas in the future?

Sample Interview Questions for Jerry Bye
• How do you see this system returning value to H&A?
• What kind of staffing for this project can H&A provide?
• What systems development tools do you support?
• What platforms do you think this system should run on?

Sample Interview Questions for Mary Nunn
• How is your staff organized and where are they located?
• How would you assess the computing capabilities of the engineers?
• What additional requirements would you need to support this new system?
Now, the following are facts and information about the persons to be interviewed. Each role player is given the fact sheet that pertains to his/her character. The fact sheets help to establish how each player is to play each part and include attitudes and facts that are to come out in the interviews. Only the individual playing each part should be given the fact sheet for that part.
About: Hop Hopkins

Assorted Facts
H&A generated $382 million in revenues last year. Hop’s mother is descended from an original western Pennsylvania oil family and views H&A as important because it bears the Hopkins name, but really as “small potatoes”. Hop is seldom at the office, which is huge with red silk-covered walls and Louis XIV furniture covered with gold-leaf, reminding one of certain houses of ill repute in New Orleans.
Primary Work Responsibilities
Hop is the President & Chairman of the Board of the holding company that owns all of the Hopkins Companies. “I want the Hopkins Companies to be there for our employees so that they can raise their families and be happy doing it.”
These companies are privately owned by his family, mostly by his mother. “Mom is a hard task-master. I am glad she lives in western Pennsylvania and we are not!” Hop is a curious mixture of figurehead for the firm and final authority for all decision-making in this collection of companies. “I like to let the companies operate for themselves. As long as they get their work done and make profits, I don’t like to get involved too much.” Hop almost always defers to the presidents of the various Hopkins Companies, especially to Larry Jordan. “Let’s see what Larry Jordan has to say about that before we make a final decision.”

Public Persona & Attitudes Toward Work
First and foremost, Hop is a flamboyant salesman with a big smile and a glad hand. Hop always introduces himself and shakes hands with the individuals that he encounters and projects his importance right away. “Hi! I’m Hop Hopkins. Good to meet you! You know, I bought a Lear Jet the other day and it is really nifty, and super convenient!” Hop is “old money” and very polished socially. “I’ve always loved theater and really enjoy Broadway plays. I studied DANCE in high school and college and danced in a Jazz Dance Troupe when I was a kid!” He believes in self-promotion and that, by promoting himself, he is promoting H&A. “I was in the Middle East last week and Venezuela the week before. One of those Saudi princes gave me his new Mercedes limo for only $25,000. He was tired of it. The thing is bullet-proof! It should be here by the end of the week.” He pays a great deal of attention to his appearance, having only the most exquisite personal grooming and elegantly tailored clothing, even for casual wear. “Back when I dressed in business suits all the time, the TV stations never gave me a moment’s notice. Then I started wearing bright yellow golf slacks and flashy sport shirts and now I’m on Houston TV as an oil industry expert almost every week!” Hop is not concerned with company problems much.
He has Jordan to handle those. He is a cheerleader for the firm and works to be optimistic. “We only hire the best people and we give them the best working environment and tools. Hopkins cares about its customers and its employees like family.”

Personal Agendas
All Hop really wants to do is play golf with a group of oil and entertainment industry tycoons. “Every year we sponsor a tent at the Doug Sanders Pro-Am Golf Tournament in Houston and hang out with Doug, and Willie and Waylon, and the boys!” Hop is insecure and believes he can never really live up to the legacy of his father. “I stood there watching them zip him up in the body bag after he died and wondered what am I going to do now?” He is a hedonist who is always looking for a new girlfriend. “Hey fellas, let’s go to the club. Or let’s send the Lear Jet over to Denver to get ‘the girls’ and bring them back here to the ranch.”
Hop has always had an interest in Rocky Ridge, but she has always wanted to keep work and pleasure separated. “Boy that Rocky is one hell of a woman.”
About: Rocky Ridge

Assorted Facts
Rocky’s group is independent from the rest of the firm. There are no other geologists or geophysicists in the Hopkins Companies outside of her group. She does independent studies for clients as well as projects for various Hopkins subsidiaries. Last year she engaged in mapping and analyzing the subsurface structures in over 80 oil fields in Texas and Oklahoma. Her most famous (and ultimately the most successful) job was an evaluation of the oil-bearing formations beneath Lake Maracaibo in Venezuela two years ago. People are still talking about that one!
Primary Work Responsibilities
Dr. Roxanne (Rocky) Ridge holds a PhD in Geophysical Sciences from Oklahoma State and has six years of experience in the petroleum industry, most of it at H&A. “Sometimes, I really love this business. It is so exciting when a well comes in!” Rocky runs a team of some 20 geophysicists, plus support personnel, that analyzes client seismic data to determine the most likely spots for successful drilling. “We use tools like a ‘black oil simulator’ that helps us to understand how petroleum will flow to the borehole and up to the wellhead at the surface. That tells us the rate of flow and how long a well is likely to produce.” Rocky is renowned for her ability to find oil or gas deposits that have been previously overlooked in older, supposedly depleted fields. “This three-dimensional modeling has made a huge difference in my ability to recognize reserves in old seismic data. Hop & Larry have equipped a state-of-the-art geophysics lab for us here in Tulsa, complete with the latest computers and modeling software.”

Public Persona & Attitudes Toward Work
Rocky is aloof and has a superiority complex about her ability to use new technology to identify reservoirs and find petroleum deposits. “This work involves a highly sophisticated understanding of geology and it is hard to explain in simple terms. We use complicated models to analyze geophysical data in three dimensions. I am afraid that it is too hard for you to understand, really. It takes years of training and experience to appreciate what we are doing here.” Rocky is an attractive woman in a position of power right in the middle of the “old boys” network that runs the oil business in Texas and Oklahoma. Most of the time she is overly serious and too severe, but she is not above flirting occasionally to gain influence. “Come on, Eddie [Dolan], you know we just have to drill in the Austin Chalk one more time. I showed you what we think is over there in the area around the Wayside-One well.
Come on, Eddie!”
Personal Agendas
Rocky was fired from her previous job and does not want to explain anything about her work to anyone that she does not control personally. She avoids giving out any information unnecessarily under the pretext that it is all too hard to understand without a PhD or years of experience in the geophysical field. “Like I told you before, this is just too complicated to explain without a lot of training in this field. Just let us take care of it. I’ll loan you one of my people to do the analytical work and you can do the programming part of it.” Rocky knows that Hop is interested in her, and she is flattered by his interest. But she is happily married to a veterinarian who was raised in College Station and she keeps Hop at a distance. “Hop is a great guy to work for. He has really supported our department with the latest gear and he has let us hire some really good people; and I really appreciate that.” Rocky is in her early thirties. She would actually like to retire from the oil business, maybe teach college, and begin a family, but she is just making too much money. “Sometimes, I think that if the pressure doesn’t let up, my husband, Bubba, and I will just drop out and go live on a beach somewhere and make babies for a few years!”
About: Ken Summers

Assorted Facts
HCSI has a budget of $11 million. The computer company as a separate unit makes huge profits, with profit margins regularly 65% or higher. But this is actually a “paper profit” because HCSI charges very high prices to H&A, absorbing a lot of H&A’s profits. Larry believes this is a good way to deflect customer complaints that Hopkins charges too much for services. “It’s those ‘gal darned’ computer costs!”
Primary Work Responsibilities
Ken is a young reservoir engineer, a protégé of Larry Jordan. Ken is a ‘super user’ type. “I really do not know much about computers, or systems development, but Hop wants me to make sure that HCSI and its products meet the needs of H&A’s engineering staff. And that is my role!” Ken is in charge of the computer company’s daily operations and its systems development activities. In particular, he intends to personally supervise the development of the new PREsys software. “The future of reservoir engineering and economics depends upon having software that can automate projections and make the individual engineer more efficient. Getting this right is VERY important!” Ken is responsible for hiring his friend, David Gardner, as a consultant to make sure that the PREsys project turns out all right. “David is a petroleum engineer AND a programmer! He was in computer systems support for the reservoir engineering group at Shell Oil in Dallas, and he really understands what we need here.”
Public Persona & Attitudes Toward Work
Ken believes that he is the ‘man of the hour,’ a sort of medieval warrior battling to provide a critical software tool for his engineering colleagues to use in their work. “A good manager can manage anything.” Ken is a kind, friendly, hardworking, and capable young man. He is barely thirty and possesses tremendous energy and dedication to work. “I was here late one Saturday afternoon last year. I was the only guy here and a FedEx delivery came. It was an envelope for Hop. I signed for it and left it on the corner of my desk overnight when I went home. Next morning, Hop showed me the contents of the envelope. It was a million dollar royalty check from one of his oil wells!” Ken was an All American golfer in college and he loves to use his golfing skills to build friendships at work. Hop adores him for his golfing. “Some weeks, I cannot get any work done because Hop keeps calling me to go golfing with him and some of our clients, or potential clients, and I have to go do it!”

Personal Agendas
In his heart, Ken only trusts engineers. He believes that other people, no matter how good, cannot measure up to the standards of the engineering profession. “The Professional Engineer (PE) designation is something that only the very best can aspire to. Engineering professionalism means that engineers are the best people to work with.” Ken wants to get out of the computer company as soon as possible and get back to the engineering work that he loves best. “I know the people in this company and I know what they need from this software. So Hop and Larry trust me to get the computer company working smoothly. But I certainly will be glad when this job is finally done.” More than anything, Ken wants to justify Larry’s trust in him. He gives ‘lip service’ to Hop, but he knows with certainty that Larry is THE man at Hopkins & Associates. “I think that loyalty is the most important trait that an employee can possess.
Larry is loyal to Hop and to the memory of Hop’s father. Without Larry, this company would have some serious difficulties.” Ken really does not respect Hop. He thinks Hop is an overgrown adolescent. “When we have the company Christmas & New Years parties, I like to stay close to my wife and kids. Then we go home before Hop starts getting wild.”
elevating the stature of his group (and of himself) in the opinions of senior H&A managers. He is very pleased by this prospect.
Primary Work Responsibilities
Gerald Bye runs the systems development function in the computer company. He is a rather dull, methodical man in his fifties. “I just want to do my job as well as I can and spend my evenings with my kids. I have two boys in their early teens. You know, little league and all that.” Jerry is experienced in the oil industry and he is well-grounded as a software development manager. He has both a solid technical education and extensive project experience. “I am confident that we can develop this system in-house if Hop wants to, but I also understand the reasoning behind getting a consultant with the engineering expertise to do the job.”

Public Persona & Attitudes Toward Work
Jerry is a classic fire-fighter. He is poor at anticipating what might go wrong and planning for contingencies. He is very good at managing emergencies as they happen. He drives his employees crazy more often than not. “Can you guys stay late tonight for a couple of hours? We had an outage on the simulator application today that affected the Tulsa office and we need to run some tests and do some outstanding maintenance. We need to have this problem fixed for Tulsa by tomorrow.” Jerry is convinced that he is underpaid. He has been reading ComputerWorld and has seen the latest salary statistics for the IT industry. His salary is $9,000 below the numbers quoted there and he is unhappy about it. “I wish Hop would do a study of the salaries here in the computer company and determine if we are keeping up with the market. If we fall too far behind, we will start to lose some of our technical people! And the best ones will be the first to go!”

Personal Agendas
Jerry is frustrated by the arrogance of the engineers, with the engineers subtly intimating that his performance would be better if he were an engineer. “They don’t think anyone can do a good job if he is not an engineer. It is a cultural thing; they just feel it in their bones.
I just want to ‘keep my head down’ and ride this job until retirement.” Jerry complains that his staff is not being allowed to tackle the PREsys project on its own, but secretly, he is delighted to have someone else assume the risk of this project. “I am looking forward to working with David and his programming team. We will certainly stand ready to give them any help that we can during this critical project.” Jerry discounts the idea that the profits of HCSI are a bookkeeping gimmick and he believes deeply that the staff at HCSI is not really given sufficient credit (or rewards) for the tremendous profit contribution that the computer company makes toward the Hopkins Companies’ bottom line. “These engineers just don’t understand how hard it is to keep all of these systems working. We make their lives a lot easier with our technology!”
About: Mary Nunn

Assorted Facts
Mary’s group handles about 35 calls per day during a typical workweek, with peaks usually happening on Mondays. The H&A engineers often work on weekends and problems accumulate over the weekends for Monday resolution. On Mondays, there are often fifty or sixty calls to be handled. Most calls deal with routine processing. For example, requests to expedite processing for a particular set of analyses or to find a set of historical cases and load them to the system are common. Some calls are reports of outages or processing errors that need further technical assistance and these must be referred to the proper support staff for resolution. New systems packages tend to cause a lot of commotion. But once the staffs in the engineering offices are trained, those kinds of questions are minimal.
Primary Work Responsibilities
Mary is in charge of technical and user support for the computer company. Her group of about ten support staff operates a help desk, mostly answering ‘how to’ questions about software packages or logging system problems reported by users for later resolution by the programming staff. “We are the first line of defense, so to speak. The new PREsys software will need to be supported by us and we are very interested in making it user friendly and intuitive for the engineers to use!” As a divorced mother of two grown children, Mary is, in effect, married to Hopkins Computer Services, Inc. “Some days, I arrive here at the office at 5:00 AM and I go around to the desks of my staff and leave notes for them about ideas that I have had the night before. I think they find it stimulating to come in and find a note from me!” The other function that is Mary’s responsibility is training for users of the applications packages that reside on the Hopkins servers. “I have a lot of expertise as a trainer. We are going to need PREsys training manuals, user guides, and presentation materials to use in rolling out this system for the engineers to use once the development is completed. We will have to work with the project team to assure that these items are available in a timely fashion.”

Public Persona & Attitudes Toward Work
Mary is defensive about not having a college education. “I worked for Jack Crocket in engineering at Hopkins in Tulsa for 18 years before coming to the computer company here in Houston. I was his number one assistant and I really learned a lot about working with engineers there! Sometimes good work experience is more valuable than a lot of impractical book-learning!” Mary would often rather stay in her office than head home each evening. There is plenty of work to do and she thrives on getting things done. “There are many of us who work late here.
The freeways are crowded during the rush hour so it makes no sense really to leave until rush hour is over. We get a lot of work done after hours when the phones stop ringing! Of course, Hop provides an open bar
An Experiential Case Study in IT Project Management Planning 19
for employees after 5:00 PM every day. So, everyone has a drink before heading home anyway.” •
Personal Agendas Mary is secretly in love with Jack Crocket and was crushed when Hop brought in Tommy Smith to be VP for Reservoir Engineering instead of promoting Jack to the job. “I think Jack Crocket is the best engineer in the company and the finest man I ever knew. If you need help with the specifications for PREsys, I suggest you talk to Jack. You cannot do any better than working with Jack on this. I can assure you!” Mary hungers for recognition and reward for her dedication and hard work for the firm. “I think we should do something special for the people who work the PREsys project to recognize their contributions as team members on this critical project.”
Jerry Bye, and Mary Nunn) have a set of materials (including secret personal goals and agendas) that are not known to the rest of the class (the interviewers). These materials are intended to inform both their content responses and their behaviors during the interviewing. Obviously, the quality of this case experience depends in large part on which students are chosen to play these key roles. The instructor should choose carefully in assigning these roles, generally selecting the more outgoing and academically stronger students who can memorize and deliver the necessary facts and perform the acting required. It is generally a lot of fun. The interviewers are required to meet in teams and develop questions for the interviewees. Those who play one of the key roles for one interview generally join an interviewing team for one of the other roles. Interview teams elect a leader who is responsible for creating the interview agenda. Each member of the class must carefully document the information that comes out of each interview. The role playing in this case involves five interviews and therefore fills a lot of classroom time. With students playing key roles, there is always the chance that an interview will fall flat or that a student will miss class. This could be a major problem, and the instructor will want to guard against it. There is a way to do so. It should be noted that Larry Jordan is not among those included in the interviewing process, even though he should be. The basic story is that Larry Jordan (who is the real power behind Hop’s throne) is out of town, in the Middle East working on a deal. On the night of Hop’s interview, the instructor comes to class in cowboy (or cowgirl) clothes, at least a big hat, and plays the role of Larry Jordan, who has returned unexpectedly from overseas travels and would like to sit in on the meeting.
(If the instructor is female, then the full name is “Larry Ann Jordan” and “Daddy wanted a boy!”) This ploy is a surprise to the class, loosens up the role players, and allows the instructor to inject new information into the interview and to control the first interview while the students figure out how it will work. Having established this expectation, the instructor can assume the identity of Larry Jordan at any time throughout the subsequent interviews if there are problems.
Information Systems in a Small Non-Profit Organization Susan J. Chinn, University of Southern Maine, USA Charlotte A. Pryor, University of Southern Maine, USA John J. Voyer, University of Southern Maine, USA
Non-profit organizations such as this Center rely to a large extent on individual donations; successful fundraising necessitates tracking the funds that are received from each donor (Klein, 2004). Information is also needed to evaluate the effectiveness of each fundraising event. Non-profits often use specialized fundraising software for this purpose (Davis, 2002). In addition, the organization’s accounting software must be set up so that donated funds or grants that are restricted in purpose can be carefully monitored to ensure that the funds are used only for their designated purposes (Gross, Larkin, & McCarthy, 2003). The Board of Directors typically wants expense information organized in the same way as the budget is prepared, generally by expense category such as salaries and rent, to ensure that the organization’s expenses do not exceed the board’s authorizations (Trussel, Greenlee, Brady, Colson, Goldband, & Morris, 2002). In addition, funding organizations such as United Way and government agencies, such as the IRS, require that expenses be reported by program type. Many non-profits thus have greater record-keeping requirements than do for-profit enterprises of comparable size (Cutt, Bragg, Balfour, Murray, & Tassie, 1996).
in each program. Dawn believed that they were not taking full advantage of Paradigm’s features. They liked QuickBooks, but they wanted to see if there were features in it that they could be using better, or if there was a way for the two packages to work together. This is a concern that many non-profits have when they attempt to use off-the-shelf software (Jones, 2000). Dawn noted that they were currently entering duplicate data in both systems. The second issue concerned reports that the Center was obligated to produce for the board of directors and United Way. As is true of many non-profits, the Center received revenue not only from hundreds of individual and corporate donations, but also from various funding sources, such as United Way. These fund providers need detailed information about how the money is spent (Bradley, Jansen, & Silverman, 2003). Non-profits also require accurate financial and program data in order to plan for future growth — an especially critical need for organizations like the Center with limited technological expertise (Smith, 2002). The third issue involved a database project that Dawn was starting. She wanted to design a family database that would allow the Center to track bereavement meetings and to generate statistics. The database would be useful to the Center, and it would also facilitate reporting to United Way. For example, United Way required demographic data that the Center staff had been unable to compile except manually (Cutt et al., 1996). Elizabeth indicated that the Center was growing and needed to prepare for the growth it was expecting. Tucker and Powell were offering their services pro bono, but they did ask Elizabeth if they should be mindful of any financial constraints when developing recommendations for purchasing software or equipment. Elizabeth said not to worry about it. Elizabeth also asked for confidentiality when viewing any information about their clients.
However, she and the rest of the staff agreed to be very open about sharing documents and other agency information with the university team. After their meeting, Tucker and Powell immediately drafted a letter to Elizabeth, which was, in essence, the consulting contract (Block, 2000). The agreement was to review the Center’s software, reports, and database design, and to evaluate alternative approaches to the integration and growth issues that had been discussed. The letter stated that the Center’s current environment would be assessed through interviews, document collection and review, and an analysis of its computer systems. The project, starting in July, was to conclude in the fall with a formal report of recommendations.
CASE DESCRIPTION Gathering Fundraising & Accounting Software Requirements
were many functions in Paradigm that were not being used, and that it was hard to figure out which fields would print out. She also had to resort to calculating many aggregate functions (e.g., counts, averages) manually. When the team asked if she consulted the user’s manual to help solve some of these problems, she remarked that the user’s manual was missing. Concerning vendor support, Vicki said she believed they paid a flat rate for a specific number of calls per year, and suggested asking Dawn for details. During a subsequent meeting with Dawn, Vicki presented the team with a Paradigm query she had tried to run without success. Vicki also said that she wanted an audit trail of who made each update to the data in Paradigm, to see who might be to “blame” for poor data entry. Later, back at the university, the team re-ran the query using test data on their office computer and found that it worked perfectly. During their next visit, they examined some of Paradigm’s features with Dawn and found an error log tool. The error log displayed error codes and SQL statements that appeared to fail because of data entry problems (e.g., attempts to put two values into a single-valued field). Dawn learned that she could contact the vendor to acquire SQL scripts that could be run to “kick out” problem data. Dawn had never previously looked at the error log. Tucker and Powell asked Dawn for more information about Paradigm documentation and user training. First, the team inquired about the missing Paradigm manual; Dawn said it was available, and that Vicki would often say that something was missing when, in fact, it was not. Dawn loaned the team the manual and the CD-ROM updates. The CD had updated pages that were supposed to be printed out and inserted into the manual; although the pages had been printed out, Dawn had not had time to insert them.
It was later discovered that the CD actually had the complete manual on it, but that several computers (including Vicki’s) lacked CD-ROM drives. The team then asked about the Paradigm training Dawn had received. Training is often a major component in successful technology adoption, but it is often shortchanged in a resource-constrained non-profit organization (Barrett & Greene, 2001; Hecht & Ramsey, 2002; Light, 2002). Staff personnel need to be cross-trained in the event that one staff member should leave. The staff also needs ongoing training to take advantage of upgrades in software, hardware, networking, etc. (Smith, Bucklin & Associates, Inc., 2000). Both Dawn and Alicia had taken a training course. The training emphasized building queries that served as the basis for generating reports. Dawn told the team that she could call the vendor for unlimited support and that all of the updates came bundled with the support contract.
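The error log Dawn discovered illustrates a common pattern: the database rejects bad entries (such as two values forced into a single-valued field) and records the failing statement for later cleanup. The sketch below reproduces that pattern in Python with SQLite; the table, column names, and log format are illustrative assumptions, not Paradigm’s actual design.

```python
# Hypothetical sketch of the data-entry failures Paradigm's error log
# recorded: a constraint rejects two values jammed into a single-valued
# field, and the failure is written to a simple error log.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE donor (
        donor_id  INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        -- single-valued field: exactly one campaign code, no comma lists
        campaign  TEXT NOT NULL CHECK (campaign NOT LIKE '%,%')
    )
""")

error_log = []  # stands in for Paradigm's error-log tool

def insert_donor(donor_id, name, campaign):
    """Insert a donor row; log the failure instead of crashing."""
    try:
        conn.execute(
            "INSERT INTO donor (donor_id, name, campaign) VALUES (?, ?, ?)",
            (donor_id, name, campaign),
        )
        return True
    except sqlite3.IntegrityError as exc:
        error_log.append({"donor_id": donor_id, "error": str(exc)})
        return False

insert_donor(1, "Susie Smith", "AA01")        # clean row: accepted
insert_donor(2, "Mr. Thompson", "AA01,CS01")  # two values in one field: rejected

# A cleanup script (like the vendor's "kick out" scripts) can then review
# the log alongside the rows that survived.
rows = conn.execute("SELECT name FROM donor").fetchall()
```

Reviewing such a log periodically, as Dawn had not been doing, turns silent data-entry problems into an actionable work list.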
to be distinguished from program expenses (see Appendix E for a sample United Way report). The Center wanted to set up the accounts in QuickBooks so that the data could be retrieved more easily by both functional classification and program. In addition, Elizabeth explained that increasing reliance on grants meant that in the near future, the Center would need to start tracking the use of restricted purpose funds across fiscal years. There was no one on the staff who knew how to do this in QuickBooks.
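The dual reporting requirement Elizabeth described — the board’s view by functional expense category and the funders’ view by program — amounts to tagging every expense with both classifications, so that one set of entries can feed both reports. A minimal sketch, with made-up account and program names:

```python
# Illustrative sketch of dual expense classification: each entry carries
# both a functional category (the board's budget view) and a program
# (the United Way / IRS view). Names and amounts are invented.
from collections import defaultdict

expenses = [
    {"category": "Salaries and Wages", "program": "Bereavement Groups", "amount": 1200.00},
    {"category": "Salaries and Wages", "program": "Community Outreach",  "amount": 800.00},
    {"category": "Occupancy",          "program": "Bereavement Groups", "amount": 300.00},
]

def totals_by(key):
    """Sum expenses along one classification axis."""
    totals = defaultdict(float)
    for e in expenses:
        totals[e[key]] += e["amount"]
    return dict(totals)

by_category = totals_by("category")  # board/budget report
by_program  = totals_by("program")   # funder/program report
```

Because both reports draw on the same entries, the two views always reconcile to the same grand total — the property the Center was losing by keeping duplicate data in two systems.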
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Given the number of technology issues they found, Tucker and Powell were impressed with the way the Center’s staff managed to keep their operations running as smoothly as they did. They were further impressed when they saw the hardware the Center was using. The file server was a donated PC that was rapidly running out of disk space. During the team’s visits, Dawn was able to acquire another PC that she hooked up to the first one as a cluster. Most of the computer equipment in the Center had been donated; in fact, one of the team members wound up donating her five-year-old computer, and it was gratefully accepted. They were particularly impressed with the staff, especially Dawn. Not only was Dawn going to school and working at the Center part-time, she was also heavily involved (as were the rest of the staff) in fundraising activities. She even served as a bereavement support facilitator. As Tucker and Powell studied the Center, they kept thinking about what they would put into their final report of recommendations, and what kinds of reactions they would receive. The Center, as an organization with a small staff and limited resources, was acting as an “adopter” of technology; that is, the type of organization that is simply operating in a survival mode and “making do” with existing technology (Fried, 1995). A major challenge facing many consultants working with non-profits is that they encounter a culture where non-profits shortchange themselves on technological resources, so that they can devote most of their resources to the primary mission (McCarthy, 2003). Tucker and Powell also knew that some of the Center’s staff was aware that technology might be better harnessed to support their mission activities, but the staff did not have a plan in place for realizing this goal. Some of those technology goals would mean that the Center would experience radical changes in its business processes (Amis, Slack, & Hinings, 2004). Was the Center ready for change? 
The team also reflected on how they might evaluate their own efforts as consultants and as change agents (Bergholz, 1999). They had tried to function in the role of facilitator, which involved empowering clients to “own” changes in technology use and to mobilize and support client initiatives (Winston, 1999); however, they wondered if instead their role had been more of a traditional one, or that of an advocator. In the traditional role, consultants functioning as change agents focus on the implementation of technology, while advocators create a business vision and pro-actively function as champions for change (Winston, 1999). Was their position as facilitator a good match for a client in the adoptive mode? Had they been effective consultants to the Algos Center to date? How could they be useful to the Center in the future?
Block, P. (2000). Flawless consulting. San Francisco: Jossey-Bass.
Bradley, B., Jansen, P., & Silverman, L. (2003). The nonprofit sector’s $100 billion opportunity. Harvard Business Review, 81(5), 3-11.
Burt, E., & Taylor, J. A. (2000). Information and communication technologies: Reshaping voluntary organizations? Nonprofit Management and Leadership, 11(2), 131-143.
Chappell, B. J. (2001). My journey to the Dougy Center. Amityville, NY: Baywood Publishing.
Coates, N. (1997). A model for consulting to help effect change in organizations. Nonprofit Management and Leadership, 8(2), 157-169.
Cornforth, C. (Ed.). (2003). The governance of public and non-profit organizations: What do boards do? London: Routledge.
Cutt, J., Bragg, D., Balfour, K., Murray, V., & Tassie, W. (1996). Nonprofits accommodate the information demands of public and private funders. Nonprofit Management and Leadership, 7(1), 45-68.
Davis, C. N. (2002). Look sharp, feel sharp, be sharp and listen — Anecdotal excellence: People, places and things. International Journal of Nonprofit & Voluntary Sector Marketing, 7(4), 393-400.
Fried, L. (1995). Managing information technology in turbulent times. New York: John Wiley & Sons.
Gross, M. J., Larkin, R. F., & McCarthy, J. H. (2003). Financial and accounting guide for not-for-profit organizations (6th ed.). New York: John Wiley & Sons.
Hecht, B., & Ramsey, R. (2002). ManagingNonprofits.org. New York: Wiley.
Jones, R. A. (2000). Sizing up NPO software. Journal of Accountancy, 190(5), 28-44.
Klein, K. (2004). Fundraising in times of crisis. San Francisco: Jossey-Bass.
Light, P. C. (2002). Pathways to nonprofit excellence. Washington, DC: Brookings Institution Press.
McCarthy, E. (2003, August 18). A confluence of technology and philanthropy. The Washington Post, p. E5.
Rubin, S., & Witztum, E. (Eds.). (2000). Traumatic and non-traumatic loss and bereavement: Clinical theory and practice. Madison, CT: Psychosocial Press/International Universities Press.
Smith, S. R. (2002). Social services. In L. M. Salamon (Ed.), The state of nonprofit America (pp. 149-186). Washington, DC: Brookings Institution Press.
Smith, Bucklin & Associates, Inc. (2000). The complete guide to nonprofit management (2nd ed.). New York: Wiley.
Stroebe, M. S., Stroebe, W., & Hansson, R. O. (Eds.). (2002). Handbook of bereavement. Cambridge: Cambridge University Press.
Tribunella, T., Warner, P. D., & Smith, L. M. (2002). Designing relational database systems. CPA Journal, 72(7), 69-72.
Trussel, J., Greenlee, J. S., Brady, T., Colson, R. H., Goldband, M., & Morris, T. W. (2002). Predicting financial vulnerability in charitable organizations. CPA Journal, 72(6), 66-69.
Winston, E. R. (1999). IS consultants and the change agent role. Computer Personnel, 20(4), 55-73.
APPENDIX C
Sample Paradigm Report
Donors who gave amounts (gifts or pledge payments) greater than or equal to $50, between 7/1/00 and 6/30/02 (for the 2001/2002 Annual Appeal, AA01, or Core Support 01 campaigns)

Donor Name            Address            Gift/Pledge Payment   Amount   Date        Campaign
                      678 Long Lane      Pledge Payment        $200     3/13/2002   Ann. Appeal 01/02
Susie Smith           1040 Taylor Road   Gift                           4/20/01     AA01
Mr. & Mrs. Thompson   123 Main Street    Gift                           11/20/00    Ann. Appeal 01/02
Mr. & Mrs. Thompson   124 Main Street    Pledge Payment                 5/4/01      Core Support 01
APPENDIX D
Sample Expense Report, Jul-03

Expense
60000 - Personnel
  60100 - Salaries and Wages
  60105 - FICA
  60115 - SUTA
  60125 - Health Insurance
  60130 - Simple IRA Contribution
  60140 - Other Insurances
Total 60000 - Personnel
  60390 - Other Office & Admin
Total 60300 - Office & Administration
APPENDIX E
United Way Report
FISCAL YEAR _______  EXPENSES FOR CURRENT YEAR ALLOCATION

PROGRAM SERVICES (columns 4, 5, 6)

22. Salaries
23. Employee benefits
24. Payroll Taxes, etc.
25. Professional Fees
26. Supplies
27. Telephone
28. Postage and Shipping
29. Occupancy
30. Rental/Maintenance of Equipment
31. Printing and Publications
32. Travel, Conferences
33. Miscellaneous (include Insurance here)
34. Payments to Affiliated Organizations
35. Depreciation*
36. Major Equipment/Mortgage*
37. SUBTOTAL EXPENSES
38. Less expenses for activities financed by:
    Restricted Revenue
    Depreciation Expenses*
    Major Equipment/Mortgage*
39. TOTAL EXPENSES FOR ACTIVITIES FINANCED BY UNRESTRICTED FUNDS (Line 37 minus Line 38)
40. DEFICIT/SURPLUS (Line 21, page 6, minus Line 39)

(All amounts in this sample are shown as $0.00.)

*United Way does not fund these costs. However, for financial reporting purposes, please include them on this line, then remove them in Line 38.
APPENDIX F
Dawn’s Data Model: Tables and Relationships

Field Breakdown: Caregiver Table
Family Id
Primary Affiliation - combo box
Adult 1 Last Name
Adult 1 First Name
Adult 1 Telephone
Adult 1 Employer & Title
Adult 1 Gender
Adult 1 Nationality
Adult 1 Date of Birth
Adult 2 Last Name
Adult 2 First Name
Adult 2 Telephone
Adult 2 Employer & Title
Adult 2 Gender
Adult 2 Nationality
Adult 2 Date of Birth
Address
City
State
Zip Code
Email Address
Intaker 1 - links to Volunteer Id
Intaker 2 - links to Volunteer Id
Intake Date
Appropriate for Groups - Yes/No
If Yes, which group - combo box
If No, reason why - combo box
Referral Source
Currently Attending - Yes/No
Start Date 1
End Date 1
Start Date 2
End Date 2
Current Night of Service
Current Co-Facilitator 1 - links to Volunteer Id
Current Co-Facilitator 2 - links to Volunteer Id
Previous Night of Service
Previous Co-Facilitator 1 - links to Volunteer Id
Previous Co-Facilitator 2 - links to Volunteer Id
Name of Deceased
Relationship to Deceased
Cause of Death
Date of Death
Date of First Call - could link to telephone log
Wait List - Yes/No
Policies given - Yes/No
Mailings to receive - combo box
Relation to children attending - combo box
APPENDIX F (cont.)
Field Breakdown continued: Children Table
Family Id - links to caregivers
Primary Affiliation - combo box
Last Name
First Name
Intaker 1 - links to Volunteer Id
Intaker 2 - links to Volunteer Id
Intake Date
Appropriate for Groups - Yes/No
If Yes, which group - combo box
If No, reason why - combo box
Currently attending - Yes/No
Start Date 1
End Date 1
Start Date 2
End Date 2
Current Night of Service
Current Co-Facilitator 1 - links to Volunteer Id
Current Co-Facilitator 2 - links to Volunteer Id
Previous Night of Service
Previous Co-Facilitator 1 - links to Volunteer Id
Previous Co-Facilitator 2 - links to Volunteer Id
Name of Deceased
Relationship to Deceased
Cause of Death
Date of Death
Volunteer Table
Volunteer Id - links to group members
Primary Affiliations - combo box
Volunteer Preference
Other Interests - combo box
Training Date
Current Night of Service - combo box
Current Co-Facilitator - links to Volunteer Id
Current Age Group
Start Date 1
End Date 1
Previous Night of Service/Volunteering 1
Previous Co-Facilitator 1 - links to Volunteer Id
Start Date 2
End Date 2
Previous Night of Service/Volunteering 2
Current Co-Facilitator 2 - links to Volunteer Id
Start Date 3
End Date 3
Date of Birth
Gender
Nationality
Employer & Title
Referral Source
In-Services Attended - text area

Telephone Log Table
Last Name
First Name
Relation to Grieving Child
Business/Organization Name
Address
City
State
Zip Code
Telephone 1
Telephone 2
Email Address
Type of Caller
Type of Service Provider
Referral Source
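One way Dawn’s field breakdown might normalize into a relational design is sketched below: caregivers and children share a Family Id, and the repeated “links to Volunteer Id” fields become foreign keys into a single volunteer table. The table and column names follow the appendix loosely; everything else is an illustrative assumption, not the Center’s actual database.

```python
# Hedged sketch of a normalized schema for Dawn's family database.
# A shared family table removes the duplication between the caregiver
# and children field lists; volunteer links become foreign keys.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE volunteer (
        volunteer_id INTEGER PRIMARY KEY,
        last_name    TEXT,
        first_name   TEXT
    );
    CREATE TABLE family (
        family_id    INTEGER PRIMARY KEY,
        intake_date  TEXT,
        intaker_1    INTEGER REFERENCES volunteer(volunteer_id),
        intaker_2    INTEGER REFERENCES volunteer(volunteer_id)
    );
    CREATE TABLE caregiver (
        caregiver_id INTEGER PRIMARY KEY,
        family_id    INTEGER NOT NULL REFERENCES family(family_id),
        last_name    TEXT,
        first_name   TEXT
    );
    CREATE TABLE child (
        child_id     INTEGER PRIMARY KEY,
        family_id    INTEGER NOT NULL REFERENCES family(family_id),
        last_name    TEXT,
        first_name   TEXT
    );
""")

# Example data: one family with one caregiver, one child, one intaker.
conn.execute("INSERT INTO volunteer VALUES (1, 'Jones', 'Pat')")
conn.execute("INSERT INTO family VALUES (10, '2003-07-01', 1, NULL)")
conn.execute("INSERT INTO caregiver VALUES (100, 10, 'Smith', 'Susie')")
conn.execute("INSERT INTO child VALUES (200, 10, 'Smith', 'Alex')")

# The United Way demographics become a join instead of a manual tally.
count = conn.execute(
    "SELECT COUNT(*) FROM child JOIN family USING (family_id)"
).fetchone()[0]
```

The design choice here — a separate family table rather than Adult 1/Adult 2 columns — is exactly the kind of normalization question the university team would have weighed when reviewing Dawn’s design.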
APPENDIX G
Center Phone Log

PHONE LOG
Date______________________
Time______________________

Verbal Program Info.
Referral to our program
Referral to other center
Referral to other resource

Family
Agency/School
Individual
Volunteer
Other Center
Student
Other
Social Construction of Information Technology Supporting Work Isabel Ramos, Universidade do Minho, Portugal Daniel M. Berry, University of Waterloo, Canada
At the beginning of 1999, the CIO of a Portuguese company in the automobile industry was debating with himself whether to abandon or to continue supporting the MIS his company had been using for years. This MIS had been supporting the company’s production processes and the procurement of resources for those processes. However, despite the fact that the MIS had been deployed under the CIO’s tight control, the CIO felt strong opposition to its use, opposition that was preventing the MIS from being used to its full potential. Moreover, the CIO was at a loss as to how to ensure greater compliance with his control and fuller use of the MIS. Therefore, he decided that he needed someone external to the company to help him understand the fundamental reasons, technical, social, or cultural, for the opposition to the MIS.
THEORETICAL BASIS FOR THE STUDY
Innovative, organization-transforming software systems are introduced with the laudable goals of improving organizational efficiency and effectiveness, reducing costs, improving individual and group performance, and even enabling individuals to work to their potentials. However, it is very difficult to get these software systems to be used successfully and effectively (Lyytinen et al., 1998; Bergman et al., 2002). Some people in some organizations resist the changes. They resist using the systems, misuse them, or reject them. As a result, the goals are not achieved, intended changes are poorly implemented, and development budgets and schedules are not respected. Misguided decisions and evaluations and less than rational behaviour are often offered as the causes of these problems (Norman, 2002; Dhillon, 2004). Bergman, King, and Lyytinen (2002) observe (p. 168), “Indeed, policymakers will tend to see all problems as political, while engineers will tend to see the same problems as technical. Those on the policy side cannot see the technical implications of unresolved political issues, and those on the technical side are unaware that the political ecology is creating serious problems that will show up in the functional ecology.” They go on to say (p. 169), “We believe that one source of opposition to explicit engagement of the political side of RE [Requirements Engineering] is the sense that politics is somehow in opposition to rationality. This is a misconception of the nature and role of politics. Political action embodies a vital form of rationality that is required to reach socially important decisions in conditions of incomplete information about the relationship between actions and outcomes.” The implementation of complex systems, such as enterprise resource planning (ERP) systems, is rarely preceded by considerations about:

•	the system’s degradation of the quality of the employees’ work life, by reducing job security and by increasing stress and uncertainty in pursuing task and career interests (Parker & Wall, 1998, pp. 55-70; Davidson & Martinsons, 2002; Thatcher & Perrewé, 2002);
•	the system’s impact on the informal communication that is responsible for friendship, trust, feeling of belonging, and self-respect (Goguen, 1994; Snizek, 1995; Piccoli & Ives, 2003);
•	the power imbalances the system will cause (Bergman et al., 2002; Dhillon, 2004); and
•	the employees’ loss of work and life meaning, which leads to depression and turnover (Parker & Wall, 1998, pp. 41-49; Bennett et al., 2003; Davison, 2002).
Recent work by Marina Krumbholz, Neil Maiden, et al. (2000) considers some of these issues after implementation of ERP systems. Specifically, this work investigates the impact on user acceptance of ERP-induced organizational transformation that results from a mismatch between the ERP system’s actual and perceived functionalities and the users’ requirements, including those motivated by their values and beliefs (Krumbholz et al., 2000; Krumbholz & Maiden, 2001). This case study describes an on-site examination of one particular ERP-induced organization transformation. The prime champion of the ERP system in one company was surprised by the resistance to the system’s use shown by the employees of the company.
He ended up asking the first author of this case study for help in understanding the sources of this resistance and what to do about it. The present report is a distillation of the first author’s final report to the champion and of her PhD dissertation (Ramos, 2000). The focus of the study is on understanding the technological, social, and cultural reasons for the employees’ resistance to the ERP.
When Pedro first talked with Isabel, he told her about an important movement within ENGINECOMP against the use of MaPPAR to support production management. MaPPAR was very versatile and included modules that could support engineering tasks, order processing, production planning, assessment of production capabilities, production management, product shipment, management of stocks, accounts payable to suppliers, accounts receivable from clients, and finances and accounts management. While MaPPAR was a very complete and powerful system, it was very poorly used. Employees ignored or resisted using much of MaPPAR’s functionality, preferring to develop their own small systems and databases to manage only the information relevant to their daily tasks, and used the central MaPPAR database only as the source of data to feed their own small systems and databases. Moreover, the employees of the plant were refusing to input timely data about the tasks they performed. They preferred to defer their inputting until the end of the week or the month, so they would not lose time during the day. Sometimes, one of the employees was freed by his colleagues from his normal duties in order to input his and his colleagues’ data. The problem with this practice was that it became virtually impossible to track down the state of the orders in production. Pedro and Isabel agreed that Isabel would do an on-site study of ENGINECOMP at work, in search of the fundamental reasons, technical, social, or cultural, for the opposition to MaPPAR and for the proliferation of small systems and databases. Isabel would study the work of two departments in ENGINECOMP: the Finance Department and the Logistics Department. They were the most influential departments of the company, since they performed activities essential to the company. The Finance Department did financial management, and the Logistics Department did customer service and production planning.
Moreover, the employees of these two departments constituted a majority of MaPPAR’s users.
Isabel spent five months at ENGINECOMP, observing and interviewing all the employees of the Finance and Logistics departments. Almost every day, Isabel spent several hours joining or observing employees performing their tasks with or without support from MaPPAR. She interviewed Carlos and Manuel, the directors of the departments. She also interviewed several middle managers responsible for key activities, including Fernando, Carlos’s closest collaborator in the Finance Department, and Roberto, Eduardo, and António, the managers of the Customer Service, Production Planning, and Purchasing divisions of the Logistics Department. Isabel also observed and interviewed Pedro and his collaborators in the IS Department. She also talked a few times with Fritz, the German leader, and was present at events such as meetings and training programs. To learn more about MaPPAR, Isabel consulted the available manuals and technical documentation. She used a demonstration version of MaPPAR to test some of its functionality on her own. The following is a department-by-department summary of Isabel’s observations.
The Finance Department was responsible for all financial and accounting tasks of the company. Carlos, the director of the Finance Department, had sole directorial responsibility for the department. However, he delegated supervision of the accounting tasks to Fernando, a trusted employee with an accounting degree. Fernando had access to key information and knowledge for performing these accounting tasks. Fernando’s access and knowledge, charisma, and core skills made him a privileged ally of Carlos, the director. As mentioned, the coordination and control of the Finance Department activities were responsibilities of Carlos, the director, who kept and centralized all decision making. Carlos was a Brazilian sent to ENGINECOMP by the Brazilian company’s administration. Fritz, the new German leader, was not very comfortable with Carlos’s complete control of the Finance Department. In the Finance Department, informal communication about work tasks was discouraged. Carlos’s rule was that all communication must be well documented. The work in the department was organized into well-defined tasks connected by clearly defined processes, all of which determined the precise responsibilities of each employee. Employees only occasionally received professional training. The heavy workload left little time for such training. Moreover, Carlos believed that each employee must perform simple and repetitive tasks that can be learned by doing. The tasks were distributed among eight employees, each with limited autonomy. Carlos had his trusted assistant Fernando, with the accounting degree, supervise the daily routine. Fernando was seen as a bright and ambitious young man. He was expected more to comply with rules and procedures than to make autonomous decisions. Fernando worked hard to serve Carlos’s interests, taking advantage of what he had learned in courses leading to his degree to further those interests. 
Fernando’s actions and information were especially useful to Carlos, who, as a Brazilian, was not as familiar with Portuguese accounting regulations as a Finance Department director should be.

It is I who signs the accounts, the financial reports, and the balance sheets. It is I who signs the fiscal statements. I am the officer responsible for the company’s accounting. My accounting degree and my understanding of Portuguese law allow me to give invaluable assistance to the director. — Fernando

With the help of Fernando, Carlos asserted the strong leadership that he believed was the guarantor of efficiency and motivation. Compliance with Carlos’s leadership was the main criterion by which he recruited employees for the Finance Department. Interviewed Finance Department employees mentioned some competition for career advancement. Finance Department employees were rewarded for accurate completion of assigned tasks; failure to comply with rules and procedures was punished. There was a predominant belief among the employees that autonomous and creative employees were dangerous in a finance department, a belief supported by past events during which fraud was perpetrated. This belief was one of the most important sources of the motivation to use MaPPAR, which was seen as reinforcing established practices:
Social Construction of Information Technology Supporting Work 43
trainee employees through only the functionality actually being used. The trainers usually advised trainees to avoid other MaPPAR functionality, saying that it would be too complex, inadequate, or better implemented in the programmed spreadsheet.

All that I know about the system is the result of my efforts to use it. But we have too many tasks to perform. There is no time left to explore the system, which has too many complexities. And I say to my colleagues that they should not waste too much time experimenting if they are not sure of the usefulness of a menu's options. — Fernando

The Finance Department employees saw this in-house training as a burden. Moreover, Carlos did not encourage his employees to experiment with MaPPAR themselves. The department's history included some catastrophic mistakes, and its rumor mill still propagated stories about the stupid mistake that Employee A or Employee B had made. To many Finance Department employees, the spreadsheet appeared much easier and less constraining than MaPPAR. All requests for new or enhanced MaPPAR features were sent to the IS Department for implementation. However, the IS Department often denied a request because of the high cost of the new or enhanced feature: it did not want to have to reapply the same changes to every future version of MaPPAR. When a required feature was refused by Pedro, the director of the IS Department, the requesters could easily find implementers among colleagues who had accepted the informal responsibility of programming the spreadsheets and databases. In general, Finance Department employees acknowledged the importance of MaPPAR for their department, since MaPPAR's integrated modules automatically fed all the accounts. The institutional authority of the IS Department to define MaPPAR's IT support was well accepted.
The Logistics Department office space was designed as an open space into which any ENGINECOMP employee could enter to seek a problem solution or to demand a service. The Purchasing Division and Production Planning Division managers often complained about the deleterious effects and the pressure caused by the constant interruptions that the open space invited:

There is always pressure from the exterior of the department. This is a drawback of the client-supplier policy that was adopted internally, this pressure to be continuously available in open space. We are constantly being interrupted! It is not possible to focus our attention on a single subject for long and to follow logical and correct reasoning. — António

Adding to this pressure were the variable and often unexpected requests and problems posed by suppliers, clients, and ENGINECOMP's production plant. For example, as mentioned, the quantities ordered by a customer could be changed, within an agreed-upon percentage, after the order was placed and production had started. Logistics Department employees saw flexibility of work practices and procedures, autonomy of decision, and informal communication channels and work relations as key factors in reducing the negative impacts of these sudden changes. Moreover, they saw MaPPAR as reducing their flexibility of action and making their work less interesting. Furthermore, MaPPAR was forcing them to comply with rules and procedures that reduced their ability to fulfill the needs of ENGINECOMP's plant and customers. Some of these employees referred to MaPPAR as a "sacred cow" that could not be questioned and that made them "slaves," requiring most of their time to input data without "receiving anything in return." They complained also of the poor quality of the MaPPAR reports and had become highly suspicious of the data stored in the central database.

Because people won't be wasting their time to explore this and that. Why should they?
When I want a report and this [MaPPAR] does not give me what I want ...! If only they [IS Department] made changes to the system! Meanwhile, someone [from the IS Department] decides not to make them. It is frustrating. — Eduardo

As in the Finance Department, the reports needed to support Logistics Department decisions were created in spreadsheets, using data obtained by querying the central MaPPAR database. This practice fostered a disconnection between the inputting of data and the production of reports to support decision making. The Logistics Department employees saw MaPPAR as too complex and too general to effectively support the details of the department's activities. They agreed with the Finance Department employees about the lack of training in MaPPAR use and the burden of having to train new employees in it.

I find the system too confusing. It is a tool, since it is a standard in a lot of business areas. I think it does not really help the specific tasks of a specific company in a small country like ours. — António
The Logistics Department employees also decried the IS Department's lack of support and understanding. In response, the Logistics Department developed what Roberto believed was a very effective strategy:

I have been doing [developing any needed functionality] by myself. When they [the IS department] do not want to make the [required] changes to the system, I develop the feature using the spreadsheets and databases. Nowadays, I do not even ask; I do by myself and help others with the programming. — Roberto

And he added:

The lack of support from the IS department is creating disinterest in the system.

The removal of MaPPAR was seen as an important step toward gaining more control over the work performed. Employees wanted to be involved in (1) defining the work practices, (2) decision making, (3) monitoring system usage, and (4) defining what were good and bad usages.

This plant is six years old now. We have learned a lot. We want to be heard! We need a new system, but this time, we want to be involved! — Roberto
The IS Department
As mentioned, Pedro was the director of the IS Department. He kept for himself the responsibility of defining operational and management best practices and of ensuring that the information systems effectively supported those practices. Also as mentioned, Pedro was very proud of his professional advancement and of his contributions to ENGINECOMP's success. He considered himself responsible for formulating creative solutions in both work practices and information systems, and he regarded implementing those solutions as a task to be performed in collaboration with the affected departments. Pedro did not consider himself an IS technical expert. His closest collaborator, José, was the person who knew the systems in use in detail and who did all the programming, parameterization, and user support. José was a timid man with strong technical skills; he was very competent but was hardly considered a leader of opinion and action.

He [José] is very competent. I trust all technical tasks to him. — Pedro

José works hard. He always treats us right. The only thing is ... Well, he is very silent [smiling]. — Roberto

José was well accepted by employees from the other departments, who understood the difficulty he had in providing immediate support to all MaPPAR users. His colleagues from the other departments found him easier to approach than Pedro and preferred to talk with him first when a problem arose or when a change to MaPPAR was needed. However, these employees also knew that José's
actions were constrained by IS Department policies and by his manager, Pedro. At one point, the managers of the departments other than the IS Department were informally discussing the role of the IS Department. A clear line was emerging between those who supported Pedro's interventionist attitude and those who supported José's helpful and cooperative attitude. Pedro was aware of this discussion but showed no resentment towards José.

I know that they [the other employees] would like to get rid of me [laughter]. I defend the company interests too much. Of course, some other people around here understand that I have dedicated my life to this company. This happens everywhere and is tough. I do not believe José could handle all this. — Pedro

Pedro saw José as a very good programmer who would never be able to carry out the negotiation and political battles inherent in his job as IS director. Moreover, Pedro knew that José was oblivious to the discussion. Pedro understood the discussion as an expression of growing resistance to his own actions. What he really resented was that the German administration, and Fritz in particular, were listening to these dissenting voices more and more.

The Brazilian administration understood. They hired me for my competence, and they saw what I did. The new administrator does not know. — Pedro
The growing use of spreadsheet programming to get around problems caused by MaPPAR, or to implement functionality perceived as unavailable in MaPPAR, was also reducing data quality. Often, the results of outside data processing were not fed back into MaPPAR. Important MaPPAR functions, such as requirements planning and capacity planning, were never used; they were instead programmed outside MaPPAR, resulting in unnecessary maintenance costs, lack of control over planning and its results, and a severe risk of unreliable planning. Pedro's efforts to document (1) organizational processes and resources, (2) the decision to deploy MaPPAR, (3) the MaPPAR process, (4) MaPPAR's functionalities and upgrades, and (5) the IT structure of ENGINECOMP supported his conviction that there was no need to incur the high costs of switching to a different system, especially while the European economy, including its automotive sector, was slowing down. Pedro realized that prevailing with this view would require gaining allies within ENGINECOMP and the explicit support of Fritz, a near impossibility. Pedro just did not know what else could be done to get people to see the importance of abandoning the small databases and in-house programming in favor of a fuller understanding and use of MaPPAR.
APPENDIX B
The Automotive Sector in Portugal
The automotive industry in Portugal generates more than 6.6 billion Euros per year, of which 4.1 billion are in automobile components. It currently employs more than 45,000 workers. Investment in the automotive component industry continues to attract a large number of investors and is strongly supported by both Portuguese Government and European Union funds. The main areas of automotive production in Portugal include electronics, die castings, plastic parts, seats, and climate control systems. Manufacturers, including Volkswagen, Mitsubishi, Opel, Toyota, and Citroen, assemble more than 240,000 cars per year in Portugal. Portugal and Spain together make up the third largest car producing region in Europe. More than 80% of the vehicles produced in Portugal are exported to other European countries. Portugal's automotive component industry, comprising 160 companies, focuses on engines, engine components, moulds, tools, and other small parts.

Number of companies: 160
Directly employed staff: 37,500
Turnover: 4.112 billion Euro
Exports: 2.642 billion Euro
Source: AFIA (2002) (http://www.afia-afia.pt/)

[Figure: Components Industry Evolution, turnover by year. Source: AFIA (2002) (http://www.afia-afia.pt/)]
APPENDIX C
ENGINECOMP: Organizational Units, Business Vision & Mission, International Norms Adopted

Organizational Units
The company's headquarters are in Brazil. The company has plants in Brazil, Portugal, and Argentina. It has commercial offices in Germany, the United States, Uruguay, and Ireland.

Vision
To be acknowledged worldwide as a competitive, high technology manufacturer that respects the environment.

Mission
To be the principal producer of the XYZ car engine component for the European market, aiming at complete client satisfaction; to achieve a high return on its invested capital.

International Norms Adopted
QS 9000 / ISO 9001, VDA 6.1, BS 7750

[Table: Number of Employees; Turnover (Million Euro per year); Exports (Million Euro per year). The figures were not recoverable.]
CRM Systems in German Hospitals: Illustrations of Issues & Trends Mahesh S. Raisinghani, Texas Woman's University, USA E-Lin Tan, Purdue University, German International School of Management & Administration, Germany Jose Antonio Untama, Purdue University, German International School of Management & Administration, Germany Heidi Weiershaus, Purdue University, German International School of Management & Administration, Germany Thomas Levermann, Purdue University, German International School of Management & Administration, Germany Natalie Verdeflor, Purdue University, German International School of Management & Administration, Germany
German public hospitals face governmental and regulatory pressure to implement efficiency and effectiveness metrics, such as a Diagnosis Related Groups (DRG) classification system, by the year 2005. The current average patient stay of nine days in German hospitals is high compared with France's 5.5 days and the USA's 6.2 days. CRM will help increase customer satisfaction, loyalty, and retention. Multiple
case studies, including one German hospital compared to two Dutch hospitals, as well as interviews with the management of two additional German hospitals, reveal that no hospital currently has an integrated CRM system. Rather, separate organizational functions collect and store quantitative and qualitative patient data. Furthermore, the challenges of data sharing and data security are significant barriers for technological changes in hospitals. This study focuses on CRM in a modern German hospital as it realigns its processes and strategies in order to focus on efficiency and customer satisfaction in a very competitive market.
The German healthcare industry is undergoing major change driven by the changing demographics of the German population and by budget limitations. Because a growing share of the population is aging or retired, less money is contributed in taxes, which is substantially reducing the funds allocated to the healthcare sector. The current cost allocation system, the so-called generation contract, is being called into question, and the public healthcare system is being forced to charge more of its costs to its clients. The generation contract was the standard structure for social security systems in many countries, including Germany, under which the following generation provides funding for the previous generation. At present, deregulation, increased competition, cost pressure, and price reductions from private hospitals are forcing the public hospital sector to introduce efficient economic processes and systems. The introduction of the Diagnosis Related Group (DRG) calculation system by the German government requires hospitals to review their strategy in order to focus their communication on the patient of today and tomorrow. The DRG calculation system was initially a collaborative effort of insurance companies to establish a control system for payments for healthcare services. Under the DRG system, illnesses are categorized; acceptable treatments and standards, such as length of hospital stay, are determined; and a fixed cost, or payment amount, is assigned to each treatment or service. Insurance companies pay only the specified amount for each service. The purpose of DRG is to provide complete patient care for a standardized disease pattern within a fixed budget. The system should also help hospitals meet their budgets by reducing the length of patients' hospital stays, increasing productivity, and using more cost-cutting technologies (Riedel, 2001).
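The fixed-payment principle described above can be sketched in a few lines of code. The group codes, payment amounts, and stay lengths below are invented purely for illustration; real German DRG catalogues contain hundreds of far more detailed groups.

```python
# Illustrative sketch of the DRG fixed-payment principle.
# All codes and figures are hypothetical, not taken from any real catalogue.

DRG_CATALOGUE = {
    # code: (description, fixed payment in EUR, standard length of stay in days)
    "F62A": ("Heart failure, with complications", 5200.0, 11),
    "F62B": ("Heart failure, without complications", 3100.0, 7),
    "G67C": ("Digestive disorder, minor", 1400.0, 3),
}

def drg_reimbursement(code: str) -> float:
    """The insurer pays the fixed amount for the group, regardless of actual cost."""
    _, payment, _ = DRG_CATALOGUE[code]
    return payment

def hospital_margin(code: str, cost_per_day: float, actual_stay_days: int) -> float:
    """Under DRG, revenue is fixed while costs scale with the actual stay,
    so a longer-than-planned stay erodes the hospital's margin."""
    return drg_reimbursement(code) - cost_per_day * actual_stay_days

# A 7-day stay at EUR 400/day against a EUR 3100 flat payment leaves EUR 300;
# three extra days turn the same case into a EUR 900 loss.
print(hospital_margin("F62B", 400.0, 7))   # 300.0
print(hospital_margin("F62B", 400.0, 10))  # -900.0
```

This is why, as the chapter notes, the main cost driver shifts from bed turnover to treatments per case: every avoidable day in a bed comes directly out of a fixed reimbursement.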
In these situations, hospitals need intelligent Customer Relationship Management (CRM) models that interface with the DRG system to help them acquire and “nurse” their customers — both domestic and foreign patients. The key motivator for CRM system implementation is the hospital administration’s realization that they have to be customer-oriented and cost-effective to survive the increased competition in the healthcare sector (MCC Health World, 2002). CRM is an approach that focuses on the acquisition, development and, most importantly, retention of customer relationships through the collection of data and the sharing of this customer information across all areas of an organization. It encompasses both software applications and business strategies that anticipate, interpret, and respond to the needs of current and prospective customers. Access to collected customer information by employees from all areas of an organization provides a complete
picture of the customer to everyone in the company and helps employees react to customer inquiries more efficiently. Handling customer requests with ease increases customer satisfaction, resulting in customer loyalty and, ultimately, greater customer retention. Contemporary businesses and hospitals are adopting a customer/patient-centric orientation as a necessary condition for competing effectively in today's marketplace. Despite steady growth in worldwide installations and sales, not all is perfect in the world of CRM applications: industry studies suggest that approximately 60% of CRM software installations are failures. CRM applications are prone to problems associated with a lack of application flexibility for customized integration and updating, and with data management at scale (Crosby & Johnson, 2000; Juki et al., 2002).
Structure & Organization of this Chapter
This exploratory study is fundamentally based on E.M. Rogers’ theory of new product diffusion, also known as Diffusion of Innovation (DoI) (Rogers, 1983). The primary research objective of this study is to explore the diffusion and infusion of CRM systems in the hospital environment. Diffusion is defined as the extent of use of an innovation across people, projects, tasks or organizational units, while infusion is the extent to which an innovation’s features are used in a complete and sophisticated way (Fichman, 2001). In this chapter, we first discuss the trends and governmental/regulatory pressures that confront public hospitals in Germany. As these problems are very real and directly affect the general public, this topic is of keen interest and has been making headlines in Germany in recent years. Next we discuss the organizational structure of an old German hospital that forms the basis of the economic issues it now faces and the steps it should take in order to prevent a possible financial crisis in today’s fast-paced economy. Furthermore, we explain how the implementation of CRM can play a crucial role in bringing the German hospital up to speed with technology and the effective business models of today. Next, the case studies of five hospitals are presented along with a discussion of the objectives, benefits and costs of implementing a CRM system, the technical and strategic composition, the implementation phases and the pitfalls to avoid during the implementation phase. This is followed by the current challenges/problems facing the organization. The conclusion section summarizes the importance of a CRM system to steer the German hospital in the right direction, as well as outlines the potential of information technology in creating new business value for the hospital.
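The diffusion/infusion distinction can be made concrete with a small sketch. The department and feature names below are hypothetical, chosen only to echo the hospital setting of this study:

```python
# Minimal sketch of the diffusion vs. infusion metrics (after Fichman, 2001):
# diffusion = extent of use across organizational units,
# infusion  = extent to which an innovation's features are used.

def diffusion(units_using: set, all_units: set) -> float:
    """Fraction of organizational units that use the innovation at all."""
    return len(units_using) / len(all_units)

def infusion(features_used: set, all_features: set) -> float:
    """Fraction of the innovation's features that are actually in use."""
    return len(features_used) / len(all_features)

# Hypothetical hospital: four departments, four CRM features.
departments = {"admissions", "billing", "nursing", "pharmacy"}
using_crm = {"admissions", "billing"}

crm_features = {"patient_profiles", "feedback", "scheduling", "reporting"}
features_in_use = {"patient_profiles"}

print(diffusion(using_crm, departments))        # 0.5
print(infusion(features_in_use, crm_features))  # 0.25
```

Broad diffusion with shallow infusion (or the reverse) signals quite different adoption problems, which is why the study tracks the two separately.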
The healthcare institutions in Germany are divided into Akut-Kliniken (general hospitals), Reha-Kliniken (rehabilitation facilities), and nursing homes. Most German clinics are owned and administered by public authorities and are financed by the association of local authorities and the federal state. Moreover, some health insurance companies fund and even run hospitals (HPS Research, 2002). According to a study by Kienbaum Management Consultants (Amblank, 2003), German hospital administrators predict a further reduction of clinical beds, as well as of nursing, administrative, and non-medical staff. However, it is necessary to increase the number of doctors and information technology personnel in order to implement efficient and cost-effective business processes in German hospitals, especially in the field of general healthcare services. The survey, conducted on more than a hundred Akut-Kliniken with more than a hundred beds each, found the following (Amblank, 2002):

• Process Optimization: Hospitals need to focus on optimizing a process-oriented organization while taking into account the internal and external information flow. Clinic-specific processes between horizontal levels, particularly between the activity units and functions of a hospital, must also be considered.
• Hospital Cooperation: Hospitals should increase cooperation with other activity units and participants in the healthcare sector, especially with regard to communication.
• Managerial Tools for Hospitals: In view of the compulsory introduction of a Diagnosis Related Groups (DRG) cost calculation system, the implementation of process- and profit-oriented cost calculation systems for hospitals must be completed in 2004. At present, 76% of the hospitals still use budgets as their controlling system. Implementing DRG systems requires changes in hospital IT systems, training of personnel, and data-protection measures; approximately 68% of the hospitals have already started to change their processes.
Key Market Statistics
In 2001, hospital costs in Germany amounted to about EUR 60 billion, a figure that, according to the German Federal Statistical Office, corresponds to the annual domestic sales of the German automotive industry. The limited budgets of the federal states are forcing hospitals to implement efficient communication and information channels in order to reduce resource use and streamline their operations. The introduction of the DRG system should bring about greater transparency and economic efficiency, which the authorities hope will considerably reduce the time that patients stay in hospitals. In 2001, the average patient stay in German hospitals was 9.9 days; in comparison, it is 5.9 days in Austria, 5.5 days in France, and 6.2 days in the U.S. (Amblank, 2002). Table 1 lists key statistics on German hospitals in 2000 and 2001. The DRG system introduced in Australia in 1992 initially comprised 667 groups and has been continuously improved; as a result, patient stays there have been reduced by 20% to 30%, and 20% of the released capacity has been abolished or reallocated to new, innovative services (Higbie & Kerres, 2001). In general, a German mid-sized hospital (defined as 100 to 200 beds) handles over 20,000 cases per year, a figure that continues to grow. Consequently, there is a strong need for new, integrated IT systems to streamline all patient-related processes and to integrate, where possible, or even replace the systems currently used in hospitals (Stoffers, 2002). Moreover, IT systems must be flexible enough to accommodate improvements, as the initial number of DRG classifications in German hospitals has already risen from about 600 to 881 (Riedel, 2001).
[Table 1: Key Statistics on German Hospitals in 2000 & 2001. Rows cover number of hospitals, bed capacity, hospital stays, total number of cases, and average hospital stay in days, each broken down for general hospitals into public, community, and private. The figures were not recoverable.]
SETTING THE STAGE

Hospital Management Strategy: Importance of Patients as Customers
In the past, the principal customer of the hospital was the physician. The patient was secondary, merely the physician’s customer. However, the trend has evolved to where the interaction has broadened, and patients communicate directly with the hospital, as well as with the physicians. The relationship is now triangular, with relations between the hospital and its customers, and between the physician and the patient. The hospital must therefore focus its efforts on satisfying a more demanding patient, who wants to see quality, cost and value in the products and services delivered by the hospital. Patients today are savvier; they do their own research for information, treatment options and sources where they can obtain the relevant services in the most optimal way.
The onus lies with the discerning hospital to take advantage of this fact and to stay ahead of its competitors by collecting, storing, and disseminating this information to key current or potential patients. It is important for the hospital to build a strong, long-term relationship with the patient and to pursue referrals from the satisfied patient, which can attract new customers. In terms of information management systems and business models focused on efficient processes, the healthcare industry lags behind other industries, so it is critical that the industry review its strategy in order to survive. According to Cap Gemini, Ernst & Young (2002), all industries should adopt a customer-centric business model, one in which a company proactively identifies, attracts, engages, serves, and covets customers. Their study identifies seven universal elements of this paradigm (Cap Gemini, Ernst & Young, 2002):

• profile, identify, and connect with current and prospective customers;
• give customers a choice about how and when they interact with you;
• ensure access to the customer's profile and any information that he or she requests with every interaction;
• ensure that new information is captured, disseminated, and used effectively;
• develop mechanisms that minimize customer irritation, such as long hold times, multiple handoffs, and unavailable or insufficient information;
• develop the capability to satisfy customers' requests for insights and information at first contact; and
• treat customers as valued individuals by learning about their preferences, interests, concerns, and wishes.
All of these elements are building blocks of a CRM system. According to the German Hospital Institute, DKI (Deutsches Krankenhaus Institut), 70% of hospitals decided in favor of replacing their IT systems because doing so allows them to precisely quantify costs (Stoffers, 2002). In addition, hospital management strategy should include a comprehensive, efficient hospital information system that supports the shift of focus to the patients. Correspondingly, doctors and hospital staff should be trained to lead patient-oriented discussions in order to provide optimal healthcare service. The reform of the German healthcare system calls for the participation of German citizens, health insurance companies, and patients; therefore, all parties involved should have easy access to healthcare information. Independent institutions for healthcare services, insurance companies' call centers, Web-based information services, hospital information services, and other networks and institutions are the main channels for distributing this information (Deutsche Krankenhausgesellschaft, 2003).
Work Flows & Cost Drivers in Hospitals
A hospital information communication system should also provide information to the stakeholders and patient care centers, as shown in Figure 1. The information elements flowing across the various entities are the patient's medical records, insurance information, DRG information, government regulations, hospital policy and procedure, and so forth.

[Figure 1: Stakeholders & Information Communication Technology. The figure depicts funding institutions, health insurance companies, and the other stakeholders.]

The main cost drivers in the hospital are in the areas of accounting, patient administration, procurement, and human resource management. For instance, personnel costs account for 70% of overall costs in a general hospital. With the introduction of DRG calculations, the main cost driver for hospitals has changed from bed turnover to treatments per case (Higbie & Kerres, 2001). Research has shown that the average hospital loses between 2% and 4% of its revenues because of poor revenue-cycle management resulting from a lack of system integration (Cap Gemini, Ernst & Young, 2002). There are CRM systems available in the market today that can merge these areas of the hospital into end-to-end processes and integrate them into a seamless network. These CRM systems increase the transparency of cost and resource allocation within the hospital. This increased transparency can be a great aid to hospital management in reengineering the fundamental hospital structure, with the objective of minimizing costs while remaining within the DRG guidelines. A major cost reduction area that CRM systems have handled effectively is the mapping of documentation during a patient's treatment, from the moment he enters the hospital until the day he leaves. This information is important for a full picture of the situation, even though DRG is only applied on the day that the patient is
discharged (SAP, 2003). IT systems must support the improvement of information flow between patients and physicians, as well as work and decision processes, as illustrated in Figure 2. In the past, some hospitals ignored their relations with their public environment, employees, and patients. Today, hospitals are beginning to realize the importance of patients as a source of income and are developing marketing instruments in order to compete with other hospitals for the future patient. However, the communication methodology must comply with the German regulations for clinics, which forbid the simple promotion of clinical operations. Moreover, patient data may not be published, as this would violate the Bundesdatenschutzgesetz, the federal data protection/privacy law. The installation of CRM systems enables patient-oriented marketing measures that allow patients to give feedback to hospitals in order to improve treatment (Torex-Deutschland, 2003). Furthermore, hospitals should not forget that healthcare is a field rife with emotion, because a person's health, and even life, is at stake; therefore, an accurate and reliable communication system must be implemented (Barnes, 2003).
Role of Information Technology (IT) in Hospitals
IT solutions use simulation models that represent organizational processes in hospitals in order to optimize work processes. There are two types of simulation models, each with different strengths and weaknesses.

In the first model, the underlying structure aligns the hospital's organizational processes with the physical layout of the hospital. Patients are viewed as mobile units with assigned plans who move from one location, such as a ward or treatment room, to another, receiving the prescribed treatment at each location. The treatment facilities are regarded as functional units where specific services are carried out. The physicians and medical staff are considered passive resources in the model; they are simply there to perform medical services, guided in their administrative decision making by established procedures and local assignments. The limitation of this model is that the human behavior of patients, as well as of medical staff, is not taken into consideration as much as it should be. The prescribed treatment plans are standardized, leaving no room for the medical staff to adjust treatment to the individual case, such as a setback or complication. Furthermore, the failure to consider the human element in the hospital's systems and procedures also makes it difficult to forecast and measure performance accurately.

The second simulation model, the agent-based simulation system, is able to support economic and organizational decision making in the hospital domain. Such a system is developed in a two-step process. In the first step, a specific application context is modeled after a real situation in a selected hospital. The basic requirements placed on the simulation model within the hospital domain are analyzed, and the concepts derived from this study are then generalized.
In the second step, a general-purpose, component-oriented, agent-based simulation system is developed. It can be used as a general modeling framework in the hospital domain (Sibbel & Urban, 2001).
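To make the agent-based idea concrete, the toy sketch below models patients as mobile agents with individual treatment plans and staff as active agents who may adapt a plan when a complication occurs — the feature the first model lacks. This is a minimal illustration only; the class names, stations and complication logic are our own assumptions, not the Sibbel and Urban (2001) system.

```python
import random

class Station:
    """A functional unit (ward, treatment room) where a service is performed."""
    def __init__(self, name, duration):
        self.name = name
        self.duration = duration  # time units one treatment step takes

class Patient:
    """A mobile agent with an individual, adjustable treatment plan."""
    def __init__(self, pid, plan):
        self.pid = pid
        self.plan = list(plan)   # ordered list of Station objects
        self.time_in_system = 0

    def treat(self, staff):
        for station in self.plan:
            self.time_in_system += station.duration
            # Unlike the purely procedural first model, staff agents may
            # react to a complication by extending the treatment.
            if staff.detects_complication():
                self.time_in_system += staff.handle_complication(station)

class Staff:
    """An active agent rather than a passive resource."""
    def __init__(self, complication_rate=0.1, seed=42):
        self.complication_rate = complication_rate
        self.rng = random.Random(seed)  # fixed seed: reproducible runs

    def detects_complication(self):
        return self.rng.random() < self.complication_rate

    def handle_complication(self, station):
        # Repeat the treatment step once; a stand-in for case-specific care.
        return station.duration

def simulate(num_patients=100):
    """Return the average length of stay, in time units, over all patients."""
    admission = Station("admission", 1)
    ward = Station("ward", 5)
    surgery = Station("surgery", 3)
    staff = Staff()
    patients = [Patient(i, [admission, ward, surgery]) for i in range(num_patients)]
    for p in patients:
        p.treat(staff)
    return sum(p.time_in_system for p in patients) / num_patients

if __name__ == "__main__":
    print(f"average length of stay: {simulate():.2f} time units")
```

Because the treatment plan totals 9 time units and complications only add time, the simulated average length of stay always exceeds the deterministic baseline — the kind of effect the first, procedural model cannot express.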
The technology component of a CRM strategy is vital to its success. The areas requiring integration in CRM are cross-channel integration, back-office to front-office integration, and operations-to-analysis integration. Therefore, the IT architecture and infrastructure must be reliable, well designed and easy to maintain in order to meet the present and future needs of the organization. The Enterprise Solution Architecture Model (Cap Gemini Ernst & Young, 2002) in Figure 3 illustrates how technology supports CRM. Health CRM applications have operational CRM (which includes analytical CRM) at the center of the model. Operational CRM deals with automating and streamlining workflow; its specific tasks include collecting data, processing transactions and controlling workflow. Some of the elements of operational CRM are described below:

• Portal Technology: Enables customers to connect with a company over the Internet.
• Workflow/Workload Management Rules Engines: Instructions set forth by managers that enable computer applications to make decisions.
• Case Management/Accountability Functionality: An accounting system used for tracking patient inquiries, from initiation to resolution.
• Knowledge Management: Deals with software and applications that include analytical tools and databases, and is generally used to solve business problems.
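A workflow rules engine of the kind described above can be pictured as an ordered list of manager-defined condition/action pairs. The sketch below is a hypothetical illustration — the rule set, queue names and inquiry fields are assumptions for this example, not taken from any specific CRM product.

```python
# A minimal workflow rules engine: managers define routing instructions,
# and the application applies them to incoming patient inquiries.
# Each rule is a (condition, target_queue) pair, evaluated in order.
RULES = [
    (lambda inq: inq["topic"] == "billing", "accounting"),
    (lambda inq: inq["topic"] == "appointment", "patient-administration"),
    (lambda inq: inq["urgent"], "call-center-supervisor"),
]
DEFAULT_QUEUE = "general-inbox"

def route(inquiry):
    """Return the work queue for an inquiry; the first matching rule wins."""
    for condition, queue in RULES:
        if condition(inquiry):
            return queue
    return DEFAULT_QUEUE

if __name__ == "__main__":
    print(route({"topic": "billing", "urgent": False}))   # accounting
    print(route({"topic": "other", "urgent": True}))      # call-center-supervisor
```

The first-match-wins ordering is the design point: managers express priority simply by ordering the rules, and no inquiry is ever left unrouted because a default queue catches everything else.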
Analytical CRM is sometimes treated as a subset of operational CRM. However, some healthcare providers prefer to use it as a stand-alone application in order to take advantage of its data analysis and management capabilities. Both CRM applications utilize OLAP (online analytical processing) technologies and other tools that aid decision makers in the development and revision of plans. The next component of the Enterprise Solution Architecture Model is the Connected Health Business Model. This component comprises functional tools such as strategy alignment, solution design and change management. Finally, the last component of this model is the Health Integration Architecture, whose main purpose is to streamline and link the technologies and processes efficiently and effectively (Cap Gemini Ernst & Young, 2002).

Figure 3: The Enterprise Solution Architecture Model
Rationale for a CRM Solution
Poor revenue-cycle management is one of the major sources of cost for a hospital. CRM can recover these losses with its enterprise-wide platform for information sharing, its customer database, its contact management, and its system for coordinating and identifying payers with external resources. A CRM solution can add business value to the hospital in the following areas (Cap Gemini Ernst & Young, 2002; L&T Infotech, 2003):

• Reducing transaction costs for processes involving the customer
• Improving patient/physician satisfaction and loyalty
• Optimizing revenue potential by reducing “missed appointments” and via improved care-plan compliance
• Reducing claim refusals by insurance companies
• Lowering accounts receivable balances
• Differentiating the hospital by offering a better service experience to customers (i.e., patients and physicians)
• Creating a complete 360-degree view of the customer
Implementing a CRM system gives the hospital the capability to integrate its internal processes of marketing, sales, services, analytics, interaction center, field applications, e-commerce and channel management, in order to deliver a people-centric solution that leverages existing technology and lowers overall costs (SAP, 2003). Since CRM is an approach towards dealing with customers, it is critical for companies to rethink and redesign their basic business processes and organizational structures in order to succeed. Previously, healthcare providers took an episodic approach to their interactions with patients and physicians, in which costs and benefits were assessed per episode of care. Today, healthcare is treated as an ongoing mutual value, in which relationships between patients and physicians develop over time. Another consideration before CRM implementation is performance measurement. Performance measures around CRM can be qualitative (e.g., customer satisfaction) as well as quantitative (e.g., percentage of inquiries handled per day). The study prepared by Cap Gemini Ernst & Young identifies three objectives on which healthcare providers should focus: efficiency (e.g., developing more customer self-service capabilities, such as scheduling of follow-up appointments, prescription refills, pre-registration, class registrations and concierge service requests), service effectiveness, and market/revenue growth (Cap Gemini Ernst & Young, 2002). For instance, a hospital can conduct a patient-focused campaign with the goal of increasing the number of interactions with patients by cross-selling healthcare services or by using current patients as referral sources to their friends and family.
Rationale for CRM: Identification of Organizational Change
In the past decade, hospitals generally evolved through the following three types of organizational focus (Lorenzi & Riley, 1995):

i. Functional-oriented organizational focus — organized around the various healthcare and administrative functions in the hospital, as illustrated in Figure 4.
ii. Specialism-oriented organizational focus — organized around the various specialty areas in the hospital, as illustrated in Figure 5.
iii. Patient-oriented organizational focus — organized around patient needs, as illustrated in Figure 6.
At present, hospital organizations are divided into too many different functions and specialty areas. There is a lack of interconnectedness and communication among the general business functions, specialty areas, administration, finance and customer care. Hence, the functional- and specialism-oriented organization structures may not be optimal from a business-process standpoint. In the patient-oriented organization, processes are organized around flows of patients, with multi-disciplinary teams and management integrated into a seamless network. The ultimate vision of the CRM system is the customer-service transformation of the hospital-healthcare organization, which integrates all patient-related data from the different functions (Cap Gemini Ernst & Young, 2002). Table 2 illustrates the direction in which the hospital must move in order to integrate its processes effectively and efficiently, while keeping costs at a minimum.

Figure 6: Patient-Oriented Organization Focus

Source: Sibbel and Urban (2001)

Table 2: Comparison of the Old Model & Future-State Vision of a Hospital

The Old Model → “Future State” Vision
1. Communication is fragmented
2. Departments and services are siloed
3. Information systems are stand-alone → Front-end and back-end integration of information systems
4. Customer access is fragmented → Customer access points are aligned
5. Web function is limited → Interactive, personalized Web pages
6. Marketing is not well-targeted → Customers are segmented and the marketing strategy is specific to each segment
7. Employee incentives are misaligned with corporate objectives → Rewards are based on measures of customer (patient) satisfaction
In a hospital, service delivery depends heavily on the personal interaction between medical staff and patients. The employees are the most important factor in a hospital's success in improving the quality of its service and the satisfaction of its patients. A successful CRM-implementation team must consider the following:

• Start with a pilot project that incorporates all the necessary departments and groups, gets the project rolling quickly, but is small and flexible enough to allow tinkering along the way
• Design a scalable CRM architecture
• Confirm that, should the organization need to expand the system, it has the capability and flexibility to determine what data is to be collected and stored
• Recognize the individuality of customers and respond appropriately
• Ensure that everyone within the organization, especially the IT department, has a comprehensive understanding of the business strategies and customers' needs
• Implement the CRM project across all departments, and make the implementation easy for the customers, not the company
The main reason why the introduction of a CRM system in a hospital might fail is a lack of communication among the members of the customer relationship chain, which leads to an incomplete picture of the customer. Poor communication can also result in technology being implemented without proper support or buy-in from users. Any change must involve, and come from, the people (SRD Group, 2003).
Rationale for CRM: Identification of Technological Potential
regardless of their location. This would allow them to integrate their results in real-time, thereby saving the critical hours that may mean life or death for their patient.
Processes Involved in the Implementation
When implementing a CRM system, a hospital should focus its activities on the following three process areas:

• Patient-oriented information → patient database
• Hospital facilities marketing → quality management
• Development of preventive actions → information policy
During the establishment of a CRM system, hospital management should concentrate on the patient from the following perspectives (L&T Infotech, 2003):

• Design and administer health policies for corporate clients
• Process medical claims (insurance/hospital claims, domiciliary claims)
• Liaise with the insurance companies for claims assessment and payment
• Facilitate the services of the network hospitals
• Provide information regarding medical facilities and advise on occupational health
Figure 7 illustrates the different objectives of a CRM system in terms of processes and information flow to patients and physicians. The customer “touchpoints,” or channels, at the top of the figure are where contact with patients begins. Some examples of these customer channels are e-mail, telephone, fax, Internet, mail and walk-ins. Customer Interaction Management, the next component of the model, is essentially a desktop platform that allows healthcare providers to manage interaction with patients through all the various touchpoints. Enabling technologies are the technical devices that arrange and distribute the information; examples include interactive voice response (IVR), computer-telephony integration (CTI), personal digital assistants (PDAs) and the wireless application protocol (WAP). The establishment of a Web-based information platform, for instance, could facilitate the integration of the different inquiries from patients and physicians. The continuous integration of feedback facilitates the development of a more customer-oriented hospital organization.

Figure 7: Integrated CRM for Customer- and Physician-Related Processes
The current healthcare reform in progress in Germany addresses the financial viability of German hospitals, which must become more competitive in order to survive in the long run. Thus, many private hospitals decline to provide any internal financial data, as they do not want to disclose this information to their current and potential customers. Five case studies, including one German hospital compared to two Dutch hospitals, as well as interviews with the management of two additional German hospitals, are used to compare and contrast the separate organizational functions that collect and store quantitative and qualitative patient data. The five case studies were chosen because the intent was to explore the breadth of awareness of CRM system solutions across enterprises (i.e., diffusion) and the depth of penetration of CRM implementations within organizations (i.e., infusion). The first case study is based on the secondary research available on ICT in one German and two Dutch hospitals. The second and third case studies are based on face-to-face interviews with Dr. Schulz, IT Administration Officer of the Medizinische Hochschule Hannover (a public hospital), and Dr. Kohnert, managing director of the International Neuroscience Research Institute (a private hospital) in Hannover. Furthermore, the challenges of data sharing and data security as significant barriers to technological change in hospitals are also discussed. The key focus is on CRM in a modern German hospital as it realigns its processes and strategies in order to focus on efficiency and customer satisfaction in a very competitive market.
these aforesaid problems. The ICT project was controlled by a central team comprising members from IMISE (Institut fuer Medizinische Informatik, Statistik und Epidemiologie, the Institute for Medical Informatics, Statistics and Epidemiology), ZMAI (Zentrum fuer Medizinische und Administrative Informationssysteme, the Center for Medical and Administrative Information Systems), and the administrative and ICT departments. The Leipzig University Hospital evolved from a functional-oriented hospital to a specialism-oriented one (see Figures 4 and 5). Even at this early stage, the main goals of the Leipzig University Hospital centered on the patient (customer) database. At the end of the project, the final design of the network organization was not able to fully integrate all of its information systems; the system was based on bilateral agreements and uni-directional data flow (low reach and low range). Some of the problems facing the project were very similar to those commonly faced in the implementation of a CRM system: the lack of a clear hospital strategy and of foresight for future development (for example, to reach beyond organizational boundaries in seamless network projects); the lack of clear measures for performance and integration; and the lack of technical support and expertise, since implementation and central computing facilities were mostly insourced. On the other hand, the ICT infrastructure implemented in the Leipzig University Hospital was based on current technology with the scalability to accommodate future networking projects. This scalability is indeed advantageous as the hospital moves toward a patient-oriented organizational focus, which may help hospital administrators and staff look into more advanced CRM solutions.
As the trend for implementing CRM systems in German hospitals is relatively new, we reviewed the relevant information on Information and Communication Technology (ICT) for the Leipzig University Hospital and two Dutch hospitals to get a sense of the European contextual perspective (Spanjers, Hasselbring, Peterson, & Smits, 2001; Sibbel & Urban, 2001). Table 3 compares and contrasts the ICT characteristics in one German and two Dutch hospitals.
Table 3: Information Communication Technology Characteristics in One German and Two Dutch Hospitals

Level of network organization
• University Hospital Leipzig, academic hospital (Germany): intra-departmental, moving towards inter-departmental
• Bosch Medicentrum, general hospital (The Netherlands): inter-departmental
• Roessingh Rheuma, categorical hospital (The Netherlands): inter-organizational

Organizational focus
• Leipzig: functional-oriented, moving towards specialism-oriented

1. Strategic drivers and incentives for networking
• Leipzig: externally, marketing the hospital and attracting the “right” patients; internally, efficient and effective use of capacity and competence, resulting in cost reduction
• Bosch Medicentrum: externally, to reduce costs; internally, to improve efficiency and effectiveness in control, resulting in reduced costs without loss of quality in hospital healthcare
• Roessingh: externally, to improve the efficiency and effectiveness of rheumatology services and meet patients' demands through the exploitation of ICT; internally, to formalize effective lines of communication and develop expertise

2. Enabling conditions for networking
• Leipzig: a clear hospital strategy is needed to align strategic information management
• Bosch Medicentrum: efficient and effective control of hospital organizations depends on the ability to determine the relation between input and output; these changes demand a higher flexibility of ICT
• Roessingh: demand and supply mechanisms for rheumatology knowledge and ICT knowledge across the network; sharing of costs (i.e., technical infrastructure) and risks (i.e., privacy)

3. Design of the network organization
• Leipzig: there is only one clear seamless-network project example in which ICT plays a dominant role; however, it is based on bilateral agreements (two nodes) and uni-directional data flow that is not integrated with the other information systems (low reach and range)
• Bosch Medicentrum: in seeking improved efficiency and effectiveness, a reorganization took place to evolve from an intra-departmental to an inter-departmental organization structure; units (9 care units, 8 supporting care units and 2 service units) were introduced to give middle management the flexibility needed (moderate reach and range)
• Roessingh: separate responsibilities for rheumatology services and ICT services; different functional roles and levels (“sponsor,” “network co-coordinator,” “participants/users”); enabling role of multimedia network technology (high reach and range)

4. Functioning of the network organization
• Leipzig: mainly the ICT department and the administrative department are involved in the organization and management of ICT; technical support, implementation and central computing facilities are mostly insourced; the budget for innovation has been large
• Bosch Medicentrum: hospital middle management, functional operators, physicians and the automation coordinator are involved in the organization and management of ICT; technical support, implementation and central computing facilities are outsourced; the budget for innovation is small
• Roessingh: network and stakeholder management; provision of telerheumatology services across the network; leveraging of rheumatology expertise across the network; centralized (concentrated) ICT infrastructure and dispersed ICT applications; differentiated demand and supply of multimedia network technology

5. ICT infrastructure of the network organization
• Leipzig: the implementation of a new hospital information system (SAP R/3), based on new technology, makes future networking possible
• Bosch Medicentrum: to support the specific hospital processes, mainly HISCOM (the market leader) information systems are used
• Roessingh: a multimedia database (“the post office”) based on Internet technology is used to facilitate communication and diagnosis of rheumatology cases; the critical requirement is asynchronous multimedia communication

6. Performance of the network organization
• Leipzig: the lack of a clear hospital strategy provides no triggers to reach beyond organizational boundaries in transmural care projects; there is no clear measure of performance
• Bosch Medicentrum: reduced costs without loss of quality in hospital healthcare, contributing to the mission statement; the networking is starting to reach beyond organizational boundaries in transmural care projects
• Roessingh: improvement of inter-institutional collaboration and communication; efficiency and effectiveness; improvement of rheumatology services; stakeholder satisfaction; redefinition of stakeholder roles and (strategic) positioning in the sector

Source: Sibbel and Urban (2001)
Figure 8: Medizinische Hochschule Hannover's Services — station (normal, intensive); clinic (surgical, non-surgical); universal communication basis; central computer system (registration, transfers, leave, diagnosis, therapy documentation, invoices; proKIM, Highdent Plus); communication server (DATAGate) plus user, Web, file, Exchange, mail, fax, archive, proxy, Citrix, video and external servers
by MHH and the interface between them. The HIS is linked with a laboratory information system (LIS), enabling MHH to collect and record all patient-oriented data digitally. In 1999, the migration of hardware and software from a mainframe to a client-server architecture was completed. At present, Internet, intranet, mail, directory-list and backup servers are installed. Data security is achieved by an internal control and tracking system that matches all user registrations in the hospital with personnel data files, and by a firewall that secures the backbone system. There is an emergency concept for ensuring system operations in the case of an outside attack. The integration of the different functional data is very important for the introduction of DRGs in 2005. By 2005, MHH plans to introduce a PACS (picture archiving and communication system) that transfers diagnostic images and x-rays within the hospital via both internal networks and wireless local area networks. The exchange of data via mobile workplaces, each equipped with a laptop and linked over a wireless network at a transfer rate of up to 2 MBit/sec, will be extended over the entire hospital. In addition, the AIRONET/Cisco APA 4500 and Artem Compoint permit encrypted data transfer at a rate of 11 MBit/sec. A new multi-functional identification card is under discussion to improve access control to internal patient data. The establishment of a digital, comprehensive and integrated CRM system focused on patients is on the medium- to long-term horizon, as public hospitals such as the MHH first optimize their internal information-flow processes.
International Neuroscience Research Institute (INRI) (Categorical Private Hospital)
INRI provides a comprehensive range of diagnostic procedures and treatment options for diseases and disorders of the nervous system. At the INRI, a team of internationally renowned specialists provides the full range of modern neuromedical techniques, with notable expertise in diagnostic and therapeutic neurosurgery, neuroradiotherapy and neuroradiology. A special emphasis of the INRI is the interdisciplinary, surgical and interventional radiological treatment of vascular malformations and tumors of the nervous system. The INRI also offers highly specialized surgery, including brainstem implants for the treatment of hearing loss and deep-brain stimulation for the treatment of Parkinson's disease. The INRI private hospital pursues a patient-oriented strategy but does not yet have an integrated CRM system in place. At present, INRI's quality management department is collecting patient feedback (via questionnaires) in order to optimize internal processes and the treatment of patients. INRI uses scientific opinion leaders and physicians to gain access to its potential patients. Its information communication technology comprises an HIS and an ERP system provided by ISHMed and SAP: the ERP system collects the quantified data, and the HIS registers all qualitative data related to the patient. In the long run, INRI plans to establish an integrated CRM system. Table 4 compares and contrasts the information and communications technology (ICT) of INRI (a private German hospital) and MHH (a general/public German hospital). INRI has a patient-oriented organization and lower operating costs in terms of personnel than MHH. The investment in a CRM system has been postponed because INRI is still screening the market for a standardized software solution.
Table 4: ICT Characteristics of INRI (Private Hospital) and MHH (General Hospital)

CRM systems
• International Neuroscience Research Institute (private): no CRM system
• Medizinische Hochschule Hannover (general): no CRM system

Patient-focus strategy
• INRI: yes
• MHH: under survey

Patient feedback
• INRI: frequent patient surveys
• MHH: quality management

Communication
• INRI: scientists; recommendations; special press; online portal
• MHH: press center; research; online portal

Data management
• INRI: patient data; data warehouses; JIT data provision; data integration via in-house networks, ISDN, inter- and intranet; data management software
• MHH: patient data; data warehouses; JIT data provision; data integration via in-house networks, ISDN, inter- and intranet; data management software

Hospital information systems (HIS)
• INRI: ISHmed; SAP R/3
• MHH: ProKIM; SAP R/3

Future plans
• INRI: establish an integrated CRM system
• MHH: integration of quality management and HIS
CURRENT CHALLENGES FACING THE ORGANIZATION
Both Gartner Dataquest and Giga Information Group estimate that failure rates of CRM implementation projects remain at approximately 60%, due to unclear organizational goals and a lack of success metrics, made worse by the complexity of enterprise CRM initiatives. In light of the above case studies, selecting a CRM application that is appropriate to the organization in terms of its functional breadth and depth is paramount to a successful CRM implementation. Next, we discuss the management and technical issues related to CRM.
Management Issues Related to CRM
CRM is not only about technology. It is more about the organizational and cultural change required of hospitals as they transform into patient-oriented organizations. Many CRM initiatives fail because senior management lacks the involvement, vision or enthusiasm necessary to achieve the anticipated outcomes. Figure 9 illustrates the transformational change cycle that is applicable to the cases discussed in this study. Cultural change requires a transformational change cycle that involves four phases:

1. Initial Phase: It is necessary to focus on the desired result, including business needs, objectives, strategy and planning, rather than on the technology. In the hospital, every member of the medical staff needs to understand that the purpose of the change is to increase the number of patients and to maintain the loyalty of existing ones. A further purpose is to reduce costs and to increase the return on investment (ROI).
2. Uncertainty Phase: Employees need to know, understand, and have enough time to discuss and accept the change; good communication among them is essential. In the healthcare business, doctors deal with patients every working day and thus have a good knowledge of customer relationships. However, they need to learn that there are other ways to improve those relationships and to make them more efficient using a CRM strategy.
3. Breakthrough Phase: Training the staff in the new technology is a good approach, but it is also necessary to evaluate the feedback from this training. In the hospital, it is important that staff develop the ability to manage the new system and to apply it in their daily work.
4. Competence Phase: This phase is frequently missed and not recognized as important, yet it deals with acceptance. In a hospital it is not easy to measure the level of acceptance of the new technology among employees and customers, but a basic guideline is to estimate it via ROI (SRD Group, 2003).
In healthcare services, customers tend to be much more forgiving, as they will put up with a great deal more than they will in normal commercial situations. Some healthcare professionals may even resist the very notion of measuring customer satisfaction. It is not easy to get quick feedback on how a CRM business strategy is performing, but it can be measured using the following parameters:

• Reduced reporting and/or sales cycle
• Reduced expenses/cost of doing business
• Improved external/internal customer satisfaction
• Increased sales and productivity
Technical Issues Related to CRM
The ability to integrate operational and analytical CRM (the front- and back-end applications), the length of time required for deployment, scalability, ease of maintenance and the potential to upgrade are important technical issues for CRM products. Based on the above case studies, the key challenge areas with respect to technical CRM are patient access and integrated call centers.

Patient Access

Since many customers have access to online information from various healthcare providers, the information needs to be interactive. One CRM solution is to give customers the option to customize the Web site based on their specific needs. The hospital can then interact with customers easily via personalized e-mails and provide prompt feedback. On those Web pages, end-users such as patients and hospital administrators, staff and physicians can find a wealth of comprehensive information about the hospital staff, appointment schedules, registration for healthcare programs and library resources.

Integrated Call Centers

Call centers play an important role for patients, their families and physicians, who want a single point of access to the hospital organization. Traditionally, call centers have been structured for specific functions, with little or no interaction with most departments of the hospital. They are also set up to manage call processing rather than to build relationships, and are based on elementary call-routing or automated call-distributor systems, with some basic voice-recognition capability. Now, most healthcare organizations have changed their view of call centers: they are arguably the best way the hospital can improve its customer service and customer relationships and, in turn, its profits (Cap Gemini Ernst & Young, 2002). Thus these call centers will increasingly be turned into “contact centers” that handle e-mail and Web interaction, as well as inbound and outbound telemarketing.
In summary, the German hospital is now facing many governmental and regulatory pressures as it enters the new technology era. It needs to review its fundamental structure in order to strengthen its hold on its key asset, the customer/patient. It has to realign its processes and strategies to focus on efficiency and customer satisfaction. In the midst of a highly competitive market, in order to remain in business it should not only compete against its rivals but also look into cooperating and collaborating with other healthcare providers. It is essential to share information within the hospital's network and, if possible, to extend it to external partners so that the hospital can provide its customers with an optimal service. The goal must be to build lifetime relationships with patients and to gain new domestic and international customers. In this respect, CRM systems and other existing IT solutions that facilitate information sharing and networking are key to the success of the modern German hospital.
SAP. (2003). SAP® for Healthcare. Retrieved September 18, 2003: http://www.sap.com/company/press/factsheets/industry/healthcare.asp
Schmidt, H. (2002). Gesundheitsreport. HPS Research, July.
Sibbel, R., & Urban, C. (2001). Agent-based modeling and simulation for hospital management. Cooperative Agents. Kluwer Academic Publishers (pp. 1-18).
Spanjers, R., Hasselbring, W., Peterson, R., & Smits, M. (2001). Exploring ICT-enabled networking in hospital organizations. Proceedings of the 34th Hawaii International Conference on System Sciences (pp. 1-10).
SRD Group Auckland. (2003). CRM and cultural change: It’s about people, process and technology. Retrieved September 18, 2003: http://www.srd-grp.com/crm.php/crm-cultural-change
Statistisches Bundesamt. (2003). Eckdaten der Krankenhausstatistik 2001.
Stoffers, C. (2002). A healthy economy for hospitals. Retrieved September 18, 2003: http://www.sapinfo.net/public/en/search.php4/start/t/hospitals/cat
Zünkeler, M. (2003). Marketing für Kliniken—Das Krankenhaus als regionale Gesundheitszentrale. Torex Deutschland.
Enterprise System Implementation in the NZ Health Board Maha Shakir, Zayed University, UAE Dennis Viehland, Massey University, New Zealand
The Health Board is one of the largest public health care providers in New Zealand (NZ). In early 1999, a supply chain optimization review recommended an enterprise system (ES) implementation to provide better control and reporting of organizational finances. The focus of this case is the IT platform decision made in conjunction with the ES implementation process. This decision was thoroughly considered by all Health Board stakeholders and the final choice was made in alignment with the Board’s strategic IT policy. Nevertheless, initial testing two months prior to go-live revealed major performance problems with the new system. The case documents the events that led up to the selection of the original IT platform and the challenges the project team faced in deciding what to do when the platform did not meet contractual specifications. 1
The Health Board is a non-profit public organization that is one of New Zealand’s (NZ) largest providers of public hospital and health services. The Board has approximately two million patient contacts annually and provides regional services for 30% of NZ’s population. The organization is structured around seven business units that include four specialist teaching hospitals and other facilities offering community health services, mental health services, and clinical support services. The Health Board vision focuses on patients’ needs. Being a non-profit organization, surplus funds are allocated to supporting patients, research, and education. Table 1 provides the organization’s profile. Health funding in NZ is disseminated through 21 district health boards (DHBs). Each DHB is responsible for improving, promoting, and protecting the health of the population it serves. For their catchment area, each DHB is delegated the responsibility for making decisions on the mix, the level, and the quality of the health services that are publicly funded. They are also responsible for entering into agreements with providers for health service delivery. DHB decisions are made on the basis of local needs, within national guidelines. Funding is based on the size and characteristics of the population of the district each DHB serves; however, a few nationally funded services still exist. The Health Board is one of three DHBs in the same region that share a vision to promote close cooperation for the provision of health services. The Board is made up of 11 members: seven elected and four appointed. All Board members report directly to the Minister of Health.
Table 1: Organization Profile

- Services: The provision of public hospital and health services
- Type of organization: Non-profit public health care provider
- Facilities: Four specialist teaching hospitals and facilities offering community health services, mental health services, and clinical support services
- Mission statement (1999-2000): “The Health Board will provide New Zealand’s finest comprehensive health service through excellence and innovation in patient care, education, research, and technology” (Health Board Annual Report, 1999-2000)
- Customers: Patients (two million patient contacts annually)
- Geographic coverage: Regional (within NZ)
- Size: 8,500 employees; $600 million budget for the year 2000/2001
In 1999, ConsultCo, a big-five consultancy firm, was engaged to assess the strengths and weaknesses of the supply chain management function at the Health Board, with a view to providing recommendations for the improvement of that function. The product of that engagement was a supply chain optimization (SCO) review report. The SCO review identified problems in business operations and suggested a combination of organizational restructuring, business process reengineering (BPR), and ES (ERP) implementation to accomplish the change program. The core financial modules of the Oracle 10.7 ERP system had been implemented in 1997 and were operational at the time the SCO review was conducted. However, that implementation was heavily customized and could not support the new strategic vision, which aimed to “standardize, consolidate, and integrate services … and control finances” (Strategic Plan for the Health Board 2002-2007). In addition to the recommendation of the SCO review, in early 1999 the Health Board was informed that Oracle 10.7 financials would be de-supported by Oracle by the end of 2000, leading to the realization that a major application upgrade was urgently needed. As a result, and in partnership with ConsultCo, an ES business case was developed with a view to rectifying these problems. The business case included eight key objectives that were linked to the Health Board’s strategic plan. These are summarized in Table 2.
Table 2: ES Project Objectives

- To achieve the savings identified in the Health Board strategic business plan.
- To account for savings through an appropriate standard costing mechanism within inventory.
- To have reporting systems that enable management by exception and the control of rogue expenditure.
- To implement procurement through a standard requisition process with a catalogue environment.
- To implement processes for the delegation of authority and risk management of the procurement process.
- To have a platform in place which positions the Health Board to enter into external shared services with other local health care providers, and facilitates internal interconnectivity, which allows for the consolidation of accounts payable, inventory management, and internal logistics, and enables external supply chain connectivity.
- To implement the “Health Board Way” throughout the supply chain process, with a particular focus on standardization of processes, integration of systems, and consolidation of service.
- To act as a catalyst for the change in business processes and work practices.

Note: Adapted from the Health Board ERP System Business Case (June 2000, p. 25)
Despite the problems the SCO review had identified with the Oracle 10.7 system, there was agreement that the new implementation would still be an Oracle ES. The Health Board would have had to write off its huge investment in the Oracle 10.7 application if it chose to change to a different vendor. Therefore, the business case for the new system was written with a focus on an Oracle upgrade and implementation that was financially justifiable. Organizational restructuring started by the end of 1999, with new job descriptions being written and advertised to fulfill the new organizational design. All new roles had a focus on system implementation experience, in preparation for a re-implementation of ERP applications to support the change program. Table 3 presents a chronology of ES implementation events. The final business case the Board considered in July 2000 compared two upgrade alternatives: an upgrade from Oracle 10.7 to either the Oracle 11 or the Oracle 11i ERP applications. While Oracle 11 had been in operation since 1999, Oracle 11i was a new release that was launched in NZ in June 2000. The Health Board chose the upgrade to the Web-enabled Oracle 11i application to avoid the need to undergo a further upgrade a short time later. A profile of the ES implementation project is included in Table 4.
It is October 2000. James Keen, the chief financial officer (CFO) of the Health Board and the business sponsor of the ES project, is faced with a difficult decision. The implementation of the Oracle 11i ERP system is scheduled to go live in mid-December. However, initial testing shows that there are some key performance problems with the system. In a meeting with the project team earlier that day, James was told that software testing on PCs that use the Windows NT platform showed substantial delays in data processing. Even worse, the tests were carried out using mock-up data volumes that were expected to be easily manageable by the system. James remembers that the IT platform issue was one of the issues the ES project team had spent considerable time on during the evaluation phase.

Table 3: Main ES Implementation Events (1997-2000)

- 1997: Implementation of the heavily customized financial modules of the Oracle 10.7 ES.
- Early 1999: Oracle users were informed that the Oracle 10.7 ES would be de-supported by the end of 2000. An upgrade was suggested to address the loss of future support.
- 1999: The newly appointed CFO recruited a BPR Manager to project manage and review both the supply chain and the finance functions in partnership with ConsultCo, a big-five consultancy firm. The output of that partnership was the supply chain optimization (SCO) review.
- End of 1999: The ES business case was developed to resolve the majority of the SCO review recommendations, including a major system upgrade, with the CFO being the ES project sponsor.
- In conjunction with the initiation of the ES project, new organizational roles were established, advertised, and filled by March 2000. All new recruits received training on the Oracle 10.7 applications.
- A request for proposals for implementation consultancy services was issued. Bids were received and evaluated, with the winning bid going to ConsultCo.
- June 2000: The new version of the Internet-enabled Oracle 11i application was released.
- July 2000: The ES business case was submitted to the Board and approved.
- The ES implementation project started, including the core financials, fixed assets, and procurement modules.

Table 4: ES Implementation Project Profile

- Modules: Financials (upgrade), fixed assets (new implementation), and procurement (new implementation).
- Number of users: 8,500 users, including 120 power users.
- Cost of implementation: Approximately NZ$2.3 million, which included NZ$1.7 million for hardware, software, consultancy, and internal costs, plus NZ$650,000 for operational costs, including backfill and change management.
- Number of locations: One-instance implementation on multiple sites (seven business units on two geographically distributed sites).
- Third-party implementer: ConsultCo, a big-five consultancy firm.

The IT platform is the foundation for all business applications; hence it is key to any successful IS implementation.

Figure 1: Business Applications & the IT Platform. The figure shows inter- and intra-organization applications (i.e., CRM, SCM, electronic commerce) layered above the hardware and operating system base.

As shown in Figure 1, the base of the IT platform is the hardware (HW) and operating system (OS) layer. Although the components in this base layer are largely commodities and are readily available in the marketplace (Broadbent & Weill, 1997), the
hardware and software architecture form the basis for the IT capability and functionality of the firm (Meyers & Oberndorf, 2001).

When purchasing any new, large application, the organization must consider a number of criteria for a suitable IT platform. One obvious factor is the vendor’s choice of platforms. For example, if a Linux-based version of the application is unavailable, then Linux is not an option. A second factor is the cost of the operating system and the hardware. For example, an initial investment in Windows is generally considered a high-cost option, while Unix and Linux cost less (NetNation Communications, 2003). However, organizations must also look beyond acquisition costs to total cost of ownership (TCO), which also includes operations and control costs. TCO can be as much as 100% more than hardware acquisition costs (David, Schuff, & Louis, 2002). A third factor is any hardware/software standard configuration policy in place, usually adopted to solve operational problems (McNurlin & Sprague, 2002). Because of existing staff expertise, the need to integrate applications across a uniform platform, or attempts to reduce TCO, an IT department may prefer or require a standard IT platform for all applications. Other factors such as ease of use, portability, processing capability, track record, reliability, and scalability also influence the IT platform choice. See Table 5 for a more detailed comparison of the Windows NT and Unix operating system platforms.

These general factors apply to enterprise system implementations. TCO is a critically important component in determining the business value of an ERP initiative (Meta Group, 2000). Additionally, a new ES implementation or upgrade requires knowledge and expertise in areas of software functionality, systems configuration and integration, and other technical aspects of the IT platform (Ng, 2001).
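The TCO reasoning above can be made concrete with a small sketch. The function and all cost figures below are purely illustrative assumptions, not drawn from the case:

```python
# Hypothetical cost figures, for illustration only -- not from the Health Board case.
def total_cost_of_ownership(acquisition, annual_operations, annual_control, years):
    """TCO = up-front acquisition cost plus ongoing operations and control costs."""
    return acquisition + years * (annual_operations + annual_control)

# A platform that is cheaper to buy can still be dearer to own:
platform_a = total_cost_of_ownership(100_000, 30_000, 10_000, years=5)  # low purchase price
platform_b = total_cost_of_ownership(150_000, 15_000, 5_000, years=5)   # higher purchase price
```

Over five years, platform_a comes to 300,000 against platform_b's 250,000, which is why acquisition price alone is a poor basis for an IT platform decision.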
Table 5: Unix vs. Windows NT

Administrative functions
- Unix: Unix has advanced server and user management administrative functions. For example, Unix has a disk space allocation utility that can control disk space for any user.
- Windows NT: A comparable disk storage facility is not available on the NT platform.

Cost
- Unix: Most Unix applications are free to use.
- Windows NT: Most Microsoft applications are proprietary; therefore companies pay to use them. Furthermore, compared with other hosting platforms, Windows often requires more staff resources to maintain; hence the Windows total cost of ownership is relatively high.

Interface/Ease of use
- Unix: Unix is text-based and uses a command-line structure.
- Windows NT: Windows uses a graphical user interface (GUI) and is the operating system of choice for many new users, with a reputation for ease of use and administration.

Compatibility and portability
- Unix: Unix is an open-source platform; therefore there is a wide variety of CGI scripts, PHP scripts, and MySQL applications that will work on nearly any Unix system. However, writing an application with a shell script or Perl in a Unix environment requires substantial programming experience, and because many of these scripts were not designed for Windows, they will not work on a Windows platform. Unix is portable to numerous hardware platforms; however, different vendors of Unix have released different versions, so an application loses its portability if it does not run on all versions.
- Windows NT: The Windows platform is compatible with Microsoft applications such as FrontPage, Access, and MS SQL. It also offers the use of programming environments such as Active Server Pages (ASP), Visual Basic Scripts, MS Index Server, and ColdFusion. These server-scripting technologies are becoming more popular because they are easy to use.

Processing capability
- Unix: Unix is a multi-user, multitasking operating system that is text-based. As a result it can dedicate the full power of the server to applications; its powerful multiprocessing capabilities are still unparalleled.
- Windows NT: The Windows NT platform is also a multi-user, multitasking operating system.

Track record, reliability, and scalability
- Unix: Unix has been in a state of constant refinement since its inception 30 years ago. The platform has a proven track record of performance, stability, and security. Furthermore, Unix can be used over networks that range in size from small servers to supercomputers.
- Windows NT: Windows NT was a relatively new platform at the time; Windows 2000 Server is the newer hosting platform that has since completely replaced Windows NT.

Other factors that are part of an IT platform decision for ES implementation include vendor customer
the size of the Health Board. Andrew, whose role was focused on sales and managing the client-Oracle relationship, left both options open for the Health Board project team to decide. The other party involved in the IT platform decision was ConsultCo, the big-five consultancy firm that was the ES implementation partner. Like many public organizations in NZ, the Health Board was embarking on a big ERP project, but with a considerably low implementation budget for the size of the organization. To support the fast track project, the Health Board contracted with ConsultCo to manage both the evaluation and implementation processes. In NZ, it is common practice that the client organization determines what the new IT platform should be. Most organizations in NZ are small and medium-sized enterprises (SME), especially when compared to organizations in North America or Europe. As a result, the resources allocated to these implementations are relatively small, even though the systems involved usually have the same amount of sophistication and complexity. As one means of cutting costs, the client organization generally takes more responsibility in implementation decisions. That was the case here — the Health Board was responsible for a large portion of the implementation risk and the IT platform was one of those risks. After considering the advice offered by the Board’s IT department, the Oracle Account Manager, and ConsultCo, the Health Board ES project team selected the Windows NT platform for the implementation of Oracle 11i ERP system. Knowing the risks involved, and to mitigate these risks, they put into the agreement with Vendor2 a condition to ensure that the new system performed according to specification. If performance was not acceptable, the legal agreement allowed for the contract to be terminated. Vendor2 accepted this condition and implementation began. 
Coming back into the present, James was very disturbed to learn that the performance tests during the past week had shown unacceptable delays in data processing. He realized that a revisit of the earlier evaluation decision was unavoidable. James knew that this could represent a major setback to the project. If this problem were to delay the go-live date, even if only by a few months, then the whole project would collapse. Any delay, he thought, would require a huge increase in the implementation cost. Specialized ERP consultants were scarce, and the ConsultCo consultants working on the Health Board project were being flown into NZ from the ConsultCo office in Australia every week. To extend the project, even for a few weeks, would mean a large increase in costs, and the Health Board did not have a large contingency fund to cover this blowout. Furthermore, as part of new government regulations, the Health Board was to start implementing a new chart of accounts in December. Plans for implementing the new chart of accounts were embedded within the new ERP system, so statutory requirements, as well as cost considerations, were at risk. James, as the business sponsor of this project, knows he needs to move on this issue very quickly. Going over both the earlier considerations of the IT platform decision and the contractual obligations with the hardware vendor, he wondered: Could he recommend that the contract be terminated and go for the alternative path of the Unix platform? Because of the size of the contract, organization policy necessitated that such a decision go to the Board members for approval. Other questions that needed a careful assessment were: What if the hardware vendor decided to go the litigation path? What
if the problems were not caused by the IT platform? What if the Board did not approve the change? James knows that a decision needs to be made and to be made very quickly. It is one of those times when not making a decision is going to jeopardize the fate of the project anyway. He picks up his phone and schedules an urgent meeting with his project team the next day.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
One of the main problems that affected the choice of both the OS and the hardware platform was the relationship between the release version of the ES application and the IT platform. The Health Board had chosen to implement the new release of Oracle 11i; yet experience in implementing different combinations of OS and hardware platforms with the new release was very limited. In the following, Michael Field, the ERP Project Director/BPR Manager, explains the implications this decision had for the ES implementation: One of the important areas in terms of a large ERP implementation is around the maturity of the software and its relationship with the operating system. This had quite a large impact on the issues which we had to manage for our project. For us, one of the biggest things in this particular implementation was the relationship of the application to an operating system. That triggered a whole lot of things for us. What we did on the project might have been different to some other ERP applications because this issue looked like it was going to have an impact on us being able to deliver the whole project on time. When a lot of people do an implementation they bundle the whole implementation with no risk. They pass the risk onto the third-party implementer who supplies everything — the hardware, software, and implementation services. In our case we believed we didn’t want to do that so we structured the project in a certain way and that was to get a third party to help us do the implementation. We’d buy our software from someone else and we would get the hardware from somewhere else. This in terms of managing a project of this size proved too challenging. But for us, that was the only way we could afford to do this project. It [selection of the IT platform] had a major impact on the project. So … [it] went up through to the steering committee, even to the Board.
… Even though we had used all the expertise from Oracle, all the expertise from the IT platform vendor in this case, plus ConsultCo’s collective expertise, so-called best around the world, the decision ended up in hindsight not the right one. But at least we made a decision. Although the initial IT platform decision was made at the project manager level, where Michael (ERP Project Director) and ConsultCo were key decision-makers, when performance deteriorated James (the CFO) stepped in in order to rescue the project. He says:
needs to be made and actioned quickly. Is it possible to work with Vendor2 to resolve these problems, assuming they are only teething problems? Or does the IT platform need to be changed as soon as possible, with the implications of a cost overrun and a legal suit?
Alexander, B. (2004). Unix vs. Windows NT. India Web Developers. Retrieved January 30, 2004: http://www.indiawebdevelopers.com/technology/scripts/chapter1.asp
Broadbent, M., & Weill, P. (1997). Management by maxim: How business and IT managers can create IT infrastructures. Sloan Management Review, 77-92.
BroadSpire. (2003). Windows vs. Linux: Choosing the right hosting platform. Retrieved January 30, 2004: http://www.broadspire.com/solutions/express/shared/linuxvswindows.html
David, J.S., Schuff, D., & Louis, S. (2002). Managing your IT total cost of ownership. Communications of the ACM, 45(1), 101-106.
Hawaii Ebuzz. (2000, October 27). OAUG conference offers users chance to ask Oracle. Hawaii Ventures Corporation. Retrieved July 2003: http://www.hawaiiventures.com/news10023.html
Hirt, S.G., & Swanson, E.B. (1999). Adopting SAP at Siemens Power Corporation. Journal of Information Technology, 14, 243-251.
McNurlin, B.C., & Sprague, R.H. (2002). Information Systems Management in Practice (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Meta Group. (2000). ERP platform-related analysis total cost of ownership study: A platform-related cost analysis of ERP applications on-going support costs in the mid-tier. Retrieved April 21, 2004: http://www.verio.co.uk/powerplatform/library/erp_tco.pdf
Meyers, B.C., & Oberndorf, P. (2001). Managing Software Acquisition: Open Systems and COTS Products. Boston: Addison-Wesley.
NetNation Communications. (2003, October 6). Unix versus Windows. Retrieved January 30, 2004: http://www.netnation.com/products/unix_or_nt.cfm
Ng, C.S.P. (2001). A decision framework for enterprise resource planning maintenance and upgrade: A client perspective. Journal of Software Maintenance and Evolution: Research and Practice, 13, 431-468.
Oracle calls Gartner Group biased after consultant knocks operations. (2001, August 28). CFO. Retrieved July 31, 2003: http://www.cfo.com/article/1,5309,4748%7C%7CA%7C134%7C6,00.html
Songini, M.L. (2000, October 20). Oracle applications users look for more help on upgrades. Computerworld. Retrieved July 2003: http://archive.infoworld.com/articles/hn/xml/00/10/20/001020hnorapps.xml
All organization and personal names have been disguised.
Automotive Industry Information Systems: From Mass Production to Build-to-Order Mickey Howard, University of Bath, UK Philip Powell, University of Bath, UK Richard Vidgen, University of Bath, UK
Building cars to customer order has been the goal of volume vehicle manufacturers since the birth of mass production. Eliminating the vast stocks of unsold vehicles held in distribution parks around the world represents potential savings worth billions, yet the current supply chain resembles islands of control, driven by production push. Despite recent advances in information technology offering total visibility and real-time information flow, transforming an “old world” industry to adopt customer responsiveness and build-to-order represents a significant step change. This requires overcoming barriers both within and between supply partners and at all levels of the supply chain. Yet what are these barriers really like, and how can the industry overcome them?
Automotive manufacturing is a global industry producing 56 million new cars per year, and it represents a significant proportion of gross domestic product in developed countries, for instance 5% in the United Kingdom (Crain, 2002). Yet despite steady sales, the industry in Europe is facing a period of significant change, driven by poor profitability, excess finished stock, and over-capacity. Current vehicle manufacturing and distribution represents an old-world industry struggling to come to terms with a digital economy, driven by increasingly price-conscious, demanding customers who require vehicles built to individual specifications and delivered in short lead-times. Vehicle manufacturers can no longer rely on selling cars from existing stocks and are shifting their business models away from mass production toward mass customisation and build-to-order (BTO). The ‘double prize’ for manufacturers in achieving BTO is eliminating the vast car parks of unsold inventory and reducing vehicle discounting by dealerships, which can instead demand a premium price for vehicles tailored and delivered according to customer choice. However, this increases the importance of existing systems for efficient order execution and integrated information flow, where manufacturers’ IT infrastructure still reflects the hierarchical, function-orientated nature of communication in many corporations. The rise of BTO reflects increasing dissatisfaction in the marketplace with the traditional vehicle production philosophy, which typically builds the vehicle first before finding a customer. In Europe, manufacturers expect dealers to hold between 60 and 100 days of inventory, which amounts to billions of dollars (ATKearney, 2003). Even in the USA, where vehicles are usually sold from dealer stock, 74% of customers would rather wait and order the vehicle instead of buying one from the dealer lot that is incorrectly equipped (Business Wire, 2001).
Customers are beginning to realise that they are paying for the waste in the automotive distribution system. Hence, many manufacturers are now exploring the possibilities of reducing order-to-delivery lead-time to the customer through their own initiatives: that is, BMW — ‘Customer Orientated Sales and Production Process’; Ford — ‘Order Fulfilment’; Renault — ‘Project Nouvelle Distribution’; and Volvo — ‘Distribution 90’.
Figure 1: Delay in the UK Customer Order Fulfilment Process (Holweg & Pil, 2001)
- Order entry: 4 days
- Order scheduling: 14 days
- Order bank: 10 days
- Production sequence: 6 days
- Vehicle production: 1 day
- Vehicle loading: 1 day
- Distribution: 4 days
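Totalling the stage delays reported by Holweg and Pil shows the scale of the gap between current practice and a short-lead-time target; the sketch below simply sums the published figures:

```python
# Stage delays from Figure 1 (Holweg & Pil, 2001), in days.
stage_delays = {
    "Order entry": 4,
    "Order scheduling": 14,
    "Order bank": 10,
    "Production sequence": 6,
    "Vehicle production": 1,
    "Vehicle loading": 1,
    "Distribution": 4,
}
total_lead_time = sum(stage_delays.values())  # 40 days from order to delivery
```

Only one of those 40 days is spent actually building the vehicle; the rest is consumed by order banks, scheduling, and distribution.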
relationships and technology. It involves universities and independent research institutions to examine the role of BTO and the barriers to change across the automotive supply chain in the UK. The programme is unique as it encourages the participation of sponsors from all parts of the supply chain and some beyond, including vehicle manufacturers (VMs), dealers, component suppliers, logistics, consumer groups, trade associations, and financial corporations. The key objective of 3DC is to develop a framework in which a vehicle can be built and delivered to customer specification in minimal lead-times, with a three-day order-to-delivery as the ultimate goal. The generic map of the current order fulfilment process presents the extent of the problem of ordering, building and delivering a new vehicle within a short lead-time (Figure 2). It uses process mapping to record information and physical flows during the order-to-delivery process. In response to the productivity gap between Japan and the West — highlighted by the best-seller, The Machine that Changed the World (Womack, Jones, & Roos, 1990) — the past two decades have seen vehicle manufacturers optimising their own production operations while transferring more responsibility to upstream and downstream partners. Figure 2 highlights the challenge for the industry today, where competitiveness no longer depends solely on assembly plant performance and ‘metal bashing’, but on the collaboration of all stakeholders across the entire vehicle delivery process — from extraction of raw material to final inspection at the dealership. Identification of the barriers to change is essential if they are to be incorporated into planning for the redesign of the industry (Figure 3). This will promote a successful transition from the current mindset of ‘production push’ and the erosion of profits through discounted sales, towards responsive production and ‘customer pull’.
The following accounts explore the experiences of key industry stakeholders, and highlight the major difficulties both in identification and amelioration of information barriers.
The lack of integration between Dealer Management Systems (DMS) and Dealer Communication Systems (DCS) is causing high levels of hand-keying and information duplication. Dealers operate two distinctly separate systems: the DCS is linked with the VM and provides information on vehicle availability, price, incentives and orders, while the DMS provides the dealer with their own independent database of customer details, costs, and sales. When an order for a new vehicle is placed, significant levels of duplication occur, with identical data such as the vehicle description and owner details hand-keyed into both systems. Ideally, dealers could do without the complexity and delay caused by maintaining two systems. Other stand-alone PCs are also used to support activities such as finance schedules. Hence, in terms of changing the entire vehicle delivery process from a manual to an electronic system, much development is required, as there are still up to 20 ‘hard copy’ documents per vehicle. For example, this research found the process to require an Order Form, Cash Back Claim Form, Vehicle Invoice, Supplementary Invoice, Vehicle Registration Certificate, Vehicle Swap Sheet, Vehicle Delivery Note, Purchase Invoice, Product Delivery Inspection Note, and Requisition Note. Dealerships in the UK still operate within a territory, and their customer data is considered confidential both from the VM and from other dealers. It is suggested that integration is possible if DMS system architects can be persuaded to use a common middleware implementation with the DCS capable of masking specific data streams. This means that each system must retain the facility for hiding certain information fields from other organisations. Extensible Markup Language (XML) will be a key enabling technology in this situation. XML is a universal standard for representing any kind of structured data. ‘Markup’ means the insertion of information into a document to convey information about its contents.
The power of the language is that users can access documents in an intelligent manner based on the grammar they use. Thus, specific files can only be accessed according to a standard, predetermined syntax.
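The field-masking idea can be sketched in a few lines. The example below is purely illustrative: the element names, the order structure, and the `mask` helper are invented for this sketch, not drawn from any actual DMS or DCS schema. A middleware layer removes the elements a dealer regards as confidential before the record is shared with the VM or another dealer.

```python
import xml.etree.ElementTree as ET

# A hypothetical vehicle-order record as it might sit in a dealer's DMS.
ORDER_XML = """
<order>
  <vehicle>
    <description>5-door hatchback, 1.6L, metallic blue</description>
    <price>14250</price>
  </vehicle>
  <customer>
    <name>J. Smith</name>
    <postcode>BS1 4ND</postcode>
  </customer>
</order>
"""

# Fields the dealer regards as confidential from the VM and other dealers.
CONFIDENTIAL = {"customer"}

def mask(xml_text, confidential):
    """Return a copy of the order with confidential elements removed."""
    root = ET.fromstring(xml_text)
    for tag in confidential:
        for elem in root.findall(tag):
            root.remove(elem)
    return ET.tostring(root, encoding="unicode")

shared = mask(ORDER_XML, CONFIDENTIAL)
print(shared)  # vehicle data passes through; customer details are withheld
```

Because both systems would exchange the same tagged structure, each party controls what it publishes without the other needing to know the full internal schema.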
Order visibility beyond stocks held in VM compounds and distribution is highly variable, but all dealers can see UK market stock for their franchises. If the vehicle required by the customer is visible in the stock locator (such as held in a compound or at another dealer), then the response is either instantaneous or reasonably quick. Once a factory order has been placed, some versions of DCS can provide data further upstream in production, but feedback can be slow: if the vehicle is in the VM pipeline, it can take 24 to 48 hours for this information to reach the dealer. Some dealer franchises can see into production; others cannot. Some dealers can see all orders in the pipeline; some can only see their own orders. Some VMs raise all stock orders; other systems operate entirely from dealer orders, requiring the dealer to phone another dealer who holds an unsold model and ‘agree to a swap’ before they can acquire it and specify it for their customer. All systems allow different levels of order specification amendment. Many DCS either do not give a delivery date or have significant time delays in confirming one, which is a particular problem for custom-built orders. When dealers are given delivery dates on the system, these can and often do change and are not guaranteed; dealers therefore add days to the delivery date quoted to the customer to compensate. Dealers and customers need a search facility for stock and pipeline that supplies the information they need when they want it: if they request a certain specification, product mix, and delivery date, they want to know whether these service needs can be met and what near matches are also available.
Uncertainty Over the Internet for New Car Sales
Dealers are in an uncertain position over how to embrace the Internet for new car sales. The introduction of vehicle selling over the Internet is seen as a threat by some dealers, who are concerned that they will lose significant market share of price-driven, online purchasers buying either direct from the manufacturer or from importers such as Jamjar or Virgin Cars.com. Others feel that brokers such as Autobytel are building an Internet presence for dealers and customers in the gap left open by VMs. These brokers are able to advertise over any geographical area and allow customers to search for value offers by contacting many dealers quickly and efficiently. Some dealers feel that the customer prefers a face-to-face purchasing experience, built on the more traditional concept of ‘service and trust’, and that the Internet cannot replace these elements of new car buying. ‘Clicks and mortar’ summarises the use of the Internet in the USA, where it is generally perceived as an extension to existing customer services and where vehicle enquiries are directed to the nearest available showroom. In the UK, customer leads are being generated for dealers via manufacturer sites and brokers, but very few through dealers’ own sites. However, it must also be remembered that the UK is behind the US in most respects in e-commerce, although it is catching up fast. The 3DayCar National Franchised Dealer Association (NFDA) survey highlighted that dealers feel slightly threatened by the Internet, but would have faith in a ‘clicks and mortar’ approach if the manufacturers gave them more of a lead in implementing an integrated online new car sales presence.
technological advances of the last decade, shown by their reliance on manual controls and hard copy documentation. However, to what extent have barriers emerged through a lack of technological integration as opposed to a lack of process re-engineering? Significant conflict exists in vehicle retailing at present between the traditional ‘territorial sales’ approach encouraged by the VMs and the ‘empowered customer’ approach currently being adopted by new-entrant IT specialist companies. New entrants who offer customers the facility to trawl for a quote from any number of dealers are undermining the current system based on local sales territories. Despite the imminent threat of losing market share to importers, the traditional boundaries between VM/dealer and dealer/dealer remain, where system ownership and resistance to sharing obscure the potential benefits of a collaborative solution. One dealer is quoted as saying: “Technology is the easy bit; 90% of our problems are process related”.
“The Internet is the 21st century equivalent of the moving assembly line.” (Jac Nasser, Ford Motor Company CEO, 1999-2001)

‘Batch processing’ represents a major IT systems barrier to 3DayCar: large numbers of customer orders are processed prior to production at a set time every 24 hours. The current configuration of VMs’ systems typically results in individual mainframe systems updating overnight, processing batches or ‘buckets’ of orders in time-intensive cycles that add four to five days to the order lead time (Figure 4). Because the information flow through the batch processing systems is largely un-sequenced, it
Figure 4: Generic IT Map of the Automotive Supply Chain (Howard, 2000)
is possible for the output of one process to miss the start of the next processing window, adding further time to the process. IT managers confirmed that there was an increasing emphasis on developing the capability for building-to-order. Proposals had been made to ‘speed up’ the system by shortening batch processing periods to around 10 minutes (currently around four hours, overnight). This represents a logical progression for VMs, who can thereby avoid scrapping existing databases and need only replace IT software for the systems to handle such a change.
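The arithmetic behind such a proposal can be sketched as follows. Assume, purely for illustration, that an order passes through five sequential batch stages and in the worst case narrowly misses each processing window; the stage count of five is an assumption chosen to be consistent with the four-to-five days quoted above.

```python
def worst_case_added_delay(stages, window_hours):
    """Upper bound on lead time added when an order narrowly
    misses every sequential batch processing window."""
    return stages * window_hours

# Overnight batches: one 24-hour window per stage.
print(worst_case_added_delay(5, 24) / 24)   # up to roughly 5 days added
# Proposed 10-minute windows for the same five stages.
print(worst_case_added_delay(5, 10 / 60))   # well under an hour
```

The point of the sketch is that shortening the window, rather than removing stages, is what the ‘speed up’ proposal buys: the same batch architecture is retained, but the penalty for missing a window collapses.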
IT System Legacy
Legacy systems were originally built for a ‘different world’ of IT capability and specific tasks, where technology was associated with ‘control’. Systems today are still driven by in-bound logistics and pushed by production, rather than pulled by order demand. It was found during the research that the total lead time required to develop, pilot, and ‘roll out’ systems across several continents could be as much as 10 years. Once committed to an operational IT strategy, VMs have little choice but to see it through, almost regardless of any changes to the external business environment that may occur during the period. Changes to the IT infrastructure have been achieved in the past by simply ‘bolting on’ additional systems alongside existing mainframe architecture. For example, in the 1990s, PC-based client-server architecture offered a powerful industry standard on which to base new systems. However, very few of the old systems were ever fully engineered out of the business and switched off, resulting in a mess of complex, overlapping networks. Today a typical production plant runs over 200 separate IT systems; hence, many VMs are faced with an expensive and ongoing burden of replacing and repairing an aging ‘spaghetti’ infrastructure. Often, the fault lies not with the legacy databases themselves (the IBM AS/400 remains one of the most popular and reliable models from the 1980s), but with the network of cabling, applications/software, and user terminals, which must be replaced without disrupting the order flow. The introduction of ‘middleware’ technology such as Tuxedo (BEA Systems) has significantly increased the flexibility of writing IT applications as business services and linking this information with local area network and Internet-based environments. However, the success of this approach depends on the reliability of the legacy system.
Some VMs have begun to recognise the weakness of simply building onto existing infrastructure and are now systematically replacing sections with ‘modular solutions’ that offer a universal platform and the flexibility to accommodate future change. There are products available now that allow core systems to be retained without having to renew the entire network.
series of batch systems operating, such that once an order has entered the system, it is often invisible to the rest of the organisation and to other supply chain partners until it reaches the order sequencing or operational scheduling stage. The stovepipe/chimney mentality also extends inside the functions. Manufacturing is a particular example: once an order enters the order bank, it often cannot be amended before emerging from production in, at best, eight days’ time. The extent of IT legacy means that the ability of VMs to move toward a build-to-order (BTO) environment is severely limited. VMs are largely governed by a centralised ‘package’ mentality, built around an in-bound logistics optimisation view rather than out-bound customer delivery. There is some evidence of a growing emphasis being placed on removing internal stovepipes and increasing system visibility. However, where this is the case, change is quoted by automakers on a timescale of five to 10 years.
Central Management Systems
Central management systems are popular amongst VMs because of the ease of maintenance and the purchasing advantage gained through economies of scale. However, the time lag introduced at regional plant level, where central batch processing cannot allow for local time differences, can result in higher levels of inventory. The research also found examples of managers flying around Europe looking for the original IT system architect. In some cases, it took weeks to locate and resolve the system query because the problem originated from central VM headquarters, located on the opposite side of Europe. Driven by material optimisation, IT systems are designed for the purchasing and inbound logistics (pull to production) aspects of supply rather than for the flexibility to respond to individual markets (pull to customer demand). Is there a case for examining the balance between central versus regional systems?
“Once you get past Tier one, there aren’t any system standards.” (IT Manager)
address, QTY — Quantity, DTM — Date and time) with free-text fields left for specific comments. Currently, EDI format changes are made by VMs up to three times a year. Suppliers already receive messages in about a dozen different formats, all of which must be converted to a common standard before they can be processed internally. This causes delay and disruption to the system, particularly in the event of a system malfunction. During the research it was calculated that each format change costs a supplier around two ‘IT manager weeks’ of labour. Considerable time is also spent with IT software consultants, where suppliers are understandably reluctant to build and maintain a customised system with diverse inputs from around a dozen demanding customers who seem to change their minds on a whim. Suppliers are currently concerned about the significant costs of IT system administration caused by the undisciplined approach of VMs, and about the implications of adopting new technology on an unregulated, firm-to-firm basis. With traditional EDI there is usually no acknowledgment; messages are simply sent at a pre-agreed time, on the assumption that the receiving equipment is switched on and operating correctly. Internet communication offers many business-to-business benefits, particularly in areas such as automatic electronic invoicing, which may soon become widespread across the industry. However, like many aspects of electronic communication discussed so far, homogeneous procedures need to be established by all players.
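The conversion burden described here can be pictured as one translator per customer format feeding a single internal representation. The sketch below is hypothetical: the segment tags echo the EDIFACT-style codes mentioned above, but both message layouts and the parsing helpers are invented for illustration.

```python
# Two hypothetical VM-specific delivery-schedule formats carrying the same
# logical content: who delivers, how many, and by when.
def parse_vm_a(msg):
    # VM 'A' sends EDIFACT-style segments: "NAD+<name>'QTY+<n>'DTM+<yyyymmdd>'"
    fields = dict(seg.split("+", 1) for seg in msg.rstrip("'").split("'"))
    return {"partner": fields["NAD"], "quantity": int(fields["QTY"]),
            "due": fields["DTM"]}

def parse_vm_b(msg):
    # VM 'B' sends a semicolon-separated line: "name;n;yyyymmdd"
    name, qty, due = msg.split(";")
    return {"partner": name, "quantity": int(qty), "due": due}

# Each new or changed customer format means writing and maintaining another
# translator -- the recurring 'two IT manager weeks' cost noted above.
TRANSLATORS = {"vm_a": parse_vm_a, "vm_b": parse_vm_b}

def to_common(customer, msg):
    """Normalise a customer-specific message into the common standard."""
    return TRANSLATORS[customer](msg)

print(to_common("vm_a", "NAD+Acme'QTY+400'DTM+20040115'"))
print(to_common("vm_b", "Acme;400;20040115"))
```

Both calls yield the same internal record, which is precisely why suppliers argue for a single agreed format: the translator table, not the common model, is where the maintenance cost accumulates.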
There is some concern among suppliers that Web-enabled systems do not offer sufficient reliability or security to conduct transactions between businesses. A total system failure, whether caused by the Internet or otherwise, cannot be buffered by the low stock levels typically held at most assembly plants. Internet security is, however, being improved with the use of virtual ‘firewalls’, which are inserted between the host organisation’s core communication platform and the external electronic environment; supply partners wishing to share information must first verify their identity via a password. Some suppliers question whether the Internet, despite its success in other areas of commerce (e.g., Internet banking), is ready to support mission-critical operations where a delay in delivering a message could ultimately mean the stopping of a vehicle assembly line, and where accountability would ultimately rest with the supplier. The delays and occasional inconsistencies currently experienced in e-mail delivery, not to mention new dimensions of Internet crime such as hackers and viruses, are regenerating some support for traditional, dedicated ‘machine-to-machine’ links such as bespoke EDI.
Internet technology potentially offers total connectivity and visibility to the auto industry. At present, the component supplier does not know the true demand for his product. The end customer is seen as the VM, not the new car buyer. Current IT systems reflect this: There is no customer delivery date attached to any parts ordering. A component may be used within days of its despatch or may remain in inventory for a considerable period.
In the transition from traditional EDI to Web-enabled systems, the full potential of total visibility across the entire supply chain must be exploited: the system must not simply emulate the original functional, stovepipe/chimney mentality. A recent development is ‘WebEDI’, which emerged in the late 1990s using XML code and standard computers to offer a flexible, low-cost solution for suppliers seeking a connection to other business partners via the Web. It is increasingly used by tier 1 suppliers to overcome the high costs of installing bespoke EDI to connect smaller upstream partners, where improvements in efficiency and responsiveness can reduce the need for safety and buffer stock. Questions remain about how suppliers will fit into consortium automaker portals such as the ‘Covisint’ Internet trade exchange (also known as a portal or ‘e-hub’), founded in 2000 by Ford, General Motors, and DaimlerChrysler. Some industry observers think a single automotive e-hub will evolve, linking everything from the lowest-tier suppliers to dealers; others believe a variety of exchanges will emerge rather than a single online marketplace that dominates the industry. To date, the fortunes of the e-hub in the auto industry have been mixed, with Covisint suffering from a federal antitrust case, delays in the introduction of new technology, and the departure of a succession of CEOs. Total investment in the hub currently stands at $500 million, with still no sign of the business reducing its losses (ANE, 2004). Yet ‘SupplyOn’, a third-party managed hub originally founded by Bosch, is a success: it currently serves 2,700 suppliers, receiving a monthly subscription from each firm in return for electronic training, a connection service, and online collaborative product engineering software.
“E-commerce and IT will wring out cost from our massive supply chain management systems.” (Ford press release on their alliance with UPS Logistics, USA, 2000)
System Integration & Data Quality
In-bound logistics IT systems, for materials and components delivered to the assembly line, are more developed than out-bound vehicle distribution systems, which is at odds with the relative value of the goods carried. Despite logistics providers having their own systems, the lack of contractual commitment given to them by VMs on out-bound delivery promotes a short-termism that hinders long-term investment and the development of new IT. Vehicle manufacturer annual capacity is based upon a production plan and sales forecast, which will differ over the year; capacity judgments by the logistics provider are therefore based more on risk management than on firm data. Logistics providers require more information prior to vehicle release from production. The key issue is poor data on projected volumes and resource planning, as part of the generally poor quality of advance information from the VM’s central control.
In supplier-to-supplier logistics, or in-bound logistics, to the VM assembly plant, there are no universal standards in terms of Odette label formats. Upon receiving an EDI transmission, labels are individually printed off, attached to a crate or ‘stillage’, and the
bar-code portion scanned before departure and upon arrival. However, subtle label format differences require significant levels of VM-specific knowledge from all individuals who come into contact with the system, creating confusion and inevitably resulting in time lost during the process. Converting electronic data to ‘hard copy’ documentation is becoming a very lengthy process for suppliers faced with delivering vehicle parts to depots in transit to other destinations: typically, seven duplicate copies of documentation are required per part, specifying carrier, warehouse, depots, and final destination. Some areas of in-bound delivery, such as sequenced in-line supply, work well, assuming constant demand. However, it will be some time before logistics providers, suppliers, and VMs achieve a truly ‘paperless revolution’ in which electronically tagged containers automatically trigger a goods-received message as a truck completes its delivery, which in turn sets off a sequence of electronic billing.
Connectivity: Wide Area Networks & Extranets
Lack of connectivity is the main technical obstacle, particularly in current out-bound logistics. Wide Area Networks (WANs) will replace Local Area Networks (LANs), and these WANs can be combined with company intranets to provide a shared space: an extranet, portal, or electronic hub. The combined power of such e-hubs can be harnessed to provide high portability of information and ease of transfer and access, and to eliminate the re-keying of data and time lost on updates. The replacement of traditional LAN-based access within companies by e-hubs will allow increased portability of information, reports, and real-time data exchange between departments, increasing their ‘single world view’. This should improve the chance of non-contradictory messages from VMs being exchanged with supply chain partners, a common issue for logistics providers on out-bound transport. A common platform is also needed to facilitate the sharing of spare capacity between rival haulage firms, particularly on return runs (called ‘back loading’), although this requires significantly higher levels of trust between VM/logistics and logistics/logistics partners.
takes into account fleet capacity. However, this represents a significant speculative investment for logistics providers.
CURRENT CHALLENGES FACING THE ORGANISATIONS
Organisations wishing to begin the transition from mass production to build-to-order face a number of significant challenges. This is because the problem lies less with ‘technology’ and more with the people who use it. A mindset change is needed away from vertical, hierarchical reporting and the optimisation of only part of the system (i.e., production) and toward embracing the concepts of information transparency and responsiveness from the perspective of the end customer. Four core challenges for the automotive industry as a whole are raised here:

1. There is considerable work to do in Europe (and the US) in building an electronic infrastructure that overcomes the proliferation of standards and protocols that creates so much additional work for supply partners. A situation where information systems expand through poor regulation, as was the case with bespoke EDI, must not be repeated with Web-enabled e-commerce.
2. Better measures are needed to encourage supply chain collaboration and the adoption of inter-organisational systems, together with a clearer articulation of the benefits realised from building-to-order and a clearer vision of who — other than VMs — is likely to share in them.
3. A coordinated adoption of information systems across multiple stakeholders is needed, driven by ‘electronic leadership’ skills that are currently lacking at boardroom level. The premature departure of Jac Nasser as CEO of Ford shows that even top executives are not immune from the outcome of business decisions involving the Internet.
4. In order to meet the requirements of 3DayCar and build-to-order, there must be a reduction in the number of processes that an order goes through prior to production. Eventually, customer orders should be treated as ‘batch sizes of one’ capable of being handled in real time.

Hence, a major challenge facing the automotive industry today is how to adopt an Internet-enabled inter-organisational system that supports total supply chain transparency and connects all stakeholders with the customer (Figure 5).
Figure 5: Core Information Systems for 3DayCar (an order fulfilment platform/e-hub connecting Tier 1, 2, and 3 suppliers, the vehicle manufacturer, and the dealer via WWW/Web-EDI; shared information includes demand management, direct customer order booking, planning & scheduling, and Tier 2 & 3 supplier service levels)
REFERENCES

ANE — Automotive News Europe. (2004). A decimated Covisint is put up for sale. p. 17.
ATKearney. (2003). Lean distribution in the United Kingdom. Online: www.prnews.com/cnoc/ATKlean
Business Wire. (2001, Feb). Gartner survey shows US consumers prefer concept of build-to-order when buying an automobile.
Crain, K. (2002, Oct 15). Global market data book. Automotive News Europe.
Holweg, M., & Pil, F. (2001). Successful build-to-order strategies start with the customer. Sloan Management Review, (Fall), 74-83.
Howard, M. (2000). Current information technology systems: The barriers to 3DayCar. 3DayCar sponsor report. Ref: T3 – 7/100. Online: www.3daycar.com
Womack, J., Jones, D.T., & Roos, D. (1990). The machine that changed the world. Rawson Associates.
3DayCar — www.3daycar.com
Digital Devices, Inc. was founded in 1965 in Cambridge, Massachusetts, by Ray Stata and Matt Lorber. In 2003, the company was acknowledged as one of the leading designers and manufacturers of high-performance linear, mixed-signal, and digital integrated circuits (ICs), which address a wide range of signal-processing applications in the electronics and related industries. Digital Devices is headquartered in Norwood, Massachusetts, and has a significant global presence in all major markets in the electronics industry. The company has numerous design, manufacturing, and direct sales offices in over 18 countries and employs more than 7,200 people worldwide (Figure 1). The company’s stock is traded on the New York Stock Exchange and is included in the Standard & Poor’s 500 Index. Many of Digital’s largest customers buy directly from the company, placing orders with its sales force worldwide; the remainder obtain their products through distributors or over the Internet. Just under 50% of Digital’s revenues come from customers in North America, while the balance comes from customers in Western Europe and the Far East. Ray Stata, Digital’s co-founder and longtime CEO, recognized the importance of fostering a culture of openness, where employees were empowered and encouraged to be innovative. This was reflected in the company’s structure, which exhibited a high degree of process decentralization, especially in the allocation of capital and operational budgets and, in particular, in the locus of decision making. Figure 2 illustrates the company’s structure. The core business functions are the ‘product line’ divisions: the Computer Products Division, Communications Division, Standard Linear Products Division, Transportation and Industrial Products Division, and the Micromachined Products Division, which was taken over by Ray Stata when he stepped down as CEO. Shown directly beneath these are the corporate business divisions that provided support for the product line divisions.
It is of significance that, Human Resources and Finance Divisions aside, all support divisions were engineering oriented, even the World Wide Sales and Corporate Marketing and Planning Divisions. This engineering-oriented culture was to have

Figure 1: Digital Devices Inc. Worldwide Design, Manufacturing and Sales Functions
Figure 2: Digital Devices Inc. Organizational Structure (as of 2000)
profound implications for IS development and governance in several areas of the company’s operations, as will be seen. Since its inception, Digital Devices gained a reputation as an excellent employer, where employees were respected, well remunerated and benefited from lucrative stock options. Individual commitment to the organization was manifested in the low level of staff turnover and the lifelong employment of many senior employees and engineers. The vast majority of employees remained loyal to the company despite the large salaries and attractive bonuses on offer from competitors. Significant too was the low level of turnover in employees from areas like sales and marketing, which was comparatively high in other companies in the sector. In December of 1997, Fortune magazine selected Digital Devices as one of the top 100 companies to work for in America and, later, in 2000, Fortune named the company as one of America’s most admired companies.
the main section of this article then describes the origins of the political tensions surrounding IS development and associated issues of governance. These subsections are followed by three that describe how the various ‘actors’ and their ‘communities-of-practice’ participated in the development and implementation of: (1) the sales and marketing component of the company’s intranet and (2) the corporate Web presence. The evidence adduced in describing these complex IS development ‘dramas’ facilitates an understanding of the roles that power, political conflict, and commitment play in shaping both the development process and its product — these are discussed in the penultimate section. The case therefore provides a real-world example of the ‘reality’ of systems development in innovative organizations.
The IS Function
The company’s IS function was located at corporate HQ in Norwood. Unlike senior executives in the sales or marketing divisions, the senior IS executive, the CIO, reported to the VP of Finance, the CFO (Figure 2). This is noteworthy, as most large organizations in the US had established relatively autonomous IS functions by the mid-1990s. Product line and support divisions at Digital Devices had IS managers and IT professionals dedicated to their particular IS needs and IT infrastructure support. For example, the Sales and Corporate Marketing divisions shared one IS team to take care of their sales and marketing IS requirements; however, in all cases the reporting relationships of IS staff were to the CIO and thence to the CFO. The following overview of IS operations at Digital indicates the outcome of this structural arrangement. In the early 1990s, Digital’s IS were centralized and based around an IBM mainframe; in this scheme of things, the role of IS was to gather corporate data. Subsequently, Digital’s major business systems were based around SAP packages, the first SAP module being implemented in 1994. In that year, the IS function also decided to standardize the desktop platforms in use across the organization, in order to provide all users with a common suite of applications and lower the total cost of ownership. Although many end-users preferred the Apple Macintosh platform, the decision to go with the PC hinged on the paucity of business applications for the Mac. So, while there was some opposition to this strategy within the organization, Digital opted for Microsoft Windows-based PC platforms worldwide and rolled out the Banyan VINES network operating system on the local area network. The one exception to this strategy of standardization was the engineering community in the product line and R&D divisions, who used Sun UNIX workstations.
At the end of 1998, there were about 4,000 Windows-based desktop PCs and approximately 2,000 Sun Unix workstations in Digital’s IT infrastructure. It was considered by many that Digital had a state-of-art IT infrastructure, although others were of the opinion that the same could not be said of IS support for areas like sales and marketing.
Central Applications: The Nexus of Sales & Marketing Product Knowledge at Digital Devices, Inc.
divisions had their own marketing sub-functions. The company’s IT infrastructure and related IS — that is, its Internet e-commerce and e-Business application, corporate intranet systems, and emerging sales and marketing information systems — played a major role in helping the sales and marketing operations deal with the large number, and wide geographical dispersion, of Digital’s products and customers. Based in Wilmington, a suburb of Boston, the Sales Division’s Central Applications function was the corporate nexus for all product-related knowledge at Digital Devices. It was through this function that sales and field engineers, in addition to product distributors, were trained and supported. It also had close functional relationships with the marketing engineers from the various product line divisions. This function also provided technical support via 1-800 toll-free lines direct to Digital’s customers. Each day it accepted and processed about 200 technical support questions from customers, and recorded each and every call. Central Applications also advertised new products, mainly at technical seminars, and through this forum it reached about 10,000 design engineers every year. The function also provided a fax-back service to customers — here, customers faxed in a request for data sheets1 and these were automatically dispatched by fax in a matter of hours. Application engineers also used the information contained in the data sheets of over 2,000 new and established products to compile the company’s short-form product catalogue and the related CD-ROM. This product data was also published on the company’s Internet Web site, and later became the preferred method of access, thus replacing Fax-Back. One of Central Applications’ key roles was in providing product support and technical information over the corporate intranet with its own Lotus Notes-based product and technical support application. 
Because of the need to better manage customer-related call tracking, customer contact, and product application problem-solving, this system evolved from a client/server platform into a web-based solution. Since its inception as a client/server system, this application, which consists of several separate but related databases, has been extended and ported to the corporate intranet via Lotus Notes Domino Server.
Why Engineering-Oriented Business ‘Communities-of-Practice’ Generally Held the Balance of Power in Shaping IS/IT Infrastructures
research and development budget been cut…all the guys come from the same universities, from the same professors and they all have been taught the same things. The common background in electrical and electronic engineering provided social actors with a shared language that facilitated communication and learning across functional areas within the organization — however, there were obvious differences in objectives between engineering and non-engineering ‘communities-of-practice’ that led to a degree of institutional tension around IS development and the deployment and operation of IT infrastructures. Such differences were reflected in the way IT resources were employed. For example, while engineers in the product line divisions used the corporate LAN and WAN infrastructure, they were relatively independent in terms of the computer platforms and applications they used. The IS manager described it thus: this federated, decentralized approach to building Digital’s IT infrastructure resulted from the way in which the company had operated since its foundation, where the product divisions and the product lines at the various sites maintained their own IT budgets and tended to provide for their own IT needs. The important point here is that ownership and control of non-corporate applications rested exclusively with the end-user community, with the IS function acting in a support role only. On the one hand, this engendered a local sense of community that helped reinforce each engineering ‘community-of-practice’; on the other, this independence of corporate IS extended beyond engineers in the product divisions, as evidenced in the Sales Division’s Central Applications function, which was staffed by applications engineers who developed and operated a key element of the corporate intranet with the blessing, but not the support, of the corporate IS function.
THEORY & PREVIOUS EMPIRICAL RESEARCH: COMMITMENT, POWER & POLITICS
Three separate but related theoretical perspectives are now briefly explored to help understand the case and associated analysis.
The first perspective draws on Selznick (1949, 1957), who illustrates that the process of institutionalization gives rise to, and shapes, the commitments of organizational actors and groupings. Selznick (1957) argues that it is through commitment, enforced as it is by a complex web of factors and circumstances, and operating at all levels within an organization, that social actors influence organizational strategies and outcomes. Here, ‘commitment’ refers to the binding of individuals to particular behavioral acts in the pursuit of organizational objectives. Selznick identified five sources of organizational commitment, viz.: (a) commitments enforced by uniquely organizational imperatives; (b) commitments enforced by the social character of the personnel; (c) commitments enforced by institutionalization; (d) commitments enforced by the social and cultural environment; and (e) commitments enforced by the centers of interest generated in the course of action. However, these commitments do not evolve spontaneously through the process of institutionalization; rather, they are shaped by ‘critical decisions’ that reflect or constitute management policy: as Selznick illustrates, the visible hand of leadership influences the social and technological character of organizations. Thus, Selznick (1957) maintains that organizational, group, and individual commitments determine whether organizational resources, such as IT, are employed with maximum efficiency and whether organizational capabilities are developed to leverage such resources to attain competitive advantage.
A qualitative, interpretive, case-based research strategy was adopted for this study. This involved a single instrumental case study (Stake, 1995) undertaken to obtain an understanding of the circumstances surrounding the design, development and deployment of IT-enabled information systems at Digital Devices, Inc. Purposeful sampling was employed throughout (Patton, 1990). Research at Digital Devices, Inc. was conducted at three sites, located in Limerick (Ireland), Wilmington (near Boston, MA) and the company’s corporate headquarters in Norwood (MA), in mid-to-late 1998. Fourteen taped interviews were conducted with a cross-section of ‘key informants’ from business and IS ‘communities-of-practice’; each interview was up to two hours in length. Additional data sources included documentary evidence and informal participant observation and discussion at the three sites. Elements of Selznick’s (1949) theory of commitment and insights from the literature on ‘power’ were employed as ‘seed categories’ to interpret the interview transcripts and other documentary sources. Finally, the case report approach was used to write up the research findings.
CASE DESCRIPTION: DEVELOPMENT & GOVERNANCE OF IS & IT INFRASTRUCTURES AT DIGITAL DEVICES, INC.
The following case report is structured into four sections, each of which provides a different, but complementary, perspective on the issues surrounding the development of IS and governance of IT at Digital Devices, Inc. The first provides the context for the other three by describing the origins of the political tension between the IS function and some of the ‘communities-of-practice’ responsible for sales and marketing operations in the company. The second delineates the problems with IS governance, while the third and fourth sections then describe the factors that influenced the development, implementation and governance of two IS/IT architectures: the company intranet and the corporate Internet system. The events described in the case occurred between 1996 and 1999.
Political Tension Surrounding the Development of IT-Enabled IS
and subsequently supported them. This happened with an application developed by engineers at the Santa Clara facility; that system was later rolled out by the IS function, as it had found favor with engineers in other divisions. The Central Applications Lotus Notes/Domino application, which could be accessed through an Internet browser over the intranet, gained acceptance from IS because its use of a Web browser on the desktop did not interfere with corporate standards; even so, IS staff refused to support the underlying Lotus Notes/Domino system.
In addition, the IS managers interviewed considered the CFO to be unique in his passion for IT and his understanding of its benefits to the company. They also thought that few senior executives within Digital were as enthusiastic proponents or sponsors of IT as he was. Nevertheless, the following statement from the IS Manager for Sales and Marketing is revealing, in that it may indicate where the fundamental cause of the frustration with the IS function lay: I think there would be good agreement that there are areas, especially in Sales and Marketing, that he just does not understand — the soft stuff, customer relationship management [etc.]…There is agreement that he is probably too removed from that side of the business, that he might say: “Well, wait a second, why are we spending money on that?” And well sometimes you know at the high level that many of the vice-presidents communicate when these initiatives are being discussed, but I think what he tends to fall back on is that if the vice president responsible is willing to fund it out of his own budget, and put his best people on it, then he would be willing to support [it]. While the IS function was not held in the highest regard by Sales and Marketing engineers, the reverse was also the case, as both Sales and Marketing (including the marketing sub-units in the product line divisions) tended to go it alone more often than other organizational units when it came to providing their own IT solutions. Nevertheless, IS was always the first port of call whenever new systems were planned, in order to determine whether or not the IS function could deliver the desired solution. However, because of human resource limitations and skills shortages, the demand for corporate-wide systems, and the attendant need to prioritize the systems to be developed, IS was not always in a position to deliver a particular solution.
The Other Side of the Governance Coin
The IS function ran into problems that were not of its own making in undertaking certain projects for business ‘communities-of-practice’. It had been badly burned in the past by, for example, the original Opportunities, Strategies and Tactics (OST) system for Digital’s sales team and the organization’s sales forecasting system, both of which were failures. Accordingly, the IS function tended to tread carefully so as not to get embroiled in change management problems and resultant system failures. Hence, it adopted a policy that required business areas to appoint a project leader who was highly competent in his or her field and who would have top management support, as happened with the successful SAP Logistics and Order Fulfillment System. In this project, a senior manager from manufacturing acted as user project manager, and an IS project manager handled development. The problems that arose in the implementation of this system revolved around a significant change in the logistics process that would effectively eliminate all product distribution warehouses worldwide, save for those at the manufacturing site of origin. The new system allowed for a form of just-in-time manufacturing whereby products were to be shipped direct from the manufacturing site of origin to a customer once ordered. As the IS manager responsible outlined:
An IT guy could not make that type of business decision, and an IT guy could not get through political issues in Europe: like saying that we are going to close down that warehouse and make 30 people redundant. The business manager who did that had the support of the vice president of worldwide sales. And that is the struggle I alluded to before, but now we’re in the space of systems where someone comes up with [a] great idea and says well I think we should do this, and I say fine, but who are you going to put into this to run it? And the response might be: “Well I don’t really have anyone that I am willing to give up at present.” That is a signal to me that the system is not that important. Whereas change management problems were resolved when the SAP system was implemented, the new sales forecasting system was more problematic, as problems of a cross-functional nature between the manufacturing and marketing functions, and a lack of buy-in on the marketing side, caused the system’s implementation to fail. One of the major problems here was that Manufacturing and Corporate Marketing had separate sales forecasting needs. Furthermore, their existing approaches to forecasting, although separate, were largely dependent on each other. In any event, managers from the manufacturing locations and the marketing groups participating in the development redesigned the forecasting processes and developed the system around the new processes. However, the system was never used to its full potential because business managers not involved in the design and development were reluctant to change fundamental forecasting processes. Thus, while the new system was implemented, the basic business processes involved in forecasting were never changed. The IS manager responsible for this development project stated that it became “a pass the buck issue” with both marketing and manufacturing.
As a result of these implementation problems, responsibility for forecasting was removed from the marketing function, and the relevant planning activities were transferred to and integrated into manufacturing processes and then ported back into marketing. Hence, the new planners effectively spanned both functions. The IS manager for Sales and Marketing summed up the situation thus: It seems to me that everyone is always fascinated with new systems, and they believe that a particular solution is going to solve all their problems; and [whether the systems work or not] it all comes down to whether or not the organization is lined up — that the right people, with the right incentives, are in place, and that business managers have thought through what this is going to mean, and so on. In response to the problems they were experiencing with the Sales and Marketing divisions, IS managers wanted to see a single vice president of Sales and Marketing, so that there would be coherence, vision and leadership in the planning, development and implementation of sales and marketing systems. The other side of this coin, however, was that while such a move had the potential to reduce political infighting, it might also act as a mechanism to impose corporate standards on highly innovative operations like Central Applications.
business and IS managers wished to tap into the potential for intra-organizational communication and learning that such systems offered. Essentially, business users were employing Web-based technologies to share their knowledge of products and customers with each other. In order to develop a strategy that would bring order to the chaos that then existed, the IS function benchmarked its proposed strategy against companies such as DEC, Hewlett Packard, Sun Microsystems and Silicon Graphics. The IS team observed two dominant approaches to implementing intranet technologies in these companies. First, they noted that Sun Microsystems and Silicon Graphics had adopted a laissez-faire strategy and basically let staff do their own thing, whereby every workstation had the potential to become an intranet Web site. DEC and Hewlett Packard, by contrast, took a much more disciplined and rigorous approach by instituting a formal strategy that included the adoption of exacting standards, in conjunction with a corporate template that mandated a certain look and feel for each site. The IS function at Digital adopted a strategy that lay somewhere between the two approaches. In implementing this strategy, an umbrella intranet site was first established and the representatives of all the other sites were informed of the new policy. Essentially, this involved the observance of some basic guidelines that end-user developers had to follow; these guidelines merely set certain standards for the Web sites. No effort was made to tell users what they could or could not place on their sites, but certain policies nevertheless had to be observed. The IS project manager responsible commented on this endeavor and maintained that it had “worked out pretty well, but there was some duplication of effort.
For example, if I need a phone list of people, there are probably 10 of them out there now, and each one, apart from the corporate one, is maintained for people in a particular Web group.” Nonetheless, in response to such issues, and to introduce more functionality and cross-site accessibility, a cross-functional intranet development steering group was established. This group was charged with two tasks: to develop standards and to develop generic tools such as a search engine. The group also had responsibility for formulating a strategy to guide the direction of the intranet and for determining what, if any, additional standards needed to be put in place. However, in keeping with the organizational culture, rigid structures were not put in place, nor were Web authors questioned about what they were doing with their sites. Even so, some control was exercised over the use of resources to prevent particular groups from monopolizing them and thereby preventing other voices from being heard.
Central Applications Leads the Way in Providing Intranet Support for Sales & Marketing
As indicated previously, Lotus Notes was not supported by the IS function, and because Digital’s CIO did not want Lotus Notes client software on corporate desktops, it seemed unlikely that the applications developed using Lotus Notes would be of general use to the people that needed them — the sales and field engineers. However, with the advent of the corporate intranet, and with the capability of Notes’ Domino Web server, the Central Applications product support system came into its own, and such was its success that the product divisions and the product lines looked to Central Applications to host new product information. The IT consultant at Central Applications described it thus:
When Internet technology and Web servers first became available and popular, a lot of people went out and set up their own intranet servers, and it was fun and games for a while. But they soon realized how much work it was to maintain their own sites and keep their information fresh…So what we have done is make it easy for people [by hosting their intranet sites], and the [Central Applications manager] feels that if we keep it easy for people, they will come. In addition to hosting new product data for the product divisions, something that was pivotal in helping sales and field engineers to promote Digital’s diverse product range, Central Applications also hosted intranet Web sites for the product lines, as many of them had neither the time nor the inclination to maintain their own sites. Application engineers supported and input most of the data into the Lotus Notes databases; for example, the sales bulletins, the product problem data, and so on. With the general accessibility of the Marketing Information Central Web site, it was hoped that much of the work of inputting new product status data would be taken over by the entities responsible for the original data such as the product lines and so on.
Creative Tension & Development of Digital’s Corporate Web Presence
One of the major difficulties that arose in relation to the implementation of this Web-based IS centered on the manner in which product details were prepared for publication on the Web. In a move that paralleled the intranet policy at Central Applications, the Webmaster shifted the emphasis from authorship and ownership of all new product data to the product lines. The early successes in deploying what was a new technology led some senior managers to believe: (a) that traditional mechanisms of customer contact were now obsolete; (b) that existing business processes were under threat; and (c) that catalogs, CD-ROMs, and sales engineers were now of little value. The perspectives of IS function managers on the issue of IT support for promoting product data to customers are summed up by a comment from the IS manager for the Internet project: The [Central Applications Manager] does this on the intranet internally, [the Webmaster] is on the Internet site: I think maybe that there is some competition there, I don’t think that it is organizationally clear who is responsible for this—it just hasn’t been defined. I don’t think Digital works like that, [Central Applications] have done this for a long time and now [the Webmaster] needs to do this externally. The choices are “I can use his stuff or I can do my own thing”; [The Central Applications Manager], I think, gets and maintains it himself, while [the Webmaster] has the product line people do it for her. [Central Applications are] facing field service engineers while [the Webmaster] is facing the customer. Thus, the absence of an overarching policy on the management of the two customer interfaces at Digital (one direct, the other via sales and field service engineers) led to competition and tension between two important organizational functions.
However, this proved beneficial and led to optimal outcomes for customers and field service and sales engineers, as the Web team and Central Applications unit both wished to be perceived as the nexus of corporate knowledge. It must be said, however, that unequivocal top management support helped mitigate many of the problems mentioned and others that arose elsewhere in the organization regarding the new Internet IS, and thereby led to a successful development outcome.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Finally, while Digital’s intranet strategy was an undoubted success, it had two obvious weaknesses. First, the heterogeneous nature of the Web and data servers (e.g., Microsoft’s Internet Information Server (IIS), Lotus Domino, Apache, etc.) made it more difficult for the IS function to roll out anti-virus and worm-protection upgrades quickly across the different platforms. Thus, weak links could exist that would compromise the company’s local and wide area networks (LANs and WANs) and cause data loss. The challenge here for the IS function would be to implement a strategy that migrated non-standard servers to the corporate standard(s) and introduced automated anti-virus upgrades and other means to protect valuable corporate data repositories. Second, the case description of Digital Devices’ intranet and Internet infrastructures indicates that the company’s knowledge resources were not well integrated, in that there existed islands of knowledge stored in diverse data repositories — something that contravenes good knowledge management practice. Problems of duplication of effort and data inconsistency aside, a major challenge for Digital’s IS function is to protect this learning organization’s most valuable resource: knowledge of its core business processes and products.
REFERENCES

Abrahamsson, P. (2001). Rethinking the concept of commitment in software process improvement. Scandinavian Journal of Information Systems, 13, 69-98.
Butler, T., & Fitzgerald, B. (2001). The relationship between user participation and the management of change surrounding the development of information systems: A European perspective. Journal of End User Computing, (Jan-March), 12-25.
Butler, T. (2003). An institutional perspective on developing and implementing intranet- and Internet-based IS. Information Systems Journal, 13(3), 209-232.
Cavaye, A.L.M. (1995). User participation in system development revisited. Information and Management, 28, 311-323.
Jasperson, J., Carte, T.A., Saunders, C.S., Butler, B.S., Croes, H.J., & Zheng, W. (2002). Review: Power and information technology research: A metatriangulation review. MIS Quarterly, 26(4), 397-459.
Keen, P.G.W. (1981). Information systems and organizational change. Communications of the ACM, 24(1), 24-33.
Kling, R., & Iacono, S. (1984). The control of information systems after implementation. Communications of the ACM, 27(12), 1218-1226.
Markus, M.L. (1983). Power, politics, and MIS implementation. Communications of the ACM, 26(6), 430-444.
Patton, M.Q. (1990). Qualitative Evaluation and Research Methods. London: Sage.
Sabherwal, R., Sein, M.K., & Marakas, G.M. (2003). Escalating commitment to information system projects: Findings from two simulated experiments. Information and Management, 40(8), 781-798.
Selznick, P. (1949). TVA and the Grass Roots. Los Angeles: University of California Press.
Selznick, P. (1957). Leadership in Administration. New York: Harper and Row.
Stake, R.E. (1995). The Art of Case Study Research. Thousand Oaks, CA: Sage.
Winograd, T., & Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex Publishing Corporation.
The product data sheets contained detailed descriptions and specifications of products; they were therefore a vital component in making sales, as customers required this information to match products to their specific design needs.
Development of an Information Kiosk for a Large Transport Company: Lessons Learned Pieter Blignaut, University of the Free State, South Africa Iann Cruywagen, Interstate Bus Lines (Pty) Ltd., Bloemfontein, South Africa
An information kiosk system is a computer-based information system in a publicly accessible place. Such a system was developed for a large public transport company to provide African commuters with limited educational background with up-to-date information on schedules and ticket prices while also presenting general company information in a graphically attractive way. The challenges regarding liaison with passengers are highlighted and the use of a touchscreen kiosk to supplement current liaison media is justified. System architecture is motivated and special services offered by the system are discussed. Several lessons were learned regarding the implementation of such a system in general, as well as in this environment specifically. An online survey indicated that the system fulfils its role of providing useful information in an accessible medium to commuters in a reasonable time.
The transport of workers between their place of residence and workplace is a worldwide phenomenon, which has been given a unique twist in the Republic of South Africa due to the policy of Apartheid imposed by the previous government. This policy has given rise to cities such as Botshabelo in the central Free State, 47 kilometers from Bloemfontein, the industrial center of the region. Even though Apartheid has been abolished and citizens are free to stay where they choose, practical necessity dictates that Botshabelo, as a legacy of the Apartheid policy, will remain viable and populated for a long time to come. The workers of the central Free State, settled far from their workplace and unable to afford private means of motorized transport, use public passenger transport. In the absence of commuter trains in this region, this need is addressed by 16-seater minibus taxis and buses.
Interstate Bus Lines (Pty) Ltd. as a Major Transporting Company
The public transport service provider in the central Free State, Interstate Bus Lines (Pty) Ltd. (IBL), was founded in 1975 as Thaba ‘Nchu Transport, and has since grown into a major company with 508 full-time employees. IBL operates a fleet of 62 train buses and 134 standard buses from the cities of Botshabelo, Thaba ‘Nchu, and Mangaung to the terminal building at Central Park in Bloemfontein. A train bus pivots around a center point and may carry 110 seated passengers, whereas a standard bus is rigid and is designed to transport 65 seated passengers. IBL operates 702 trips daily in this area and transports 70,500 to 80,000 passengers weekly between their homes and workplaces (see the map of the operational area in Figure 1). Depending on the exact area where a person lives or works, it may happen that he or she will have to take more than one bus for a one-way trip.

Figure 1: Operational Area of IBL

At Central Park, commuters
may transfer to other buses heading for various businesses and factories, as well as the traditionally white suburbs. Central Park also serves as the main ticket selling point. IBL also provides other services to its community, for example, unsubsidized feeder services in Botshabelo, as well as between Botshabelo and Thaba ‘Nchu, and the transport of students to and from schools. Passengers living to the north and south of Thaba ‘Nchu are also provided with transport services (Figure 1). Interstate Bus Lines has a fleet of buses fully committed to peak demand, with allowances made for workshop allocation. IBL is equipped to render a service, along with the minibus taxi industry, that satisfies the public demand. IBL itself services and maintains its fleet of buses. The bus fleet is among the most modern in the Republic of South Africa, with an average age of 4.8 years per bus. IBL trains its own drivers during an intensive six-week course. Drivers receive annual retraining and evaluation, lasting a further two weeks. IBL’s income is based on ticket sales, a government subsidy and a small special-trip market, and averages around USD 2 million per month, amounting to an annual turnover of around USD 24.2 million. IBL’s fleet of buses travels approximately 14.5 million kilometers per year while consuming 5.6 million liters of fuel. Services run from 04:15 to 22:30 daily. The peak requirement is from 06:00 to 07:45 in the mornings and from 15:30 to 18:00 in the afternoons, peak being defined as the time when the total bus fleet is fully committed to the transport of commuters. Figure 2 indicates the times when the largest concentration of commuters passes through Central Park. All commuters must be in possession of a valid ticket, purchased from the Central Park ticket kiosk. Tickets are available as daily, one-way, weekly or monthly tickets.
Figure 2: Average Number of Passengers Passing Through Central Park per Half Hour Interval During Week Days
Interstate Bus Lines conducts frequent surveys in order to determine the profile of its passengers. According to the most recent survey (Breytenbach, 2003), passengers are mostly African (89.1%), Sotho-speaking (51.0%), female (52.3%), and are predominantly domestic workers (20.8%). According to the living standards index, 95.8% have electricity in their homes, and 63.2% earn between $135.00 and $275.00 per month. The average educational level of commuters is grade six, although 27.3% of passengers are enrolled for a post-school qualification. Less than 10% of IBL’s passengers have access to computers and less than 5% have access to the Internet, either at home or at work. IBL transports predominantly domestic workers, although students, part-time workers and shoppers constitute almost half of the company's clientele base (Figure 3). Many passengers are younger than 30 years of age and are studying or have just completed some form of education.
Business processes are fully supported by information technology. IBL uses a system of electronic ticket machines (ETMs) and all ticket sales are recorded in a database. Tickets bear a magnetic strip and when a passenger boards a bus, the ticket is swiped and the passenger recorded for the specific route and trip. Special electronic equipment, installed in the gearbox of every bus, records movement times and speed. The raw data from these devices is processed and used to prepare monthly statistics on passenger numbers and kilometers traveled. These statistics are then used as proof for the monthly government subsidy claim. On the passenger liaison side, however, IT applications are non-existent. One might think that potential for a Web page exists, but the lack of access to computer technology and the limited computer literacy of commuters would make this a futile exercise.
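The statistics pipeline described above (raw ETM swipe records aggregated into the monthly passenger and kilometer figures that support the subsidy claim) might be sketched as follows. This is an illustrative sketch only: the record layout and field names are hypothetical and are not taken from IBL's actual system.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SwipeRecord:
    """One ticket swipe as captured by an electronic ticket machine (ETM).
    Field names are hypothetical."""
    route_id: str
    month: str       # e.g., "2003-05"
    route_km: float  # scheduled one-way distance of the route

def monthly_statistics(records):
    """Aggregate raw swipe records into per-month passenger counts and
    passenger-kilometers -- the kind of figures a subsidy claim rests on."""
    passengers = defaultdict(int)
    pax_km = defaultdict(float)
    for r in records:
        passengers[r.month] += 1       # each swipe is one boarding
        pax_km[r.month] += r.route_km  # distance credited to that boarding
    return {m: {"passengers": passengers[m], "passenger_km": round(pax_km[m], 1)}
            for m in passengers}
```

A month's claim figures would then simply be read off the returned dictionary, e.g. `monthly_statistics(records)["2003-05"]["passenger_km"]`.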
SETTING THE STAGE

Need for Communication
Figure 3: Graphical Representation of IBL’s Clientele Base

Dora Ramakoatse works as a housekeeper in the home of a wealthy Bloemfontein businessman. She is 46 years of age and her native tongue is Sesotho. She also speaks
some Afrikaans and English. She has formal school education up to grade 5 and has a limited reading capability. Dora lives in Botshabelo and every morning at 5:00, she boards a bus for a trip of some 50 kilometers to Central Park. There she transfers to another bus for the suburb of Hospital Park, where her employer and his family live. At about 6:30, she is just in time to wake the kids, prepare breakfast and start her daily duties of washing, cleaning and ironing. At 15:30, she once again takes a bus to Central Park and from there to Botshabelo. At around 17:00, she arrives at her own home, exhausted but still with the responsibility of seeing to the basic needs of her own family, such as preparing meals for the evening and the day to come. Dora is entirely dependent on the commuter service to get to and from her workplace. She used to complain about buses being late or not even turning up for scheduled trips. In the winter, she has a problem with buses being extremely cold in the early mornings. Sometimes she complains of buses being overloaded or having reckless or impatient drivers. Being dependent on a basic minimum wage, she is hit hard by annual tariff increases and does not always understand why they are necessary. What Dora needs is to be able to express her complaints in a medium that allows her to comment immediately when a problem occurs. She also needs the assurance that her complaints will reach those who can do something about them. This medium should not only allow commuters to give feedback on IBL’s services, but should also present commuters with information on schedules and ticket prices.
Communicating with Commuters: The Traditional Approach
that the brochures are often outdated before they hit the streets. People also remove the brochures from the buses and notice boards. George confirms that it is difficult to replace brochures on just about 200 buses with updated ones at short notice. He also confirms that he cannot keep up with answering all the queries to the satisfaction of both IBL and its clientele on a regular basis. He needs a way to disseminate information regarding route and schedule changes, tariff increases, and so forth at short notice.
Communicating with Commuters: Alternative Approach
After consulting with George, passenger representatives and local authorities, Interstate Bus Lines approached the Department of Computer Science and Informatics (DCSI) at the University of the Free State (UFS) early in 2003 to assist in the search for a solution. A touchscreen information kiosk (Figure 4) was identified as a way of supplementing the communication techniques mentioned previously while at the same time overcoming some of their disadvantages. Additionally, it could serve the purpose of improving the company’s corporate image among commuters and could expose previously disadvantaged people to technology. The idea was that the system should present commuters with the opportunity to query a database regarding routes, schedules and ticket prices. During periods of inactivity, the system should present general company information on a continuous basis. As an added bonus, advertisements from shops in the center could be displayed. Passers-by or people queuing in front of a ticket box should then be able to follow the presentations, which are accompanied by attractive graphic animations and sound or video clips. It was imperative that the system be updatable on a regular basis, as ticket prices increase, timetables change, shops change their special promotions, and so forth. Furthermore, it should be possible for George and other staff members of IBL to do these updates themselves after the developers had handed over the completed system.

Figure 4: Touchscreen Information Kiosk at IBL Ticket Office in Central Park

In an attempt to conform to these requirements, a user-friendly switchboard application was integrated with a set of presentations and a single-user database
package to set up a comprehensive information kiosk while allowing maintenance by people with end-user skills only. Because of the centralized location of the ticket office, it was not necessary to connect the system to other information kiosks. Also, because of the non-existence of a web page, no need existed to connect the system to the Internet. Since the complete timetable is in any case maintained in an Access database for scheduling purposes and driver instructions as part of the business process, it was worthwhile exploring the possibility of connecting the kiosk system to this database by means of a local area network. This case study outlines the impediments, design issues and pitfalls that were encountered during the implementation of the system.
Information kiosk systems are computer-based information systems in publicly accessible places, offering access to information or transactions for an anonymous, constantly varying group of users with typically short dialogue times and a simple user interface (Holfelder & Hehmann, 1994). Depending on the nature of the business, the presence of an information kiosk could be a necessity, but most probably is a supplement to existing means of communication. The success of such systems depends largely on the attractiveness of their user interfaces, how easily they allow access to information and how clearly the information is presented (Borchers, Deussen, & Knörzer, 1995). It is no use having the technology with all the latest information if Dora and others like her cannot access it or do not understand how to use it.
Classification of Information Kiosks
Borchers, Deussen, and Knörzer (1995) propose a classification of kiosk systems according to their major tasks:

• Information kiosks have the primary goal of providing information in a limited subject field, for example, at a railway station where users can find information on a connection to a chosen destination. Users of such systems use the system because they need the information — they do not have to be extrinsically motivated to use the system.
• Advertising kiosks are installed by companies to present themselves or their products to the public in an attractive and innovative way. The missing initial motivation of potential users has to be compensated for by a visually attractive design. The contents should be interesting and entertaining and should motivate the user to explore the system further.
• Service kiosks are similar to information kiosks with the added functionality of information entry by the user, for example, hotel reservation systems where data such as names and addresses have to be entered to make a booking.
• Entertainment kiosks usually do not have a specific task apart from entertaining the user.
Development of an Information Kiosk for a Large Transport Company
Borchers, Deussen and Knörzer (1995) acknowledge that most information kiosk systems will belong to two or more of the previously mentioned classes. The system that was developed for the IBL ticket office can typically be regarded as both an information kiosk and an advertising kiosk with the potential to include the functionality of a service kiosk at a later stage.
System Architecture

Development Tools
Several possibilities regarding the development tools were considered in order to develop a system that would fulfill the needs expressed previously. A presentation package such as Microsoft PowerPoint® provides for easy end-user updates while allowing attractive graphical animations and multimedia effects; the need to prepare an individual slide for every trip, however, makes this an impractical solution on its own. A user-friendly front-end system to query the database for the relevant information, or a fully-fledged object-oriented database environment to store graphic images, video clips, and so forth as well as the route and timetable information, could have been developed by professional programmers. Another possibility that was considered was a series of HTML and ASP files to be viewed in a browser. Such an environment would have allowed the combination of graphic attractiveness, the use of multimedia and database querying. Although these tools might have done the job, they would have been difficult to maintain for people with end-user skills only. Finally, it was decided to develop the IBL system by integrating a dedicated end-user application (developed in Delphi®) with a presentation package (Microsoft PowerPoint®) and a single-user database package (Microsoft Access®) on Microsoft Windows® as operating system. The complete timetable was already available in Microsoft Access, and both PowerPoint and Access were reasonably well known to the IBL staff members who would be responsible for system maintenance. The integration was done in such a way that commuters would not be aware of transitions between the environments. A diagrammatic representation of the system is shown in Figure 5. A form at the bottom of the screen acts as a switchboard (Figure 6). Buttons on the switchboard allow users to navigate through the system. Some of the options activate MS PowerPoint® presentations in the upper part of the display.
These presentations convey general company information, ticket office hours, contact telephone numbers, advertisements from shops in the center, and so on. Other options enable users to query the MS Access® database for route information, ticket prices, schedules, and so forth. The use of a dedicated end-user application as switchboard also allows the programmatic capturing of research data such as frequencies of usage, commuters’ preferences regarding language, typing speed, as well as a limited user profile. These values are saved into the underlying database and are analyzed periodically by the researchers.
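The original switchboard was a Delphi® application writing research data to the kiosk's MS Access® database. As a rough illustration of the usage-logging idea described above, the following Python sketch records comparable events in an SQLite table; the table layout, event names and field values are hypothetical and not taken from the IBL system:

```python
import sqlite3
from datetime import datetime, timezone

def init_log(conn):
    # Minimal log table: timestamp, event type, language choice, free detail.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS usage_log "
        "(ts TEXT, event TEXT, language TEXT, detail TEXT)"
    )

def log_event(conn, event, language=None, detail=None):
    # Every switchboard interaction is appended for later periodic analysis.
    conn.execute(
        "INSERT INTO usage_log VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), event, language, detail),
    )

conn = sqlite3.connect(":memory:")
init_log(conn)
log_event(conn, "button_press", language="Sesotho", detail="route_query")
log_event(conn, "presentation_start", detail="company_info")
n_events = conn.execute("SELECT COUNT(*) FROM usage_log").fetchone()[0]
```

Aggregating such a log periodically would yield the frequencies of usage and language preferences mentioned above.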
To convey commuters' needs to management, George has to know what they think and how they experience the services that IBL renders. The information kiosk allows commuters to register comments, complaints, requests, and so forth by means of an on-screen keyboard (Figure 7). The idea for this technique is credited to researchers at the Human Computer Interface Laboratory at the University of Maryland (Plaisant & Sears, 1992; Sears, 1991; Sears, Kochavy, & Shneiderman, 1990; Sears, Revis, Swatski, Crittendon, & Shneiderman, 1993). Valuable feedback for IBL has been generated in this way. Although many passersby use this facility just for the sake of playing around and getting the feel of using a touchscreen, some 800 valid and useful comments were registered in the period May to December 2003. Since the information kiosk is available night and day, Dora is now happy that she has a way to express her concerns the moment she gets off a bus. George, too, has expressed his satisfaction with getting honest and unbiased feedback on a daily basis. Route and timetable information is kept in an MS Access® database. Commuters are able to query the database by means of a form that is displayed full-screen, covering the running presentation windows. Figure 8 shows a screen print of this form.
Lessons Learned During Development
Several sources are available that provide guidelines for the design of touchscreen-based information kiosks (Borchers, Deussen, & Knörzer, 1995; ELO Touch Systems, 2002; ELO Touch Systems, 2003; MicroTouch Systems, Inc., 2000). These guidelines are mostly generic in nature and not specific to a particular package or to an integrated approach such as the one proposed here. During the development of this system, several lessons were learned that proved critical to an integrated development approach.

Figure 7: On-Screen Keyboard to Allow Free Text Input with Touch Typing
According to ELO Touch Systems (2002), the cabinet in which the touchscreen is installed should be in the company colors, have proper ventilation, and should be mounted at a viewing angle that minimizes differences in user height. The design of the cabinet should be attractive and sturdy. In the current study, the system was installed in a metal casing just below average eye level and mounted against the wall between two cubicles of the ticket office at a slight angle (Figure 4). The screen was placed inside the metal casing in such a way that only the glass part was accessible, in order to prevent users from tampering with the screen's adjustment controls. The screen could be closed with a lockable lid after hours. This has worked well, and to date no incidents of tampering or vandalism have occurred. The continuous presentations attract the attention of passersby and are visible to people queuing at the ticket offices. The near-vertical mounting also prevents people from putting objects, for example, food, parcels, and so forth, on the screen.
In the current study, it was found that both these strategies seemed unnatural to the average user from the IBL commuter community. These users expect a reaction from the system as soon as they press a button; if nothing happens, they tend to press harder without lifting the finger. The button-mode proved to be the most appropriate strategy for this user group. In this mode, touching the screen is equivalent to pressing and releasing the mouse button: the action occurs as soon as the user touches the screen.
The lift-off strategy proposed by Shneiderman (1991) implies that a cursor should be visible on the screen that is not obscured by the finger. The concept of a visible cursor is contradictory, however, to the idea of ELO Touch Systems (2003) that there should be no cursor, because the user should focus on the entire screen instead of the arrow. The normal pointing cursor for button-aware applications is the northwest pointer, but for PowerPoint presentations the default cursor for links is the pointing finger. In order not to confuse commuters, it was essential to change the cursor to a consistent graphic throughout the system. A top-down arrow has its hot spot on the bottom tip of the arrow and gives good feedback with regard to the exact item selected, since the cursor is not entirely obscured by the finger. Replacing the top-down arrow with a hand with a finger pointing downward would, however, suggest an awkward physical position. In the end it was decided that the visual clue of a hand with a pointing finger outweighs the disadvantage of occasional obscuring.
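The behavioral difference between button-mode and lift-off activation can be sketched as a small event-handling function. This is a simplified illustration only; the event model, tuple format, and target names are hypothetical, not part of the IBL system:

```python
def activated_target(events, strategy):
    """events: sequence of ('down'|'move'|'up', target) tuples from the screen."""
    if strategy == "button":
        # Button mode: the target under the finger at touch-down fires
        # immediately, like pressing and releasing the mouse button at once.
        for kind, target in events:
            if kind == "down":
                return target
        return None
    if strategy == "lift-off":
        # Lift-off: a visible cursor tracks the finger; the target under
        # the cursor when the finger is lifted fires, allowing correction.
        current = None
        for kind, target in events:
            if kind in ("down", "move"):
                current = target
            elif kind == "up":
                return current
        return None
    raise ValueError("unknown strategy: " + strategy)

# A user touches "Routes", slides to "Prices", and lifts the finger there.
touch = [("down", "Routes"), ("move", "Prices"), ("up", "Prices")]
```

For this touch sequence, button mode activates "Routes" at touch-down, while lift-off activates "Prices" at release, which is exactly why users who press harder without lifting the finger get no reaction under lift-off.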
Use of Sound
The effect of sound to attract attention and for purposes of feedback is well known (Preece, Rogers, Sharp, Benyon, Holland, & Carey, 1994). The value it adds with regard to information conveyed is somewhat less, however. Speech has a transient nature: if you did not catch the message, then you did not catch it, while written text has the advantage that it can be read over and over again at the reader's own tempo until he/she understands what is said. At the IBL kiosk, each of the running presentations had accompanying sound effects and narration. The facilities that expected user input had short written as well as spoken instructions. Users could at any time press a button to listen to the instructions again. Touchscreens have no tactile feedback like a soft key on a microwave oven keypad that gives when pressed, or a button on an elevator's keypad that can physically be depressed. Sound effects (e.g., an audible click with every valid press) and display changes (e.g., a button that is displayed differently when selected) are important to inform the user that the input was accepted. The placement of the loudspeakers presented a problem. The speakers were placed inside the metal casing, facing the ventilation holes at the sides of the unit. The kiosk was placed in a public foyer with very bad acoustics, noise from nearby stores, and even a nightclub. Users complained that they could not hear properly, even with the volume setting at its highest. It would have been better had the casing been made a little wider so that the speakers could fit in next to the screen, facing towards the users behind a grid similar to the ventilation holes. A sound amplifier would also have been an improvement.
One Window with Many Inputs or Many Windows with One Input Each?
Due to users’ limited previous exposure to technology and limited educational background, the need to keep the user interface simple and easy to use was identified from the start. For example, it was thought that, whenever a series of user inputs was required, a single screen capturing all the inputs would provide the simplest user interface. It was found, however, that such a screen confused and frustrated the users. The error messages they got due to incomplete inputs caused them to walk away. This approach was then replaced with one where the user inputs were obtained through a series of modal dialog boxes, appearing one after the other (Figure 9). Each dialog box had only one set of mutually exclusive buttons. This was accepted much more easily as users were guided to answer one question after the other.
Navigating Through a Series of Dialog Boxes
Initially, the series of dialog boxes was provided with a set of Previous/Next buttons similar to the typical setup wizards for many software applications. It was found, however, that users at the IBL ticket office did not understand the concept of going back to edit a previous input, became frustrated, and walked away leaving the system halfway through a sequence. The Previous buttons were then replaced with Restart to allow users to start with the query all over again. The Next buttons were taken away — when the user made a selection the next box in the sequence was automatically displayed. This approach was much better accepted. This might be another confirmation of the findings of Blake, Stevenson, Edge, and Foster (2001), as well as Walton and Vukovic (2003), that African people have a different view of hierarchies and sequences than Western people have.
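The revised dialog flow above — one question per box, auto-advance on selection, and Restart instead of Previous — can be sketched as follows. The question set and function names are hypothetical, chosen only to mirror the kind of profile queries shown in Figure 9:

```python
QUESTIONS = [  # hypothetical question sequence for illustration
    ("Language?", ["Sesotho", "English", "Afrikaans"]),
    ("Destination?", ["Botshabelo", "Thaba Nchu"]),
    ("Day?", ["Weekday", "Saturday", "Sunday"]),
]

def run_dialogs(inputs):
    """inputs: the user's successive selections; the special value
    'RESTART' clears all answers and starts the query over.
    Returns the completed list of answers."""
    answers = []
    it = iter(inputs)
    while len(answers) < len(QUESTIONS):
        choice = next(it)
        if choice == "RESTART":
            answers = []            # start the query all over again
        else:
            answers.append(choice)  # auto-advance: next box appears at once
    return answers

# A user picks Sesotho, restarts, then completes the sequence in English.
result = run_dialogs(["Sesotho", "RESTART", "English", "Botshabelo", "Weekday"])
```

There is no Next action to understand: making a selection is itself the advance, and the only other control wipes the slate clean, which matches how the IBL users preferred to recover from mistakes.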
Control Types: Buttons, Scroll Bars, Sliders, Radio Buttons, etc.
According to Shneiderman (1991), it is possible to use actions other than clicking a button on a touchscreen, for instance, sliding, dragging, rubbing, etc. He asserts that many actions that are unnatural with a mouse are more intuitive with a touchscreen, for example, dragging the arms of an alarm clock to set the time. Shneiderman (1991) discusses evidence that most users succeed immediately in using a touchscreen information kiosk that utilizes these kinds of interactions.

Figure 9: Series of Dialog Boxes to Determine User Profile
Our experience with the current study, however, was that users find the single tap on the screen easier than a slide. It was easier for them to adjust the sound volume if the control consisted of a set of discrete values implemented with radio buttons (Figure 6) than when they had to drag a slider along a continuous scale. It must always be kept in mind that users of the IBL system are casual users who are unlikely to use the system time and time again and will, therefore, never have the opportunity to practice the action of dragging.
Do Away with the Windows Task Bar
ELO Touch Systems (2002) recommends that the system should not have a "Windows look": there should be no indication of the operating system, and users should not even think of the system as a computer system. In the current system, this lesson was learned the hard way. Initially, the Windows task bar was always available at the bottom of the screen, even though it was set to "auto-hide". Some users would press a button and then read the information presented on the screen while dropping their hands off the bottom edge of the screen; this sometimes caused accidental touches, which made the task bar jump up. For computer literate users, especially teenagers, this was an invitation to fiddle around with the system, edit the PowerPoint presentations, search for evidence of Internet availability, and even close down the system. For computer illiterate users, the sudden appearance of a bar at the bottom of the screen distracted their attention and even made them uncertain about whether some reaction on their part was expected. When the task bar was removed altogether, these problems were largely solved.
Cater for User Ignorance
One of the general design guidelines for information kiosks states that system reaction must be immediate to prevent users from walking away (ELO Touch Systems, 2003). Any presentation must at any time be immediately available on request of a commuter. In this case study, all presentations were activated to run in memory simultaneously, one behind the other. On request, the appropriate presentation could simply be brought forward on the screen, with no time delay due to loading. It was also observed that some of the commuters, especially teenagers, just played around with the system, pressing one button after the other in quick succession. Commuters can press the buttons on the screen much faster than they can click a mouse, thereby expecting the system to switch between the various presentations or query the database and display results in a matter of milliseconds. To cater for these scenarios, it was important to ensure ample resources for the computer system. In this case study a PIII 866MHz CPU with 256 MB memory proved to be stable and efficient enough. In a typical information kiosk environment, it is always possible that a specific user may leave the system halfway through a process or query. The next user might then be confused and even unable to get to the information he or she requires. It was, therefore, essential to add functionality for the system to reset itself and return to the main screen after a period of inactivity.
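The inactivity reset described above can be sketched as a small session object: if no touch arrives within the timeout, the kiosk returns to the main screen so an abandoned query does not confuse the next user. The timeout value, class and method names are illustrative assumptions, not taken from the IBL implementation:

```python
import time

class KioskSession:
    TIMEOUT = 60.0  # seconds of inactivity before reset; hypothetical value

    def __init__(self, clock=time.monotonic):
        self.clock = clock          # injectable clock eases testing
        self.screen = "main"
        self.last_touch = clock()

    def touch(self, screen):
        # Every valid touch records activity and switches the display.
        self.last_touch = self.clock()
        self.screen = screen

    def tick(self):
        # Called periodically; resets to the main screen on timeout.
        if self.clock() - self.last_touch >= self.TIMEOUT:
            self.screen = "main"

# Simulated walk-through with a fake clock (values in seconds):
t = [0.0]
session = KioskSession(clock=lambda: t[0])
session.touch("route_query")
t[0] = 30.0
session.tick()
screen_after_30s = session.screen   # still within the timeout
t[0] = 100.0
session.tick()                      # timed out: back to the main screen
```

Injecting the clock as a parameter is what makes the timeout behavior verifiable without actually waiting a minute.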
An online questionnaire allows commuters to select their answers from a set of possibilities (Figure 10). Norman, Friedman, Norman, and Stevenson (2001) provide guidelines regarding the layout of such online questionnaires. Results obtained from this survey revealed that 71.1% of first-time users and 83.8% of follow-up users found the system easy or moderately easy to use. Also, 68.6% of first-time users and 70.6% of follow-up users indicated that they experienced a positive emotion, e.g., satisfaction or enjoyment, while using the system. A rather low percentage of first-time users, 40.2%, indicated that they had found what they had been looking for. This was probably because the commuters expected the system to allow them to buy tickets from the machine rather than having to queue for them, something that had never been the intention of the system. The fact that 71.8% of follow-up users indicated that they had found what they had been looking for confirms that commuters come to understand the intention of the system after some exposure. The fact that 70% of first-time users and 88% of follow-up users found the information useful was regarded as an indication of commuters' rating of the value of the system.

In a usability study, selected users were asked to search for information on six specific items that were representative of the available information. On average, first-time users found 90.1% of the requested items correctly, while follow-up users found 97% of the items correctly. The fact that follow-up users could obtain these items in about 4½ minutes can qualitatively be considered acceptable to good, in comparison with the two minutes that an expert user took to complete the tasks.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
When a new IT installation is on the cards, one can always ask: is this a "nice-to-have" or a critical necessity for business survival? How often does one see potential for an IT solution that promises to work better than the existing manual system, with a concomitant saving in time, money and manpower and a positive impact on annual turnover, and yet, due to the skepticism of management and the reluctance of employees to use it, the system is either not implemented or not utilized to the full? This was, and still is, one of the biggest stumbling blocks that needed to be overcome with this kiosk system. Despite the fact that management and the public relations officer, George, are convinced that the system adds value to the company, despite the largely favorable usability results, and despite positive feedback from individuals such as Dora Ramakoatse, there are still those who are skeptical about the system. This was especially evident among employees at the ticket office who were assigned the task of ensuring that the system runs smoothly on a day-to-day basis. They were trained to restart the system in cases of power failures or system crashes. These employees are, however, not enthusiastic about the system. It was found that, if one of the researchers, George or another member of management does not visit the installation at least once per week to demonstrate interest in and the importance of the system, the employees do not consider the effort to keep the system up and running worthwhile. They view the system as an extra (unnecessary) burden that is nice to have but not essential. Because of this lack of interest on the part of IBL employees, it was extremely difficult to find an individual willing to take responsibility for the system on a continuous basis. This meant that the researchers could not easily withdraw from the system once their job was done.
Because ticket salespeople work with huge amounts of cash, and because of the high crime rate in the area where the ticket office complex is located, intensive security measures are in place. Among other things, the whole environment, inside and out, is under constant surveillance by closed-circuit television (CCTV). Because of the risk of insiders working together with criminals to deactivate the CCTV prior to a robbery attempt, the computer for the CCTV system is behind a security door that not even the salespeople can enter. For reasons of limited space and power outlets, as well as the existence of a heat-extracting fan, the computer for the information kiosk is also behind this security door. This isolation of the information kiosk computer causes a certain amount of inconvenience. It happens quite often that a power failure or, especially in the early days, a malfunction causes the system to shut down. The salespeople cannot solve the problem or restart the computer, and the researcher has to be called out time and again. The public environment in which the system functions implies a high risk of vandalism and even theft of equipment. Currently the touchscreen is installed in a metal cabinet in such a way that only the glass part is accessible. Furthermore, the touchscreen is also covered by CCTV. The risk still exists, however, that a person with a vandalistic inclination could hit the screen with a hammer during quiet hours and then
disappear. Due to the high crime rate in South Africa, the police would probably have neither the time nor the manpower to follow up the TV recording.

Much research has previously been done on user interfaces for touchscreen kiosks (Borchers, Deussen, & Knörzer, 1995; ELO Touch Systems, 2002; ELO Touch Systems, 2003; MicroTouch Systems, Inc., 2000; Sears & Shneiderman, 1991; Shneiderman, 1991). This research was, however, always focused on users with average computer literacy skills and exposure to technology according to Western standards. For an interface developed for users from the IBL passenger community, special considerations must be taken into account. As previously indicated, these people are mostly from an African community with very limited educational background. One of the initial research aims for this project was to investigate the ways in which a computer interface must be adapted to accommodate users of this profile. To date, no clear-cut set of guidelines could be formulated, and it remains an open question how much users of this community gain from a Westernized interface in a non-native tongue that is not always well understood.

Commuters who travel the same route every day do not have to consult the system time and again to determine departure times or ticket prices. This means that they have little motivation to consult the system on a regular basis, and thereby probably miss important notices that IBL communicates from time to time. It is, therefore, essential to determine effective ways to motivate commuters to use the system regularly. With reference to the techniques of liaison and obtaining feedback that were in place prior to the implementation of the information kiosk, the ultimate question is whether or not the information kiosk adds value.
It is accepted that the existing techniques should not be replaced by the information kiosk and that the kiosk should act as a supplementary source of information, but would the old ways not suffice without the information kiosk? In other words, does the information kiosk system really fulfill the needs of Dora Ramakoatse and others like her?
REFERENCES

3M Touch Systems. (2002). Online help of the TouchWare software driver for touchscreen monitors. Version 5.63 SR3.
Blake, E., Steventon, L., Edge, J., & Foster, A. (2001). A field computer with animal trackers. Presented at the Second South African Conference on Human-Computer Interaction (CHI-SA 2001), Pretoria, South Africa. Online: www.chisa.org.za/chi-sa2001/chisa2001new.htm
Borchers, J., Deussen, O., & Knörzer, C. (1995). Getting it across: Layout issues for kiosk systems. SIGCHI Bulletin, 27(4), 68-74.
Breytenbach, H.J. (2003). Interstate Bus Lines passenger survey: Executive summary. Passenger survey conducted by independent consultant.
ELO Touch Systems. (2002). Keys to a successful kiosk application. Accessed January 5, 2003: http://www.elotouch.com
ELO Touch Systems. (2003). Touchscreen application tips. Accessed January 5, 2003: http://www.elotouch.com/support/10tips.asp
Holfelder, W., & Hehmann, D. (1994). A networked multimedia retrieval management system for distributed kiosk applications. Proceedings of the 1994 IEEE International Conference on Multimedia Computing and Systems.
MicroTouch Systems, Inc. (2000). Kiosk planning & design guide. Document number 19-251, Version 2.0.
Norman, K.L., Friedman, Z., Norman, K., & Stevenson, R. (2001). Navigational issues in the design of online self-administered questionnaires. Behaviour & Information Technology, 20(1), 37-45.
Plaisant, C., & Sears, A. (1992). Touchscreen interfaces for alphanumeric data entry. Proceedings of the Human Factors Society 36th Annual Meeting, Atlanta, Georgia (vol. 1, pp. 293-297).
Potter, R.L., Weldon, L.J., & Shneiderman, B. (1988). Improving the accuracy of touchscreens: An experimental evaluation of three strategies. Proceedings of the Conference on Human Factors in Computing Systems, Washington DC (pp. 27-32).
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-Computer Interaction. Addison-Wesley.
Sears, A. (1991). Improving touchscreen keyboards: Design issues and a comparison with other devices. Interacting with Computers, 3(3), 253-269.
Sears, A., Kochavy, Y., & Shneiderman, B. (1990). Touchscreen field specification for public access database queries: Let your fingers do the walking. Proceedings of the ACM Computer Science Conference '90 (pp. 1-7).
Sears, A., & Shneiderman, B. (1991). High precision touchscreens: Design strategies and comparisons with a mouse. International Journal of Man-Machine Studies, 34(4), 593-613.
Sears, A., Revis, D., Swatski, J., Crittendon, R., & Shneiderman, B. (1993). Investigating touchscreen typing: The effect of keyboard size on typing speed. Behaviour & Information Technology, 12(1), 17-22.
Shneiderman, B. (1991). Touchscreens now offer compelling uses. IEEE Software, 8(2), 93-94, 107.
Walton, M., & Vukovic, W. (2003). Cultures, literacy, and the Web: Dimensions of information 'scent'. Interactions, 2, 64-71.
inactive in terms of URF management responsibility. Instead, URF appointed a CEO to lead the organization. An orchestration of changes has thereby been enacted — namely changes in leadership, financial structure, organizational structure, business process management, and IT. Due to the rapid expansion of the organization, the contracts and grants URF procures with federal and private entities demand an even higher level of research, ideas, and competence to compete with other major scientific and private laboratories. In fact, 94% to 96% of the research dollars generated by URF are contract dollars, unlike in the past, when grant-based dollars were more significant. The difference between contracts and grants is important. Contracts must be competed for against private industry, and a good or service must be delivered. Grant procurement is competitive only on the front end: once a grant is procured there are no deliverables, and thus no competition exists on the back end.
SETTING THE STAGE
The slowdown of the global economy, shrinkage of federal funding available for basic and applied research programs, and increasingly more stringent regulations in federal defense research contracts during the past several years have greatly impacted the ability of URF to compete. Such environmental change has seriously challenged the viability of the URF research and management practices that had been exercised successfully in the past. Taking on that challenge, URF top management launched a large-scale organizational transformation designed to revitalize URF and enable it to continue to grow. It was hoped that URF would be able to reposition itself as a cutting-edge player in the increasingly competitive environment by transforming into a true business-oriented corporation. One overriding strategic goal of the transformation was to ensure better management of intellectual properties (discoveries/technologies) to further secure and expand its business base and continuously increase its capability to compete with other scientific and industrial laboratories. To facilitate this goal, a novel IT (BATON technology) was introduced into the organization with the purpose of streamlining/automating core management processes related to intellectual property and discovery protection. An outside consulting team was secured to lead the IT implementation. Utilization of BATON literally enforces change in the manner in which managers use IT to create (and utilize) contract management processes, identify/secure new ideas and discoveries, and monitor contract/project progress. Consequently, effective utilization of BATON requires a significant change in current practices of the IT department. That is, the IT department must adapt to administer IT in the way that effectively supports newly created management processes.
Four key groups were involved in the initial planning and implementation of BATON — top management (essentially the CEO), external IT consultants, business managers, and in-house IT specialists. Each group (excluding the CEO) was assigned roles and responsibilities within each phase of the BATON implementation process. As the case unfolds, it will be made clear how each group reacted to the changes accompanying the novel IT implementation.
A Case of an IT-Enabled Organizational Change Intervention
Managing change is frequently cited in the organizational development (OD) literature. Traditionally, three phased elements — envisioning change, implementing change, and managing reactions to change — have been reported in the OD literature as enabling change (Jick, 1993). We use these phased elements as a theoretical lens to frame the IT intervention described in this case. As such, we explore the change management issues occurring in each of the three phases based on the perceptions of change actors. To provide a theoretical foundation that will help readers better understand our case (within an integrated context of OD and MIS), we first introduce the theoretical phases we followed. •
•	Vision Issues: The foundation of any successful change process rests on a clear vision of how change can benefit the future of the organization and how it can be directed and shaped to reach anticipated outcomes (Tichy & Devanna, 1986). However, as many researchers suggest, not enough effort is typically devoted to communicating that vision and educating people to share in it, given that the vision is intended to stimulate and guide organizational change (Jick, 1993). Without a systematic structure to communicate and translate vision into reality (Graves & Rosenblum, 1987), visionaries will likely encounter skepticism and other negative reactions to change. Moreover, any seeming inconsistency between vision articulation and the actions of the visionaries leading the change effort merely increases confusion and cynicism among organization members (Richards & Engle, 1986).
•	Implementing Change: Issues involved in implementing change often encompass three elements — a supporting structure, change consistency, and the power to bridge the gap between the change strategist's vision and organizational reality (Oden, 1999). To enable change, there must be a supporting structure in place that facilitates the creation of an environment supporting useful and innovative action toward realizing the vision (Richards & Engle, 1986). At the same time, consistency in the change techniques employed during the process is crucial to sustaining the enthusiasm and morale of the change actors, because perception becomes reality: if what one constituency perceives as a strength is greeted with ambiguity by another, overall perceptions of the change intervention will be negatively influenced (Jick, 1993). Finally, change implementers often bemoan their insufficient power to overcome the resistance they encounter in transforming the organization into the new paradigm called for by the change vision (Beckhard & Harris, 1987). Without attention to such issues during the implementation process, the course of change can be derailed (as demonstrated by the consultants' experience described later).
•	Managing Reactions: Managing reactions to change is probably the most challenging and unpredictable element of a change process. Receptivity, resistance, commitment, cynicism, stress, and related personal reactions must be considered within the framework of planning and implementing an organizational change, as researchers have come to realize that organizations, as open systems, depend on human direction to succeed (Armenakis & Bedeian, 1999). Cases of unsuccessful change programs reported in many studies show that making changes without considering the psychological effect on others in the organization, particularly those who were not part of the decision to change, is a major concern (Jick, 1993). OD researchers further point out that if reactions to change are not anticipated and managed, the change process will be painful and perhaps unsuccessful (Beer et al., 1990).
In the MIS literature, managing reactions to change is also cited as a challenging and unpredictable element. Traditionally, IT managers have taken a technological imperative perspective (Markus & Benjamin, 1996). As such, "technology is seen as a primary and relatively autonomous driver of organizational change, so that the adoption of new technology creates predictable changes in organizations' structure, work routines, information flows, and performance" (Orlikowski, 1996, p. 64). Change strategists trapped within this perspective largely neglect the social issues involved in technology-based organizational change. The techno-centric lens offered by this perspective keeps their focus on technological issues and away from human issues such as the affective impact of technology on change recipients, behavioral reactions to change, and attitudinal shifts that may occur during a change process (Berney, 2003). However, as studies on IT-enabled change continue to reveal the importance of the human element in this process, MIS researchers have come to realize that the technological imperative model is not sufficient to effectuate change (Orlikowski, 1996).
The most current paradigm of IT-based organizational change intervention, in which organizations employ technology as a mechanism to enact and institutionalize intended change, requires change strategists to heed human issues and respond effectively to the various reactions triggered by the intervention (Jick, 1990). Furthermore, IT specialists are frequently referred to as change agents because they identify psychologically with the technology they create or support (Markus & Benjamin, 1996). Ironically, IT specialists, though stereotyped as being in love with technical change and seemingly the group that benefits most from IT-enabled change, often resist implementing such desirable change (Orlikowski, 1994). This paradox has inspired a new stream of research that monitors and analyzes the behavioral reactions of IT specialists and explores the forces and barriers precipitating resistance to change. Among this research is Markus and Benjamin's (1996) study classifying the organizational beliefs and behaviors of stereotyped groups of IT specialists. Their interpretation suggests that many IT specialists fear that new technologies in the hands of users may threaten their professional credibility and self-esteem. As they explain, "new technology makes these IT specialists vulnerable: unless they know everything about it, they will look technically incompetent when users inevitably experience problems. Further, even when a new technology's problems are known and tractable, the shakedown period increases their workload and working hours" (Markus & Benjamin, 1996, p. 391). Framed within the foregoing theories, the remainder of this case description presents our research methodology and our story. The story includes the CEO's vision that initiated the IT-enabled change intervention, the external IT consultants' implementation issues, and the resistance from in-house IT specialists toward change. We organize
our story chronologically to explain what happened during the intervention process and discuss the change management issues critical to the IT-enabled change intervention.
This case study explores an organizational phenomenon: a change intervention enabled by IT and the reactions of various constituencies to the changes during the intervention process. Accordingly, we adopt an in-depth qualitative case study approach to explore the context within which the phenomenon occurred. The procedures for data collection and analysis were interwoven in an iterative interview-analyze-refine-interview cycle.
•	Data Collection: The data were collected mainly through unstructured and semistructured interviews. Interview participants spanned different levels and functions of the organization, including the CEO, deputy directors, external IT consultants, business managers, in-house IT specialists, and research engineers. A contact summary sheet was designed and used for every interview session to keep track of respondent information. Each interview lasted approximately 60 to 90 minutes and was recorded and carefully transcribed. Necessary clarifications with interview participants were made to ensure the reliability and validity of the data collected. We also supplemented interview data with on-site observations as well as various written documents (e.g., annual reports, mission statements, and meeting notes). Our data collection goal was to capture actors' perceptions of the intervention and the associated consequences of their actions as the change unfolded. This process proved efficient because it emphasized the problems and issues that emerged during different phases of the change process.
•	Data Analysis: To ensure rigorous data analysis, the case study approach advocated by Creswell (1998) and Yin (1994) was utilized. Data analysis was integrated with data collection throughout the entire research process. Analysis centered on classifying data into coherent constructs (by identifying both surface and latent change issues), relating findings to the existing OD and MIS literature, and generating and refining interview questions based on the data obtained through prior interviews. This iterative cycle of data collection and analysis allowed us to organize new insights, accommodate emergent constructs, refine interview questions, and adjust the research focus accordingly.
We began data collection and analysis for this case study in the summer of 2003. The iterative process of interviewing, transcribing, resolving data discrepancies, and synthesizing the resulting information lasted over seven months. Such a longitudinal approach is critical to investigating a change intervention because it helps researchers capture multiple perceptions and perspectives of change as it unfolds and enables them to develop a cogent lens for better understanding organizations and people (Garvin, 1998).
do what we are supposed to do," "as long as we get things done, it is all right, I think." Such were the perceptions of the senior engineers we interviewed. Even the consultants, who worked closely with the CEO on the intervention project and understood the vision well enough to implement it, were frustrated: "I thought this was a pet project of his, but it didn't really turn out to be the case because I didn't see [the CEO] put it as his top priority." Without consistent support from top management, the consultants felt powerless and concerned: "in spite of the fact that we are leading this project, there is no structure, and we have no power to push what we know needs to change."
For example, a business manager can create a set of memos describing a process as he or she sees it; these memos are then negotiated until all responsible parties reach consensus. The memos are then recorded into BATON with help from IT specialists. Each memo contains process steps that describe workflow. Each step is associated with a process key, and each process key has a unique operational definition (see Appendix B for an example). All process keys are stored in BATON as libraries of process logic trees that valid users can navigate. Process keys are essentially sophisticated indexes that point to different locations in an overall process stored in the BATON system. The logic of a process is defined with a hierarchical tree structure. This tree (as conceptually realized by a manager) is finally translated by system designers into BATON. An integrated system is thus created because the tree represents a process that, once recorded into the system, must be followed by all users of the system.
In mid-2002, the CEO decided to hire two IT consultants to facilitate the realization of his vision. He charged the consultants with leading the implementation of the BATON technology at URF. BATON was chosen for its innovative nature and process capability; these features convinced the CEO that it offered the potential to alleviate many of the difficulties inherent in existing process management at URF, particularly the intellectual property protection process. Charged with responsibility for BATON, the consultants began the implementation process as well as the other required changes. Their initial plan was to present the project to business managers and get them excited about how BATON could help their business, in the hope that enthusiastic managers would rally further support within the organization.
The managers quickly came on board because BATON obviously offered them a way to better manage their processes and obtain data when and where needed. The next target group was the in-house IT specialists, because the consultants needed access to the systems and data these people controlled. In addition, the IT specialists would have to be the long-term custodians of BATON after the consultants left. With assistance from the in-house IT specialists, the consultants expected to complete the implementation within months.
"Using trees, it enables managers and research scientists to conceptually design logical structures that automatically generate the necessary Java code, coordinate with relational databases, and work with directory services with the goal of building a complete application." "No IT background is needed for managers and research scientists. With only limited assistance from IT specialists, managers and research scientists will be able to lay out the basic structure of an application within a few days, and within a week they can incorporate a complete set of complex logical elements." "Such elements will then constitute the architecture of a new application in BATON; within weeks, a resulting application can be built and tested."
During the presentation there was some skepticism, but once BATON was demonstrated the business managers were generally encouraged by what the technology could do and how it could do it. The consultants also had a few managers actually design a simple process structure after the presentation. This exercise greatly reduced any remaining skepticism. Within a few days of preliminary training, managers were prepared to accept the technology and facilitate their part in implementing it. The consultants felt very positive at this point and believed they had won an important first victory toward disseminating a positive attitude toward the intervention. As a result, the consultants anticipated a smooth transition to the next step of the process — gaining the cooperation of the in-house IT specialists to set up a pilot infrastructure for the new system. This anticipation seemed reasonable because, after all, the in-house IT specialists would be managing the new technology and should readily appreciate its advantages.
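The process-key and logic-tree scheme described above can be pictured as a simple data structure. The following is a minimal illustrative sketch only, not BATON's actual design (which is not described in detail in the case); all class names, keys, and step definitions here are hypothetical.

```python
# Minimal sketch of a "process logic tree": each step carries a unique
# process key, and the whole tree can be indexed by key -- mirroring the
# idea that process keys act as sophisticated indexes into an overall
# process. All names are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class ProcessStep:
    key: str                  # unique process key, e.g. "IP-01.2a"
    definition: str           # the step's operational definition
    substeps: list["ProcessStep"] = field(default_factory=list)


def build_index(root: ProcessStep) -> dict[str, ProcessStep]:
    """Flatten the tree into a key -> step lookup so that a valid user
    can jump directly to any location in the overall process."""
    index = {root.key: root}
    for child in root.substeps:
        index.update(build_index(child))
    return index


# A manager's memo, translated by system designers into a tree:
disclosure = ProcessStep("IP-01", "Intellectual property disclosure", [
    ProcessStep("IP-01.1", "Researcher files invention memo"),
    ProcessStep("IP-01.2", "Manager reviews and negotiates memo", [
        ProcessStep("IP-01.2a", "All responsible parties sign off"),
    ]),
])

index = build_index(disclosure)
print(index["IP-01.2a"].definition)   # -> All responsible parties sign off
```

Because every user of the system navigates the same recorded tree, the process is centralized and shared by construction, which is the integration property the consultants emphasized in their presentation.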
Current Infrastructure & Practices in the IT Department
identify the root cause rather than a symptom. For example, if security needs to be heightened, IT builds a new firewall (or firewalls) to deny malicious access. The problem, again, is that there is no overall security strategy, just band-aids; at least, we never found one explicitly verbalized or documented. Servers are everywhere, with no apparent strategy for coordinating IT resources. "There is a method to the madness, I suppose," said one of the consultants in an interview with us, "but it was not possible for us to determine their IT security, network, database or other management strategy because they are either not documented as such or they do not exist at all." Moreover, the culture of the IT department was such that "you do what you have to do to make it work." There were no standards for system development or data access. Each IT specialist developed and controlled a piece of a stand-alone application as his or her own property. Decisions about which additional applications needed to be built, and how, were usually made separately by the individual owners. One of the consultants told us that "[the CEO's] goal with BATON is to reduce these ad hoc applications. Some of these applications may do the same thing, but are never shared because nothing is integrated." Also, the ability of business managers to obtain access to project data depended mostly on their relationships with individual owners. That is, personal connections with system administrators and developers, rather than a general data access policy, determined who got what data. Because the organizational culture of the past few decades was family-oriented, top management's administration of the IT department was relaxed. As a result, IT specialists had great power over how they operated their supporting functions.
Furthermore, IT specialists, guarded by techno-babble (technical jargon), could easily shield themselves from any attempt to question their practices or motives, thereby defending their turf. According to one consultant, "since most managers do not know IT in any detail, it is not difficult to SCARE people away from potentially poor practices!" The consultants perceived that technology intimidation was used as a defense mechanism because business managers are not normally IT literate and are thereby easily intimidated. One of the consultants reflected that "when the company was small, this [lack of macro management] was probably not a problem, but the tremendous growth of URF in the past five years has made it almost impossible to operate IT support in this way." In spite of the poor IT practices they had discovered, however, the consultants were still confident that implementation was possible: "By implementing this new IT project, we hope to change the existing IT infrastructure and make it into an integrated one. Also, we anticipate that there will be a good chance for us to bring in a new paradigm of integrated and coordinated business practices."
It seemed to the consultants that the in-house IT people simply did not care about the project, and that they were unwilling to carry out their assigned responsibilities to make the project a success. In fact, the assistance that the consultants expected from the IT department turned out to be resistance:
IT controlled all the databases and systems. As a result, we had to go through the DBA to get access to the database and, subsequently, to the data inside it. However, access was not easily forthcoming. It literally took a month for us just to get an account on the real system. Actually, we were never really sure that the account we were given was on the real system. We suspected that it was a dummy account with non-production data. Of course, this set us back months because we had to figure out what was going on.
It seemed that at almost every step the consultants took to move the project forward, the IT department put up obstructions of some kind. "We had the same problem with network security. To connect to a database server, we asked the network administrator to open a port for us. Again, it took weeks for us to really get one." When the consultants needed to prototype the new system somewhere, they were again frustrated: "we needed a machine to host our system, but we were turned away because our project was not included in their routine operation." In spite of the enormous effort expended by the consultants, the project was not making progress with IT. Unable to push IT forward, the consultants turned to the CEO for support, hoping he would help the effort. According to one of the consultants, however, the CEO was noncommittal in answering their request. "I think, although we were delegated by the CEO to lead the project, we didn't really have the power to push IT in any real direction.
Hence we could not make change which was crucial to implement the project.” One of the consultants also noted: “The CEO shared the vision but didn’t actively help us.” Although business managers had sponsored the project, they did not feel that it was their responsibility to push IT to change. Without a supporting structure to facilitate the change, the consultants felt alone and powerless to overcome resistance from IT. “We really wished that people from all levels had joined us and to create an environment that would pressure IT [for change]. To date, this hasn’t happened.” By early 2003, what really concerned the consultants was that they were losing sponsorship from business managers. It was obvious that the new system could not be built and tested for production without costing another year in implementation time. The consultants revealed to us that what they promised in terms of project timelines was not being met. Having perceived the problems in implementation in IT, business managers began pulling back, doubting that the technology would really work. As obstacles to the IT implementation continued to mount, the consultants started to realize that the project was facing serious challenges and that they were trapped between the vision and the reality. Powerless and helpless, the consultants noticed that their enthusiasm was fading.
project anyway. The programmer concluded, “… basically there is no lead on this project now.” Admittedly, the consultants, reflecting upon their experience with the IT department, commented that they had not been consistent in educating across all groups concerning the new technology and the potential benefits to the organization in the long term. Neither had they made as much effort as they should have to rally sponsorship from IT people and prepare them for the intervention. “I guess we did not spend as much time and energy with the IT department as we did preparing business managers for the new technology. We could have spent more time with IT prior to pushing for implementation.” The consultant went on to say that “… without laying the groundwork for a change at the IT level, we underestimated the difficulties in implementing the project, [and] were unable to make the intended change to a new paradigm within IT.” The reason given by the consultants was that the tool is really for business managers, not IT. However, “IT is central to the plan. We should have anticipated this. We didn’t mean to underestimate IT. We just thought that they would do as directed by the CEO.”
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
The major challenge to successful implementation of BATON is the mismatch between the legacy IT culture within URF and the paradigm shift inherent in the new technology. Adding to this challenge is the imperative of effectively managing the change process, particularly resistance to change. Unfortunately, URF management never recognized the urgency of systematically re-examining the change intervention. The next subsection provides additional analysis of the case to help readers understand the critical problems facing the organization and to develop a more informed plan of rectification.
Historically, the culture at URF was rooted in that of a small, family-owned business. With fewer contracts and grant projects, the level of management oversight of IT support for business processes was relatively low. Such loose control of the IT department, and the lack of an overall IT strategy from top management, made the IT department a self-indulgent (and relatively independent) entity that possessed undue power over processes. This translated into an inability to share data across independent systems and created practices based on personal connections rather than standard IT procedures and policies. As the organization expanded in size and number of contracts over the past decade, the existing IT culture no longer fit. Instead, the organization's expansion, as well as the changing business environment, demanded increased strategic planning of overall process management, efficient utilization of IT resources, and standardized IT operations. That is, top management believed that a high level of strategic control over IT would be necessary to match URF's continued expansion. To illustrate, the new technology (BATON) allows business managers to implement their own processes without direct interference from IT. Implementation of BATON
induces a radical departure from the existing culture within IT. That is, IT used to dictate how information was supplied to processes; with BATON, managers dictate how and when they need information to support their businesses. IT actually has less work to do, because they only have to translate the management-established process into the system infrastructure. However, this also means that IT will no longer be able to control processes to the extent it had in the past. Furthermore, a process management system built with BATON technology is, by default, centralized and shared, so it can be used across all levels of the organization to meet disparate needs. The artifactual boundaries set by individual IT owners are thereby broken. If BATON succeeds, "do me a favor" requests are replaced by standard IT procedures and policies. This new paradigm contrasts sharply with the existing nonstandardized culture and practices within IT and thereby demands drastic changes in policies, procedures, and attitudes. The mismatch between nonstandard IT practices and BATON requirements, however, was not fully recognized by top management during planning of the change intervention. Moreover, top management (and the consultants) underestimated the change management challenges of implementing the new technology in a provincial IT culture. Hence, top management faces a dilemma: to save the project from failure, they must reconcile the mismatch by developing a better appreciation of the crucial change management issues as they relate to the IT culture. The reader should therefore begin thinking about ways URF can reconcile this mismatch. In addition, any reconciliation should take into account the following change management issues, which are still hindering progress.
•	Communicating/sharing the vision of change: As the case revealed, top management expended insufficient energy communicating and promoting the vision to lower levels of the organization (i.e., programmers and other IT specialists). Further, although the CEO's vision was well established, it did not include specific objectives and plans to guide its realization. Informed only by an abstract vision, organization members had limited understanding of how the change initiative would really affect them. This was clearly demonstrated by the fact that IT specialists were unaware of the purpose of BATON when it was first implemented in their department. As a result, they did not buy into the project from the beginning, and as the change unfolded, their resistance escalated. The absence of a concrete and consistent articulation of the change vision, communicated to and shared by organizational members, created an early obstacle to a successful change intervention.
•	Managing the change process: Although the external consultants were hired to lead the implementation of BATON, organization members saw them as outsiders with no influence or power. Further, responsibilities for enacting change were not clearly assigned to those involved in the change (i.e., business managers, PIs, and in-house IT specialists). Without such clear responsibilities, the normal management structure was not sufficient to support the change effort, given that managers were already busy. As the case analysis indicates, the consultants largely lost political sponsorship from other actors (i.e., managers and PIs) because they were unable to overcome the resistance they encountered when attempting to bring IT into the change effort. This insufficient management of the change process contributed greatly to the problems encountered with BATON implementation within the IT department. Indeed, researchers have argued that "it is not the results management is managing, but the processes that achieve results" (Jick, 1993, p. 171).
•	Resistance to change: It seems apparent that the CEO took a technological imperative perspective in attempting to realize his vision of intellectual property protection and process management. That is, he and other top managers implicitly assumed that implementing BATON would automatically produce the expected changes in work routines, information flows, and performance. While such assumptions appear reasonable for business managers, for whom the benefits are apparent, they fail for the IT department, whose members find it harder to see how such change benefits them. The IT department was used to controlling business processes, data, and systems, and was seldom challenged by management to change those practices. As a result, it was difficult to convince them that BATON offered any real benefits, because management neglected the IT culture that had developed over the years. As the change process unfolded, IT was pressed to rethink almost every aspect of its culture, and a sense of being questioned about their current practices emerged among IT specialists. IT was therefore immediately defensive about BATON and resisted it in order to maintain a comfortable way of life. In contrast, business managers were not as deeply affected (in a perceived negative sense) as their IT counterparts, and were thus less resistant to the changes brought about by BATON.
The challenge facing top management is what can be done to more proactively defuse IT resistance.1
REFERENCES
Armenakis, A.A., & Bedeian, A.G. (1999). Organizational change: A review of theory and research in the 1990s. Journal of Management, 25(3), 293-315.
Beckhard, R., & Harris, R.T. (1987). Organizational Transitions (2nd ed.). MA: Addison-Wesley.
Beer, M., Eisenstat, R.A., & Spector, B. (1990). Why change programs don't produce change. Harvard Business Review, 68(6).
Berney, M. (2003). Transition Guide: How to Manage the Human Side of Major Change. Washington, DC: Federal Judicial Center.
Creswell, J.W. (1998). Qualitative Inquiry and Research Design: Choosing Among Five Traditions. London: Sage Publications.
Garvin, D.A. (1998). The processes of organization and management. Sloan Management Review, 39(4), 33-50.
Graves, P., & Rosenblum, J. (1987, December). Rolling out the vision. OD Practitioner.
Jick, T.D. (1990). The recipients of change. Harvard Business School Case N9-491039.
Jick, T.D. (1993). Managing Change: Cases and Concepts. Columbus, OH: The McGraw-Hill Companies.
Markus, M.L., & Benjamin, R.I. (1996). Change agentry — the next IS frontier. MIS Quarterly, 20(4).
Oden, H.W. (1999). Transforming the Organization: A Social-Technical Approach. Westport, CT: Greenwood Publishing Group.
Orlikowski, W.J. (1994). The contradictory structure of systems development methodologies: Deconstructing the IS-user relationship in information engineering. Information Systems Research, 5(4).
Orlikowski, W.J. (1996). Improvising organizational transformation over time: A situated change perspective. Information Systems Research, 7(1).
Richards, D., & Engle, S. (1986). After the vision: Suggestions to corporate visionaries and vision champions. In Transforming Leadership: From Vision to Results. Alexandria, VA: Miles River Press.
Tichy, N.M., & Devanna, M.A. (1986). The Transformational Leader. West Sussex, UK: John Wiley & Sons.
Yin, R.K. (1994). Case Study Research: Design and Methods. Thousand Oaks, CA: Sage Publications.
ENDNOTE
1	Names of the organization, its parent organization, units, and members have all been disguised.
Up in Smoke: Rebuilding After an IT Disaster Steven C. Ross, Western Washington University, USA Craig K. Tyran, Western Washington University, USA David J. Auer, Western Washington University, USA Jon M. Junell, Western Washington University, USA Terrell G. Williams, Western Washington University, USA
On July 3, 2002, fire destroyed a facility that served as both office and computer server room for a College of Business located in the United States. The fire also caused significant smoke damage to the office building where the computer facility was located. The monetary costs of the disaster were over $4 million. This case, written from the point of view of the chairperson of the College Technology Committee, discusses the issues the college faced as it resumed operations and planned the rebuilding of its information technology operations. The almost-total destruction of the college's server assets offered a unique opportunity to rethink the IT architecture for the college. The reader is challenged to learn from the experiences discussed in the case to develop an IT architecture for the college that will meet operational requirements and take into account the potential threats to the system.
Western University (WU) is a public, liberal arts university located on the west coast of the United States. The university’s student enrollment is approximately 12,000. WU focuses on undergraduate and master’s level programs and comprises seven colleges, plus a graduate school. WU receives much of its funding from the state government. The university has earned a strong reputation for educational quality, particularly among public universities. In its 2003 ranking of “America’s Best Colleges,” US News & World Report ranked Western University among the top 10 public master’s-granting universities in the United States. The university community takes pride in WU’s status. According to Dr. Mary Haskell, President of WU, “Continued ranking as one of the nation’s best public comprehensive universities is a tribute to our excellent faculty and staff. We are committed to maintaining and enhancing the academic excellence, personal attention to students and positive environment for teaching and learning that has repeatedly garnered Western this kind of recognition.”
College of Business Administration
The College of Business Administration (CBA) is one of the colleges at WU. CBA’s programs focus on junior and senior level classes leading to degrees in either Business Administration or Accounting. In addition, CBA has an MBA program. About 10% of the students at WU (1,200 FTE) are registered as majors in the CBA. Each year, CBA graduates roughly 600 persons with bachelor’s degrees and about 50 with MBA degrees. CBA has four academic departments — accounting, decision sciences, finance and marketing, and management — each of which has about 20 full-time or adjunct faculty members and an administrative assistant. Other academic and administrative units for the college are the college office, three research centers, and the CBA Office of Information Systems (OIS) — about 20 persons total.
Organizational Structure of Information Systems at Western University & CBA
The organizational structure of information technology (IT) support services at WU includes both centralized and decentralized units. Figure 1 shows a partial organizational chart for WU that depicts the different groups and personnel relevant to the case.
Figure 1: Partial Organization Chart for Western University. The chart depicts: Mary Haskell, President, WU; William Frost, Provost; Bob O’Neil, VP, Business & Financial Affairs; Ken Burrows, VP, Office of Information and Technology Services; Peter James, Dean, College of Business Administration; Bill Worthington, Director of Information Systems; Sam Moss, Chair, CBA Tech Committee; Don Clark, Manager of Information Technology.
The 23rd Street facility contains a secure, fire-protected server room equipped with emergency power. The OITS main offices and US are located in Thompson Hall, while ITS and AS are located at 23rd Street. The key server computers that are operated by OITS are summarized in Table 1.
Table 1: Server Computers at Western University and CBA

Server Name     Operating System   Category (see note)      Comments

WU OITS Academic Support servers, all at 23rd St. location (list does not include Administrative Support servers):
Kermit          Netware 5.1        Production               Faculty accounts
Bert & Ernie    Netware 5.1        Production               Student accounts (two machines)
Oscar           Win NT 4.0         Production               Course support
Grover          UNIX               Production               Web pages

CBA servers at the time of the fire; all but Iowa were located in MH 310 and destroyed in the fire:
Wisconsin       Netware 5.1        Production               Faculty and staff file and print services
Missouri        Netware 5.1        Production               Student file and print services
California      Linux              Production               Restricted file services
Jersey          Win NT 4.0         Production               Web server
Maryland        Win 2000           Production               CBA web site and SQL DBMS server
Alabama         Win 2000           Production               Tape backup – Windows servers
Massachusetts   Netware 5.1        Production               Tape backup – Netware servers
Indiana         Linux              Production               Help desk
Dakota          Win 2000           Academic                 Web server
Washington      Win 2000           Academic                 SQL DBMS server
Carolina        Netware 5.1        Academic                 File and print services
Virginia        Win 2000           Academic                 Web server, SQL DBMS server
Colorado        Linux              Academic                 E-mail services
Iowa            Win 2000           Research & Development   Web server, SQL DBMS server
CBA’s OIS was directed by Bill Worthington, who had been in the position for over five years. Worthington had extensive operating experience with a number of technologies and operating system environments. In early 2001, CBA hired Don Clark to serve as the OIS group’s second full-time employee. Clark’s title was Manager of Information Technology, and his job was to manage operations for the OIS group. Clark knew the people and information systems of CBA very well, since he had worked under Worthington for two years before graduating from CBA with an MIS degree.
CBA’s Technology Committee
CBA has a Technology Committee that provides advice to the dean and OIS concerning a variety of matters involving the planning, management, and application of technology in CBA. At the time of the case, the committee was chaired by an MIS professor named Sam Moss. Other members of the committee included other MIS faculty and representatives from each of the CBA departments — usually the most technology-literate persons in those departments.
SETTING THE STAGE
On the evening of July 3, 2002, a fire destroyed many components of the information systems infrastructure for CBA, including virtually all of the server machines for CBA’s computer network. To help provide a context for understanding the issues associated with IT disasters, this section provides an overview of the topic of disaster recovery, followed by a description of the fire disaster incident at CBA and a summary of the key people and places involved with the disaster and subsequent recovery process.
Information Technology Disasters & Disaster Recovery
Table 2: Most Frequent Types of IT Disasters, out of 429 disaster events (Adapted from Lewis et al., 2003)

Disaster Category   Example                               Days of Disruption (minimum – maximum)
Natural Event       Earthquake, severe weather            0 – 85
IT Failure          Hardware, software                    1 – 61
Power Outage        Loss of power                         1 – 20
Disruptive Act      Intentional human acts (e.g., bomb)   1 – 97
Water Leakage       Pipe leaks                            0 – 17
Fire                Electrical or natural fires           1 – 124
IT Move/Upgrade     Data center moves, CPU upgrades       1 – 204
exposed many businesses to vulnerabilities in their ability to handle a disaster (Mearian, 2003). Given the importance of IT operations to contemporary organizations, it is critical for organizations to be prepared. Unfortunately, industry surveys indicate that many organizations are not as prepared as they should be (Hoffman, 2003). In particular, small and medium-sized organizations tend to be less prepared, often due to limited IT budgets (Verton, 2003). Disaster planning can require significant time and effort since it involves a number of steps. As described by Erbschloe (2003), key steps in disaster planning include the following: organizing a disaster recovery planning team; assessing the key threats to the organization; developing disaster-related policies and procedures; providing education and training to employees; and ongoing planning management.
The Night of July 3, 2002
The fire disaster event at CBA occurred on the night of Wednesday, July 3, 2002. During this evening, the WU campus was quiet and deserted. The following day was Independence Day, which is a national holiday in the United States. The July 4th holiday is traditionally a time when many people go on vacation, so a large number of WU faculty and staff who did not have teaching commitments had already left town. The library and computer labs on campus were closed at 5 p.m. in anticipation of the holiday, and custodial staff took the evening off. The quiet mood at the WU campus on the evening of July 3rd changed considerably at 10:15 p.m. when a campus security officer spotted large amounts of smoke emerging
from Mitchell Hall. The officer immediately reported the incident to the fire department. Within minutes, Mitchell Hall was surrounded by fire trucks. The fire crew discovered that a fire had started on the third floor of Mitchell Hall. Fortunately, the fire fighters were able to contain the fire to an area surrounding its place of origin. Considering that Mitchell Hall did not have an automated smoke alarm system, the structural damage to the building could have been much worse. As noted by Conrad Minsky, a spokesman for the local fire department, “If it hadn’t been for [the security officer’s report of the smoke], the whole building probably would have burned down.” Unfortunately, the room where the fire started was Mitchell Hall 310 (MH 310), the central technical office and server room for CBA — where virtually all server hardware, software, and data for CBA were located. The dean of CBA, Peter James, and the Director of CBA’s OIS, Bill Worthington, were called to the scene of the fire late on the night of July 3rd. Their hearts sank when they learned that the fire had started in MH 310 and that it had completely devastated the contents of that room. Due to the fire department’s protocols involving forensic procedures at a disaster site, neither James nor Worthington was initially allowed into the area near MH 310. However, based on what they could see — numerous fire fighters in full gear coming in and out of the Mitchell Hall building — they could imagine a grim situation inside the building. There wasn’t much that James or Worthington could do that evening, other than wonder if anything could be saved from the room and what it would take to recover. James found a computer on campus from which he could send an e-mail message to the CBA staff and faculty, most of whom were blissfully unaware of the disruption that was about to happen in their lives. The e-mail sent by Dean James is shown in Figure 2.
Figure 2: E-Mail Message from Dean James From: Peter James Sent: Thursday, July 04, 2002 1:03 AM To: Faculty and Staff of CBA Cc: Mary Haskell (President); William Frost (Provost) Subject: FIRE IN MITCHELL HALL Importance: High You may have heard that there was a fire in Mitchell Hall that started sometime prior to 10 PM Wednesday evening. It is now 12:55 Wednesday evening, and Bill Worthington and I have been there to review the situation. The fire appears to have started in MH 310, which is the server/ technology room. The room and all of the contents have been destroyed according to the fire personnel, and we were prohibited from entering that part of the building pending an investigation by the fire inspector. There is substantial smoke damage to all of the third floor, and to a considerably lesser extent the fourth floor as well. Vice President O’Neil, who was also present along with his crew and Western's Police Chief, indicated that clean-up will start at the latest Friday morning. There was little to no damage on the lower floors, although considerable water had been pumped into MH 310 as well as the seminar rooms, and possible water damage had not yet become evident. I am requesting that you do not come to the office on Friday unless absolutely necessary. Third floor offices will be in the process of being cleaned, and some of them may not be habitable because of the smoke damage. An assessment will be completed over the next few days. We do not yet know what we will do regarding the loss of the servers and associated data and software. Peter
After the fire fighters managed to put out the fire and clear the area of smoke, the fire department’s forensic team closely examined the burn site. The damage to MH 310 and its contents was complete. According to fire department spokesman Conrad Minsky, “It was cooked beyond belief. It was so hot in there that a lot of the [forensic] evidence burned … All we saw was just huge clumps of melted computers.” The extent of the damage is illustrated in Figure 3, which shows a picture of a MH 310 server rack shortly after the fire was extinguished. Based on the evidence available at the scene, the forensic team concluded that an electrical fire in one of CBA’s oldest server computers had started the blaze. Once the fire started in that server, it spread to the other servers in a slow chain reaction. Since the servers were not in contained fire-proof server racks, there was nothing to stop the initial server fire from spreading. Ultimately, the entire server room and all of its contents (including technical equipment, furniture, back-up tapes, and papers) were destroyed.
This section discusses the key activities and issues that confronted the members of the CBA IT staff and administration following the fire. As is the case with many disaster events, it took some time for the people involved to get a complete understanding of the extent of the disaster and its implications. Once the situation was assessed, a number of issues needed to be addressed to deal with the disaster. These issues included data recovery, the search for temporary office space and computing resources, rebuilding Mitchell Hall, and disaster funding.

Figure 3: Computer Server Rack in Mitchell Hall 310 Following the Fire
Sam Moss, MIS Professor and Chair of the CBA Technology Committee, awoke on the morning of July 4th looking forward to a day of relaxation — a bit of work around the house capped off by a burger and a brew while he and his friends watched the civic fireworks show that evening. Little did he realize that the real fireworks had happened the evening before. Moss always checks his e-mail, even on a holiday. He was, of course, rather disturbed to see the message from Dean James about the fire in Mitchell Hall. Moss’s first reaction was to try to visit a Web site that he maintained on his experimental server, Iowa, which was located in his office. Much to his relief, the Web site responded and appeared to be working as usual. Moss also tried to access the CBA Web site, which he had developed with another colleague (Chanute Olsen), as well as files maintained on the CBA faculty data server. Neither of these last two attempts was successful. Upon first reading, the Dean’s message did not convey the full impact of the loss of systems to Moss, who hoped that the servers might be recoverable. Moss knew that the CBA office maintained tape backups of the directories containing the Web site, faculty, and student data. While he expected a few weeks of disruption, he anticipated that CBA’s IT operations would be in pretty good shape shortly. Moss was very familiar with the techniques for remote access to Iowa and other college and university systems (see Microsoft Corporation, 2004a, and Novell Corporation, 2004, for examples of remote access software). Concerned that access to his office, where Iowa was located, might be limited for a few days, Moss immediately copied the backup files of his database projects and those of his students to his home machine. He also copied .ASP (active server page) files and any other documents that would be difficult to replace. Moss contacted Dean James later in the day via e-mail. 
Moss made his remarks brief: “Iowa appears to be OK, we can use it as needed as a temporary server.” … and … “Would you like me to convene the College Technology Committee to provide advice for the rebuilding of our technology support?” James’s response was that he appreciated the offer of Iowa, and yes, he wanted advice on “How should we rebuild?” The following sections summarize what was learned and what was done by key players during the month after the fire. These items are not in chronological order; they are a summary of what Moss learned as he prepared to address the issue of rebuilding the college’s IT architecture.
Determining the Extent of Data Loss
The first question, of course, concerned the state of the data back-up tapes of the CBA servers. Unfortunately, the situation regarding the back-ups was discouraging. While nightly data back-ups had been made, the tapes were stored in MH 310. Copies were taken off-site every two weeks. At the off-site location, the most recent copies of data and documents from Wisconsin and Missouri were dated June 16 — the last day of Spring Quarter. All of the more recent backup tapes were located on a shelf near the server machines and had been destroyed in the fire. On Maryland, the static pages and scripts to create dynamic pages (e.g., a list of faculty drawn from the database) of the CBA
Web site had been backed up, but the SQL Server database (which included data as well as stored procedures and functions) on that machine had not been backed up. According to Worthington, the college was waiting for special software needed to back up SQL Server databases. In addition to the back-up tapes, there were several other sources of backed-up data; however, these sources were rather fragmented. For example, many professors worked on the files for their university Web sites from home and used FTP (file transfer protocol) to copy the updated files to the Web server. Their home copies were, in effect, back-ups. Moss had made some changes to the CBA Web site pages the evening of July 2nd and had copies of those pages on his home system. The Iowa server in Moss’s office had been used as the test bed for the database portion of the Web site; therefore it contained the structure of the database, but neither the most recent data nor the stored procedures and functions. Although no desktop machines, other than those in MH 310, were burned in the fire, they were not readily available immediately after the event. Smoke and soot permeated the building. There was a concern that starting these machines, which were turned off at the time of the fire, might lead to electrical short circuits and potentially toxic odors and fumes. The decision was made to clean these machines before restarting them.
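The exposure created by this back-up rotation can be quantified with a simple recovery-point calculation. The sketch below is an illustration, not CBA’s actual procedure: if every on-site tape is destroyed along with the servers, the worst-case data loss is the time elapsed since the last off-site copy.

```python
from datetime import date, timedelta

def worst_case_data_loss(disaster_day: date, last_offsite_copy: date) -> timedelta:
    """Worst-case loss when all on-site media are destroyed:
    everything written since the last off-site copy is gone."""
    return disaster_day - last_offsite_copy

# Dates from the case: the newest off-site tapes were dated June 16,
# and the fire occurred on the night of July 3.
loss = worst_case_data_loss(date(2002, 7, 3), date(2002, 6, 16))
print(loss.days)  # 17 days of work at risk
```

With off-site copies made only every two weeks, the worst case approaches the full two-week interval; shortening the off-site cadence, or storing the nightly tapes away from the server room, shrinks this window directly.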
Recovery & Repair of Smoke-Damaged Systems
Within 48 hours of the fire, the university contracted with BELFOR International to help with clean-up. BELFOR specializes in disaster recovery and restoration. Examples of high-profile disasters that BELFOR responded to in 2003 included the major wildfires in California and hurricane Isabel in the eastern United States (BELFOR, 2004). With extensive experience in disaster recovery and regional offices in Seattle, Portland, and Spokane, BELFOR was well positioned to respond to the Mitchell Hall fire incident. BELFOR’s areas of expertise included restoration of building structures, office equipment, and paper documents. Based upon BELFOR’s initial analysis, none of the computer systems, nor any of their components, from MH 310 was deemed recoverable. On the other hand, all other computers in the building had escaped heat and water damage and could be “cleaned” and kept in service. These machines contained valuable data and were set up with individual users’ preferences. Recovering them would be a major benefit. The cleaning procedure for computers was a labor-intensive process that involved immersion in a special water solution followed by drying and dehumidification. System units, monitors, and most printers could be cleaned. Keyboards, mice, and speakers were cheaper to replace than to clean. Most of these systems were cleaned in the 30 days following the fire and then stored until the reopening of the building in mid-September. The college identified a few critical systems, based on the data they contained, for expedited cleaning and return to service.
Sources of Funding for Disaster Recovery
The cost of cleaning Mitchell Hall and rebuilding the computer system was significant. Estimates rose dramatically during the period that followed the fire. The initial estimate was $750,000. However, ongoing investigation of the damages and building
contamination indicated that the total cost would be much higher. By the end of July, the university issued a press release indicating that the costs required to replace equipment and make Mitchell Hall ready for classes would be approximately $4 million. In fact, the final cost was about $4.25 million. The repair costs were high because of smoke and water damage. The ventilation system in Mitchell Hall had used fiberboard ducts instead of metal ducts, and these had absorbed smoke and particles during the fire. The ducts could not be cleaned and had to be completely replaced on the top three floors. This required new ceiling tile installations to accommodate the new locations of the ventilation ducts. Carpets on these three floors also had to be replaced. MH 310 was completely gutted and rebuilt, starting with the metal studding in the walls. The remainder of the third floor had its drywall torn out and replaced. In addition, the electrical and network infrastructure was rebuilt, which required completely rerunning the associated wires back to an electrical closet on the third floor. BELFOR handled the cleaning for all computers and other electronic equipment that was of sufficient economic value to justify the work. These cleaning costs totaled more than $100,000. The replacement costs for computer equipment and software lost in the fire were approximately $150,000 — an amount that did not include the labor required to install and test the hardware and software. The cost of the recovery was paid from two sources. WU had a specialized Electronic Data Processing (EDP) insurance policy to cover hardware losses. This policy provided up to $250,000 (on a “replacement cost” basis) less a $25,000 deductible. The other source of funds was a self-funded reserve set up by the state. The amount of money provided from this fund was determined by the State Legislature.
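The split between the two funding sources can be sketched with the figures quoted above. The payout function below is a generic limit-and-deductible model offered for illustration, not the actual claim settlement:

```python
def edp_policy_payout(hardware_loss: float,
                      limit: float = 250_000,
                      deductible: float = 25_000) -> float:
    """Payout under a replacement-cost policy: the covered loss is capped
    at the policy limit, then reduced by the deductible (never below zero)."""
    return max(0.0, min(hardware_loss, limit) - deductible)

# Figures from the case: ~$150,000 of destroyed hardware and software,
# ~$4.25 million total recovery cost.
insurance = edp_policy_payout(150_000)   # 125000.0
state_share = 4_250_000 - insurance      # remainder came from the state reserve
print(insurance, state_share)
```

Because the hardware loss fell under the policy limit, the EDP policy covered only a small fraction of the total recovery; the bulk of the cost (building repair and cleaning) fell to the state's self-funded reserve.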
President Haskell invited several legislators to campus and conducted first-hand tours of Mitchell Hall so that they could view the damage. Based on their review of the situation, the State Legislature provided a sufficient amount of funding to WU to rebuild Mitchell Hall and to replace all the hardware and software that had been destroyed. In a press release, President Haskell stated “We are very pleased with the timely and supportive action from legislative leaders and the Governor’s office.”
Office space on the WU campus is at a premium. Fortunately, there was a small area in WU’s administration building, Thompson Hall, that had been recently vacated. This office, with perimeter cubicles surrounding a large open area, became the temporary location of CBA’s department offices. Each department chair and each department secretary had a workspace with a desk, phone, and limited storage for files and books. The offices had connectivity to the campus network. An open area was converted to a “bull pen” for faculty use and small group meetings. Because the incident happened in the summer, it was relatively easy to relocate the classes. During the school year, university classrooms are at 100% utilization during midday class hours, but in the Summer Quarter less than 50% utilization is common. The only class sessions lost were the few classes scheduled to meet on Friday, July 5th. Classes resumed as usual on July 8th, although in different rooms. Those that had been assigned to meet in the Mitchell Hall computer lab were relocated to another lab on campus.
Computer Hardware Resources Available for Immediate Use
For the immediate term, CBA was able to make use of 15 computers that had been received but not yet deployed in Mitchell Hall. These machines had been ear-marked as replacements for older systems and for recently hired faculty due to arrive in the fall. Fortunately, these systems were being held in the 23rd Street facility. They were quickly configured and delivered to the offices in Thompson Hall. The MIS program at Western had a set of 20 notebook computers that were used for classroom instruction as part of a “mobile computer lab.” At the time of the fire, they were stored in a locked cabinet in a second floor room in Mitchell Hall. These computers were examined by the health inspectors and deemed safe to use (i.e., cleaning was not needed). These machines contained wireless network cards and were configured with the Microsoft Office suite. No MIS classes were scheduled to use this lab during the summer, so these computers were signed out to faculty who did not have sufficient capacity in their home systems. The combination of new systems, the notebook computers, and individuals’ home systems was sufficient to equip all staff and faculty with adequate workstations. The loss of servers was much more problematic. The college lost every server it had with the exception of Iowa — the research and development server running Windows 2000 Server and MS SQL Server 2000. Iowa was located in Moss’s office on the third floor. Although this location was less than 100 feet from the scene of the fire, Iowa had been relatively protected in Moss’s office. Because Iowa had been operational continuously since the start of the fire and had not shown evidence of any problems, Moss and Worthington decided to keep it in operation (not clean it). 
Iowa was relocated to a well-ventilated area in the Thompson Hall offices.2 The university maintains many servers: a series of Novell systems used for student and faculty data, a server running Windows NT devoted to academic use, and a UNIX server used for most university, faculty, and student Web pages. Worthington was able to negotiate for space on one of the Novell servers to hold the data that had been stored on Wisconsin and Missouri.
Recovering Files from Novell OS Servers
Immediately following the fire, Worthington obtained temporary workspace in the 23rd Street building and set out to recover the data from the June 16th tapes. He quickly ran into a severe stumbling block. CBA used digital tape equipment that was newer than the drives attached to Kermit, WU’s Novell server. Fortunately, a tape drive in the correct format was attached to Grover, WU’s UNIX server. To recover data from Wisconsin and Missouri, it was necessary to copy the tape contents to a disk on Grover, then write the data to tapes in the format of the drives attached to Kermit, and finally restore from those tapes. Most of the data was available within a week, thanks to Worthington’s diligent efforts.
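Worthington’s three-hop restore (new-format tape to disk on Grover, disk to old-format tape, old-format tape to Kermit) is a media-staging pipeline. The toy model below uses hypothetical classes (the real work used tape drives and restore utilities, not Python), but it captures the invariant that matters: the data must arrive at Kermit unchanged after every intermediate copy.

```python
class Medium:
    """Toy stand-in for a tape or disk volume. Drives can only read and
    write media in their own format, which is why staging was needed."""
    def __init__(self, name: str, fmt: str):
        self.name, self.fmt = name, fmt
        self.contents: dict[str, bytes] = {}

def copy_all(src: Medium, dst: Medium) -> None:
    """Re-record everything on src in dst's format."""
    dst.contents.update(src.contents)

# The June 16 backup exists only on a tape in the newer format.
new_tape = Medium("CBA backup (June 16)", fmt="new-format tape")
new_tape.contents = {"wisconsin/faculty": b"...", "missouri/students": b"..."}

grover_disk = Medium("Grover staging disk", fmt="disk")        # UNIX server
old_tape = Medium("intermediate tape", fmt="old-format tape")  # readable by Kermit's drive
kermit = Medium("Kermit", fmt="disk")                          # Novell server

# Three hops: tape -> disk -> tape -> destination server.
for src, dst in [(new_tape, grover_disk), (grover_disk, old_tape), (old_tape, kermit)]:
    copy_all(src, dst)

print(kermit.contents == new_tape.contents)  # True: restore is lossless
```

Each extra hop adds time and handling but no data loss, which is why the week-long detour through Grover was acceptable.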
Recovering the data for CBA’s Windows-based servers took much longer than the Novell files and was not completed until July 31st. Worthington was able to recover portions of the CBA Web site, which had been backed up on June 16th. The recovered data consisted of those folders that professors and department chairs had prepared for their Web sites. As the files were recovered, they were copied onto the Iowa server. Unfortunately, the database data for the CBA WWW site, including the views and stored procedures used to execute queries, had not been backed up to tape and were lost. Sam Moss and Chanute Olsen set about reestablishing the site as quickly as possible. Once Iowa was operating in its new location in Thompson Hall, they arranged for the DNS entry for the Web site to point to Iowa’s IP address. They quickly rebuilt portions of the database and copied as much data as they could from other sources. On his home computer, Moss had current copies of many of the Active Server Page (ASP) files that were used to extract data and format it for the page displays on CBA’s WWW site. Moss immediately loaded these files onto Iowa. Unfortunately, Moss did not have copies of the ASP files that had been maintained by others on the Windows server. These files ultimately had to be recreated from scratch. Much of the information displayed on the college Web site was drawn from a database. Maintenance of the database data was done by the department chairs and secretaries. Once Iowa was fully operational, these persons had to re-enter data about their departments. Moss and Olsen contacted the professors who had sites on the college Web site. For those who had home back-ups of their data, it was easy to recreate their sites on Iowa. Those who did not have a back-up were forced to wait until files were copied from the back-up tapes. By the first of August, all Web sites had been restored and most of the database entries had been recreated.
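The DNS change that brought the site back amounts to updating one address record so the existing name resolves to the surviving server. A minimal conceptual sketch (the hostname and addresses below are hypothetical; the case does not give the actual values):

```python
# Toy resolver table standing in for the campus DNS zone.
zone = {"www.cba.wu.edu": "192.0.2.20"}  # hypothetical record for Maryland (destroyed)

def resolve(name: str) -> str:
    """Look up the address record for a name, as a stub resolver would."""
    return zone[name]

# Repoint the record at Iowa, the surviving R&D server (hypothetical address).
zone["www.cba.wu.edu"] = "192.0.2.45"

# Clients keep using the same name and now reach Iowa; nothing changes on
# their side once any cached entries expire (per the record's TTL).
print(resolve("www.cba.wu.edu"))  # 192.0.2.45
```

Because visitors address the site by name rather than by machine, this single record change let an R&D box stand in for the destroyed production server with no client reconfiguration.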
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
During the period following the fire, Moss thought about the challenges that the college would face once the immediate tasks of restoring data and temporary operations were accomplished. He knew that a major task would be rebuilding the College’s IT operations. Moss would need to work with his College Technology Committee through the months of July and August to come up with recommendations. As Moss pondered the situation, he realized that there were many unanswered questions, which he organized into two categories:

• Planning for IT Architecture: What system applications and services were needed at CBA? What were the hardware/operating system requirements for these applications? How should the applications and services for CBA be allocated across servers? How many servers should CBA acquire? Where should the servers be physically located, and under whose control?
• Assessment of Threats: What were the disaster-related threats to the CBA information systems? Would it be possible to design a CBA IT architecture that would minimize the impact of disaster and facilitate disaster recovery?
Planning for the Information Technology Architecture
Issues concerning IT architecture were at the top of Moss’s list. As discussed by Seger and Stoddard (1993), “IT architecture” describes an organization’s IT infrastructure and includes administrative policies and guidelines for hardware, software, communications, and data. While one option for CBA would be to replicate the architecture that had been destroyed, Moss did not believe that this would be the best way to go. The array of servers and systems before the fire had grown on an “as needed” basis over time, rather than being guided by any master plan. While the previous system had worked, Moss wondered if there might be a different, more effective IT architecture for CBA. Since CBA would need to rebuild much of its IT operations from the ground up in any event, Moss thought it would be worthwhile to take this opportunity to re-examine the IT service requirements and the way those requirements would be implemented. By doing so, it was possible that the college might adopt a new IT architecture. Moss was aware that an important first step in IT architecture planning is to determine the applications and services that will be needed (Applegate, 1995). To kick off the IT architecture planning process, Moss sent an e-mail message to all faculty and staff on the morning of July 8th. The purpose of the message was to solicit input from the CBA user community regarding IT service requirements. The e-mail message is shown in Figure 4. By learning more about the desired IT applications and services, Moss and his committee would be able to determine the types of application software that could be used to provide the specified services. This information would be useful, as it would have implications for the types of operating systems that would be needed. Due to operational considerations, Moss anticipated that no more than one type of operating system would reside on any given server.
After the services were identified and the operating system requirements were determined, the next step would be to decide on the number and type of server machines. Based on input from Worthington and others, Moss knew that an “N-tier” type of IT architecture would offer useful advantages for CBA. An N-tier design (also called multi-tiered or multi-layered) would involve separating data storage, data retrieval, business logic, presentation (e.g., formatting), and client display functions.

Figure 4: E-Mail Message from Sam Moss
From: Sam Moss
Sent: Monday, July 08, 2002 9:43 AM
To: CBA Faculty and Staff
Subject: Thinking about CBA Server needs
Importance: High
Colleagues, As we rebuild from the fire, we have an opportunity to rethink our server configurations. The first step in such a process is a requirements analysis (we all teach it this way, don't we?). To that end, I would appreciate your input on the following questions: What are the data and application program storage needs for CBA? What service requirements do we have? Don't answer this question in terms of a proposed solution (i.e., "We need two Netware servers and one NT server under CBA control.") but rather in terms of what must be delivered, as independent of technology as possible, to our faculty, students, and external publics. --Sam Moss, Chair, CBA Technology Committee

One benefit of an N-tier design is that the development staff can focus on discrete portions of the application (Sun Microsystems, 2000). It also makes the application easier to scale (increase the number of persons served) and facilitates future change, because the layers communicate via carefully specified interfaces (Petersen, 2004). Although multiple tiers may reside on the same server, it is often more efficient to design each server to host specific parts of the application. For example, database servers perform better with multiple physical disks (see McCown, 2004). The use of more than one server can also complement the security offered by network operating systems. For example, CBA provided file storage services for both faculty and students. Although these services could be hosted on a single machine, allocating the services to separate machines, and not allowing any student access to the “faculty” machine, would provide additional security for faculty files (e.g., student records, exams, faculty research). As Moss received responses to his e-mail message, he organized them in a table to help him and his committee analyze the requirements. As indicated in Table 3, Moss organized his list of application and service requirements for CBA into categories based on the type of server that might be used to host the services. A variety of different types of servers may be used in an N-tier architecture. Examples of some of the different types of servers and their functions include the following (TechEncyclopedia, 2004):
• • • • •
Application Server: The computer in a client/server environment that performs the business logic (the data processing). For WWW-based systems, the application server may be used to program database queries (e.g., Active Server Pages, Java Server Pages). Database Server: A computer that stores data and the associated database management system (DBMS). This type of server is not a repository for other types of files or programs. Directory Services: A directory of names, profile information and machine addresses of every user and resource on the network. It is used to manage user accounts and permissions. File Server: A computer that serves as a remote storage drive for files of data and/or application programs. Unlike an application server, a file server does not execute the programs. Print Server: A computer that controls one or more printers. May be used to store print output from network machines before the outputs are sent to a printer for printing. Research and Development Server: A local name used at CBA to describe a computer that is used for developing and testing software and programs. A single R&D server often hosts multiple tiers because demand (i.e., number of clients served) is light. Tape Back-Up Server: A computer that contains one or more tape drives. This computer reads data from the other servers and writes it onto the tapes for purposes of backup and archive. Teaching Server: Another local name used at CBA to describe a computer that is used for educational purposes to support classroom instruction in information
systems courses (e.g., WWW development, database management, network administration). Web Server: A computer that delivers Web pages to browsers and other files to applications via the HTTP protocol. It includes the hardware, operating system, Web server software, TCP/IP protocols, and site content (e.g., Web pages and other files).
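The tier separation behind an N-tier design can be illustrated with a small sketch. This is a hypothetical example, not code from the case: all class, student, and grade names are invented. The point is that each layer talks only to the one below it through a narrow interface, which is what allows a tier to be moved onto its own server later:

```python
# Minimal sketch of N-tier separation (illustrative names throughout).
class DataStore:
    """Data-storage tier: owns the raw records (a database server's role)."""
    def __init__(self):
        self._grades = {"alice": 92, "bob": 85}

    def fetch(self, student):
        return self._grades.get(student)


class GradeService:
    """Business-logic tier: applies rules (an application server's role)."""
    def __init__(self, store):
        self._store = store

    def letter_grade(self, student):
        score = self._store.fetch(student)
        if score is None:
            return None
        return "A" if score >= 90 else "B" if score >= 80 else "C"


class WebLayer:
    """Presentation tier: formats results for clients (a Web server's role)."""
    def __init__(self, service):
        self._service = service

    def render(self, student):
        grade = self._service.letter_grade(student)
        return f"<p>{student}: {grade or 'not found'}</p>"


# Wire the tiers together; in production each could run on a separate machine.
ui = WebLayer(GradeService(DataStore()))
print(ui.render("alice"))  # -> <p>alice: A</p>
```

Because `WebLayer` never touches `DataStore` directly, a later decision to move storage onto a dedicated database server changes only the wiring, which mirrors the scalability and maintainability benefits the N-tier literature cited above describes.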
Assessment of Threats
It was clear to Moss that the College had not been well positioned to handle an IT disaster. Although many books on IT management include references to disaster planning (e.g., Frenzel & Frenzel, 2004), the college had been too busy handling day-to-day IT operations to devote resources to disaster planning. Moss resolved to ensure that the next disaster, if there was one, would not be as traumatic. As part of this process, Moss wanted to identify the threats to CBA’s IT system (Microsoft Corporation, 2004b). As suggested by Table 2, Moss knew that IT disaster threats can come from the physical environment (e.g., WU’s location makes it susceptible to both earthquake and volcanic activity), from machine failure, and from human causes. With the help of the technology committee, Moss set out to identify the primary threats.
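A threat-identification exercise of this kind is often captured as a simple risk register that ranks each threat by likelihood and impact. The sketch below is purely illustrative: the threats listed and the 1-5 ratings are invented examples, not figures from Moss’s actual assessment.

```python
# Illustrative risk register: rank IT disaster threats by likelihood x impact.
# Threats and ratings are invented examples, not CBA's actual assessment.
threats = [
    # (threat, likelihood 1-5, impact 1-5)
    ("Fire in server room",      2, 5),
    ("Earthquake",               2, 5),
    ("Volcanic activity",        1, 4),
    ("Disk/hardware failure",    4, 3),
    ("Accidental data deletion", 3, 2),
]

# Simple exposure score: likelihood x impact; highest exposure first.
ranked = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name:<26} exposure={likelihood * impact}")
```

Even a crude ranking like this helps a committee decide where to spend scarce disaster-planning resources first: frequent low-impact failures can outrank rare catastrophes once both dimensions are scored.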
Table 3: CBA IT Requirements (Type of Server; Requirements for Services/Applications; Suitable Software or Operating Systems)

File Server
  Requirements: 1. Individual directories and files (faculty and student documents); 2. Class directories and files; 3. Homework and exercise drop-off; 4. College documents and records
  Suitable software/OS: Novell NetWare; MS Windows Server; UNIX and derivatives (e.g., Linux)

Print Server
  Requirements: 5. Printer queues for faculty and staff; 6. Printer queue for student lab
  Suitable software/OS: Novell NetWare; MS Windows Server

Database Server
  Requirements: 7. Data to support CBA Web site; 8. Student Information System data; 9. Faculty Information System data
  Suitable software/OS: MS SQL Server; Oracle; MySQL or another RDBMS

Application Server
  Requirements: 10. Web site active server pages; 11. Student and faculty information systems
  Suitable software/OS: MS IIS was previously used for item 10; item 11 is a future project

Web Site Server
  Requirements: 12. Static and dynamic pages for the college and departments; 13. Individual faculty pages; 14. Student club pages
  Suitable software/OS: Static pages could be on any of several systems, currently on MS Windows and UNIX servers; dynamic pages work in conjunction with item 10

Teaching Server
  Requirements: 15. Web development class; 16. Database management class; 17. Enterprise resource mgmt. class; 18. Network administration class
  Suitable software/OS: Classes 15-17 currently use Windows 2000 Server; class 18 uses Linux, Novell NetWare, and Windows 2000 Server

Research and Development Server
  Requirements: 19. Database management; 20. Web site development
  Suitable software/OS: Current research is all conducted on software that requires Windows

Tape Back-Up Server
  Requirements: Tape backup of all servers
  Suitable software/OS: Veritas Backup Exec; IBM Tivoli

Directory Services
  Requirements: File and print services login (items 1-6); college Web server and database server login (items 7-14, 19, and 20); academic course logins (items 15-18)
  Suitable software/OS: Novell eDirectory (formerly Novell Directory Services, NDS); Microsoft Active Directory (AD)
Once Moss and his committee had generated the list of threats to CBA’s IT operations, he hoped to design the IT architecture in a way that would minimize the impact of any future disaster and facilitate disaster recovery. Toigo (2000) has pointed out that decentralized IT architectures, such as an N-tier design, involve special considerations for disaster planning. To help minimize the impact of disaster, Toigo recommends that a decentralized system include high-availability hardware components, partitioned design, and technology resource replication. For example, having similar machines in multiple locations may provide an immediate source of back-up computing resources (as illustrated by the use of Iowa and Kermit after the CBA fire). Moss planned to arrange meetings with both OIS and OITS personnel to determine how CBA and WU could shape their IT architecture plans to minimize the impact of any future disaster.
REFERENCES

Applegate, L. M. (1995). Teaching note: Designing and managing the information age IT architecture. Boston, MA: Harvard Business School Press.
BELFOR (2004). BELFOR USA news. Retrieved May 30, 2004: http://www.belfor.com/flash/index.cfm?interests_id=41&modul=news&website_log_id=27802
Burd, S. D. (2003). Systems architecture (4th edition). Boston, MA: Course Technology.
Eklund, B. (2001). Business unusual. netWorker, December, 20-25.
Erbschloe, M. (2003). Guide to disaster recovery. Boston, MA: Course Technology.
Frenzel, C. W., & Frenzel, J. C. (2004). Management of information technology. Boston, MA: Course Technology.
Hoffman, M. (2003). Dancing in the dark. Darwin, September 1.
Lewis, W., Watson, R. T., & Pikren, A. (2003). An empirical assessment of IT disaster risk. Communications of the ACM, 46(9), 201-206.
McCown, S. (2004). Database basics: A few procedural changes can help maintain high performance. InfoWorld, February 20.
Mearian, L. (2003). Blackout tests contingencies. ComputerWorld, August 25.
Microsoft Corporation (2004a). Windows 2000 Terminal Services. Retrieved May 27, 2004: http://www.microsoft.com/windows2000/technologies/terminal/default.asp
Microsoft Corporation (2004b). Microsoft Operations Framework: Risk management discipline for operations. Retrieved June 1, 2004: http://www.microsoft.com/technet/itsolutions/techguide/mof/mofrisk.mspx
Novell Corporation (2004). Novell iManager. Retrieved May 27, 2004: http://www.novell.com/products/consoles/imanager/
Petersen, J. (2004). Benefits of using the N-tiered approach for Web applications. Retrieved June 1, 2004: http://www.macromedia.com/devnet/mx/coldfusion/articles/ntier.html
Rhode, R., & Haskett, J. (1990). Disaster recovery planning for academic computing centers. Communications of the ACM, 33(6), 652-657.
Seger, K., & Stoddard, D. P. (1993). Teaching note: Managing information: The IT architecture. Boston, MA: Harvard Business School Press.
Sun Microsystems (2000). Scaling the N-tier architecture. Retrieved June 4, 2004: http://wwws.sun.com/software/whitepapers/wp-ntier/wp-ntier.pdf
TechEncyclopedia (2004). Retrieved June 2, 2004: http://www.techweb.com/encyclopedia
Toigo, J. W. (2000). Disaster recovery planning (2nd edition). Upper Saddle River, NJ: Prentice Hall.
Verton, D. (2003). Tight IT budgets impair planning as war looms. ComputerWorld, March 10.
The facts described in this case accurately reflect an actual disaster incident that occurred at a university located in the United States. For purposes of anonymity, the name of the organization and the names of individuals involved have been disguised. In the two years since the incident, Iowa has operated continuously with no hardware problems. The monitor was cleaned and the mouse and keyboard were replaced, but the only cleaning the system unit received was a wipe of the outer surface. Iowa initially emitted a pungent smell, but that odor disappeared after two to three weeks.
From Principles to Practice: Analyzing a Student Learning Outcomes Assessment System

Dennis Drinka, University of Alaska Anchorage, USA
Kathleen Voge, University of Alaska Anchorage, USA
Minnie Yi-Miin Yen, University of Alaska Anchorage, USA
The College of Business Administration (CBA) was housed within a public, midsized, urban university located in its state’s largest city. The mission of the university was to promote scholarship and excellence in teaching, research, creativity, and service. Working towards this mission, the college’s faculty, administration, alumni, and community partners were dedicated to advancing the quality of learning and academic distinction of the university, while being actively engaged in using their talents and knowledge in service to their local and statewide communities. In recognition of this dedication and commitment to quality, the CBA was awarded accreditation by AACSB International (the Association to Advance Collegiate Schools of Business) in 1995. The CBA was composed of six academic departments: accounting, business administration, computer information systems (CIS), economics, logistics, and public administration. It offered associate and bachelor’s degrees in each department, a certificate in logistics, a master’s degree in business administration (MBA), a master’s degree in public administration (MPA), and a Master of Science (MS) in logistics and global supply chain management. The college was also home to several research, outreach, and economic development centers. These centers focused on social and economic research, international business, small business administration, regional economic policy and development, and international business education. The CBA served the local, state, and global communities by training and educating the workforce, by promoting and inspiring excellence in public, private, and non-profit management and related business disciplines, by providing professional assistance and advice to these organizations, and by conducting basic, applied, and pedagogical research.
The CBA had more than 1,300 students, of whom approximately 15% were pursuing one of the two-year associate degrees or a certificate, 75% were pursuing four-year baccalaureate degrees, and 10% were in one of the three graduate degree programs. Nearly 50% of the student body was considered non-traditional, either working towards their degrees on a part-time basis or attending classes solely for personal or professional development. Part-time students typically required six to seven years to complete a baccalaureate degree program. In 1987, the university was included in a state government-mandated consolidation that resulted in the merger of the regional community colleges with the university. One result of this merger was the partitioning of faculty into two distinct categories: faculty members from the community colleges were categorized as bi-partite faculty, whose job responsibilities included teaching and service, while faculty members from the original university were classified as tri-partite faculty, whose responsibilities, in addition to teaching and service, also included research. Faculty members have since unionized, with bi-partite and tri-partite faculty members represented by separate unions. The CBA had 46 full-time faculty members, of whom 12 were bi-partite and 34 were tri-partite.
of review was there an evaluation of courses for the purpose of guiding instructional improvement, rewarding the development of new courses, or rewarding innovation in teaching and learning; nor was there any formal review of the consistency between the thoroughly reviewed CCGs’ objectives and outcomes and those included in the course syllabi.
CASE DESCRIPTION

Project Background & Initial System Analysis
Alexis had recently been hired as an IT Manager for the CBA. In May 2003, she was assigned the project of developing the Student Learning Outcomes Assessment System. This was to be a highly visible project and, if developed and implemented successfully, it could help the college and possibly the university in a variety of ways, including standardizing formats and providing version control of CCGs. The primary goals of the new system included:

1. Assure consistency between the college mission, the college-level program outcomes, and the individual course CCG learning objectives.
2. Allow comparison of actual student outcomes to desired student outcomes.
3. Provide continuous improvement in student learning through curriculum revision by providing reports on the outcomes assessment results.
4. Increase operating efficiency and maintenance of AACSB accreditation by assisting in the documenting and tracking of curriculum changes throughout the curriculum review process.
information, the student learning objectives, and the assessment results. They decided to modify the original database design and complete the logical design of the larger system which could ultimately include a Web-based user interface. It was at this point that they decided Alexis would need to be brought in to oversee the project.
(SDLC) versus those of several other alternatives, such as Rapid Application Development (RAD), Joint Application Design (JAD), Object-Oriented Development (OOD), and so forth. She pulled out her collection of System Analysis and Design books and articles for help in determining the advantages and drawbacks of each approach and to review different project lifecycle approaches, such as waterfall, iterative, spiral, prototype, and design-to-tools (Berardi & Stucki, 2003; Cleland & Ireland, 2002; Hoffer, George, & Valacich, 2001; Kendall & Kendall, 2004; Kloppenborg & Petrick, 1999; McConnell, 1996; Valacich, George, & Hoffer, 2003; Whitten, Bentley & Dittman, 2004). She did not want to restrict the project to one specific approach without having a thorough knowledge of all the available resources she might have access to once the analysis and design phases were complete. Theoretically, Alexis knew that she could use a combination of the above proven methodologies depending on the needs and the available resources and skills faced by the CBA (Johnson, 2000; Mylopoulos, 1999; Wang, 1996). After reviewing her texts, Alexis realized she had other problems. She decided to arrange a meeting with Kevin and Mike so they could brainstorm design ideas and risks. She started by expressing her concern to them that since no dedicated IT development function currently existed within the college, no standard development tools could be identified. The prototype that Mike and Kevin had developed utilized Microsoft® Access, but they all agreed that the new system should be designed using a client/server model for the database with a Web-enabled user interface. Kevin and Mike agreed with Alexis’ assumption that the database design would have to be scalable to allow for both database growth and the expected continuous design changes — including possible university-wide data requirements. 
Since it appeared that the learning outcomes assessments and curriculum reviews would remain in an ongoing, iterative process of revision, the actual assessments used and the data collected would need to be able to take on a variety of forms over the years. For some stakeholders, the system would primarily be used as a data warehouse for both curriculum information and assessment results. However, Kevin pointed out that other members of the CAC had indicated that it would be desirable to keep historical data in the system so that, at some point in the future, past students’ learning assessment results could be compared with current results using data mining techniques. Mike and Alexis agreed that historical data would most likely need to be preserved. Kevin then pointed out that one primary responsibility of the CAC was to provide oversight for a five-year curriculum review cycle. That is, each course would need to be reviewed, updated, and revised, if necessary, at least once every five years. The purpose of this review went beyond accreditation and assessment needs; these reviews would be conducted to ensure that the curriculum being delivered in the classroom was meeting the current needs of both students and industry, and to ensure that all CCG document items, especially outcomes, objectives, and bibliographies, were up to date. While discussing this issue, Mike realized that even if the college implemented its own new curriculum review requirements, it would still need to satisfy the comprehensive review requirement of the UCC in order to obtain approval for changes. Therefore, it would be critical for the new system to aid in the university-level approval process, not hinder it.
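One way to reconcile the five-year review cycle with the desire to keep historical assessment data is to version curriculum records rather than overwrite them. The sketch below is our illustration only; the table layout, course number, and field names are invented and do not come from the CBA prototype. The idea is that each assessment result stays linked to the CCG revision that was in force when it was collected:

```python
import sqlite3

# Hypothetical schema sketch: CCG revisions are appended, never overwritten,
# so past assessment results can still be read against the objectives that
# were in force when the data was collected.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ccg_revision (
    rev_id   INTEGER PRIMARY KEY,
    course   TEXT NOT NULL,          -- e.g., 'CIS 376' (invented example)
    approved TEXT NOT NULL,          -- date this revision took effect
    outcomes TEXT NOT NULL           -- learning outcomes text for this revision
);
CREATE TABLE assessment (
    course   TEXT NOT NULL,
    semester TEXT NOT NULL,
    rev_id   INTEGER NOT NULL REFERENCES ccg_revision(rev_id),
    score    REAL NOT NULL           -- aggregate outcome measure
);
""")
db.execute("INSERT INTO ccg_revision VALUES (1, 'CIS 376', '1999-08-01', 'v1 outcomes')")
db.execute("INSERT INTO ccg_revision VALUES (2, 'CIS 376', '2004-01-15', 'v2 outcomes')")
db.execute("INSERT INTO assessment VALUES ('CIS 376', 'Fall 2003', 1, 72.5)")
db.execute("INSERT INTO assessment VALUES ('CIS 376', 'Spring 2004', 2, 78.0)")

# Historical comparison: each result is reported with the outcomes revision
# it was measured against, so old and new results are not conflated.
rows = db.execute("""
    SELECT a.semester, r.outcomes, a.score
    FROM assessment a JOIN ccg_revision r ON a.rev_id = r.rev_id
    WHERE a.course = 'CIS 376' ORDER BY a.semester
""").fetchall()
```

Appending revisions instead of updating in place also dovetails with the CAC’s interest in later data mining: nothing is lost when a CCG is revised during a review cycle.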
forth, were all issues that would eventually need to be addressed. She knew that she would be relying heavily on Mike and Kevin to complete a very thorough and comprehensive analysis of the information requirements, and she anxiously awaited their report.
Once the system was stable, data entry and maintenance functions could be decentralized on a department-by-department basis, gaining support and momentum along the way. To help Alexis understand the type of entry that might be required in the new system, they provided a sample of the data that the primary database tables might contain (Appendix E). As the beginning of the Spring 2004 semester approached, Mike and Kevin provided Alexis with a status report. They told her that they felt confident they would have all of the remaining system designs completed and documented, including all input screens and the base set of reports, before the end of January. They would also make recommendations, based on input from the CAC and the Associate Dean, about the implementation approach that should be pursued. In addition, they would complete a brief analysis of the different development tools that might be utilized. They both believed that the final database would be developed in Oracle®, but this might require housing the data on a university server rather than relying on a college server; the advantages and disadvantages of this option would have to be evaluated. In any case, at the end of January, the system had to be ready to move into the development phase.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Alexis reviewed the status report Mike and Kevin had prepared and realized she had only two months remaining to identify the development resources that would be necessary. She had a number of options to consider; however, at the moment, she wasn’t sure which would be the most cost-effective, efficient, or attainable. She knew she would need to carefully consider all practical design and development strategies currently available, given the unique issues and current business environment of the CBA. As she began to carefully identify and categorize the most significant challenges currently facing the project, she identified two main groups of issues: organizational/environmental issues and IT management issues.
disciplines would then have to develop a consensus on learning goals for their programs, consistent with their degree goals. These revisions would have to be done before the course-level outcomes and the assessment measures could be developed. Designing and implementing the SLOAS would also be a dynamic process that would need continuous refinement and enhancement. However, if this system was well designed, it was likely to help formalize and simplify the entire process.
Alexis also began to realize that she would not be able to use any existing IT staff for her project because of their lack of development expertise and because they were already overextended in their operational support. She also learned that the university IT staff, and potentially the system-wide IT staff, had the capabilities she would require; however, the project could not be completed in time for the AACSB review if she relied on these resources, since system development for academic units was not a priority in their jobs. CIS faculty, primarily Mike and Kevin, could be involved in the development and maintenance of the system, but their use would not be cost free. Their inclusion might require obtaining course releases for them, since their job descriptions focus on teaching, research, and service. They were each on nine-month contracts; therefore, extending their contracts over the summer would require additional funding. Their only commitment would be as a service activity through committee assignments. As a result, their participation on this project would be sporadic, especially throughout the summer months, so maintaining momentum on the project would be difficult. However, Alexis did not want to forget about the prototype that Mike and Kevin had already developed. The prototype was being used to collect and summarize survey data. Though the original intention had been for the prototype to be a “throwaway,” she was beginning to wonder if it could become more of an “evolutionary” prototype.

IT Implementation

One of Alexis’ major concerns with the design and development of this system was the integration of the data with the larger, university-wide curriculum and enrollment systems already in existence.
It was possible for the SLOAS to source or download course and enrollment data from the campus-wide system; in turn, the data collected from and processed by the SLOAS would also need to be integrated into the campus-wide system for university-level learning outcome assessment purposes. However, integration raised other issues, such as the inconsistencies between definitions of objectives and outcomes at the different levels. Alexis discovered that the university was accredited by one board while the college was accredited by another, and the data needed by each did not easily map from one onto the other. No student outcome system similar to the SLOAS existed elsewhere in the university. Another integration issue was that the UCC had no standard CCG document format. These and other similar issues needed to be coordinated between the CBA, Alexis, and outside parties such as other colleges, the system-wide IT group, and the UCC. An articulated policy that established the required standards for both business processes and data formats was essential for the development of this system. Being the leader in the development of this system would give the CBA and Alexis the opportunity to take the leading role in implementing a larger, highly visible, university-wide system, though it would also present them with many other significant challenges. As Alexis reviewed and contemplated these issues, she wondered what the best development strategy would be. She knew she would need to be creative and resourceful if this project was to be a success.
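The mapping problem Alexis faced can be made concrete with a small sketch. This is entirely hypothetical; the outcome codes and names are invented for illustration. When two accreditation bodies define outcomes at different granularities, some items map cleanly, some fold into broader categories, and some have no counterpart at all; it is the unmapped remainder that forces a coordinated standards policy:

```python
# Hypothetical illustration of mapping college-level (AACSB-oriented) outcome
# codes to university-level outcome codes. All codes are invented examples.
college_to_university = {
    "CBA-COMM": "UNIV-WRITE",     # communication maps to a writing outcome
    "CBA-QUANT": "UNIV-QUANT",    # quantitative skills map one-to-one
    "CBA-ETHICS": "UNIV-CITIZEN", # ethics folds into a broader citizenship outcome
    "CBA-TEAM": None,             # no university-level counterpart exists
}

def translate(codes):
    """Split college outcome codes into (mapped, unmapped) for integration."""
    mapped, unmapped = {}, []
    for code in codes:
        target = college_to_university.get(code)
        if target is None:
            unmapped.append(code)   # needs a policy decision, not a lookup
        else:
            mapped[code] = target
    return mapped, unmapped

mapped, unmapped = translate(["CBA-COMM", "CBA-TEAM", "CBA-ETHICS"])
```

The `unmapped` list is precisely what an articulated data-standards policy would have to resolve before SLOAS data could feed the campus-wide assessment system.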
REFERENCES

AACSB International. (n.d.). Accreditation standards: Assurance of learning standards. Available from: http://www.aacsb.edu/resource_centers/assessment/standards.asp
Apple, D. K., & Duncan-Hewitt, W. (1995). A primer for process education. Corvallis, OR: Pacific Crest Software.
Apple, D. K., et al. (1995). Foundations of learning. Corvallis, OR: Pacific Crest Software.
Barr, R. B., & Tagg, J. (1995). From teaching to learning: A new paradigm for undergraduate education. Change, (Nov/Dec), 12-25.
Berardi, M., & Stucki, C. (2003). Audit and control, SAC: System development overview. ITAudit, 6, (February 1).
Cleland, D. I., & Ireland, L. R. (2002). Project management: Strategic design and implementation (4th edition). New York: McGraw-Hill.
Gardiner, L. F., Anderson, C., & Cambridge, B. L. (Eds.). (1997). Learning through assessment: A resource guide for higher education. Washington, DC: American Association for Higher Education (AAHE).
Hoffer, J. A., George, J., & Valacich, J. (2001). Modern systems analysis and design (3rd edition). Prentice Hall.
Johnson, R. A. (2000). The ups and downs of object-oriented systems development. Communications of the ACM, 43(10).
Kendall, K. E., & Kendall, J. E. (2004). Systems analysis and design (6th edition). Prentice Hall.
Kloppenborg, T. J., & Petrick, J. A. (1999, June). Leadership in project life cycle and team character development. Project Management Journal, 30(2), 8-14.
McConnell, S. (1996). Rapid development: Taming wild software schedules. Microsoft Press.
Mylopoulos, J. (1999). From object-oriented to goal-oriented requirements analysis. Communications of the ACM, 42(1).
Valacich, J. S., George, J. F., & Hoffer, J. (2003). Essentials of systems analysis and design (2nd edition). Prentice Hall.
Wang, S. (1996). Two MIS analysis methods: An experimental comparison. Journal of Education for Business, 71(17).
Whitten, J. L., Bentley, L., & Dittman, K. C. (2004). Systems analysis and design methods (6th edition). New York: McGraw-Hill.
Challenges of Complex Information Technology Projects: The MAC Initiative

Teta Stamati, University of Athens, Greece
Panagiotis Kanellis, University of Athens, Greece
Drakoulis Martakos, University of Athens, Greece
institutions were brought together into a single School of Education, and the Department of Design joined the Faculty of Technology. Furthermore, there were plans to split the College Department of Human and Environmental Sciences into a Department of Sports Sciences and a separate Department of Geography and Earth Sciences. In addition, Isambard was for the first time planning to establish an Arts Faculty. This re-organization was the cause of considerable instability. Added to this was the intensification of the competition for research funding. Changes in the Funding Council’s allocation model were directed towards greater selectivity in the use of research funding and an increased emphasis on research quality and proven research success. For these reasons, Isambard was experiencing a shift in its funding arrangements and had to obtain external funding to compensate for a reduction in central funds through the Higher Education Funding Council for England (HEFCE). Whereas in the past there were one or two revenue streams to be maximized, now there were at least five. These included:

• Central funding from the HEFCE based on a series of assessments (for example, the Research Assessment Exercise)
• Project-driven funding from UK research councils and from the European Community
• Collaborative and contract research for industry and commerce
• Overseas student fee income
• Conference accommodation and catering income
Hence, it was towards the end of the ’80s and the beginning of the ’90s that Isambard University found itself exposed to an operating environment that in many respects was borrowing the business-like characteristics of the commercial sector. In the Vice Chancellor’s own words: “The only cloud on our horizon as we start the new year is the uncertainty of the environment in which we will be seeking to put those values [to continue to be a mixed teaching and research university which is financially sound; and to be characterized by teaching and research which is of relevance to its user community] into practice. 1995 entered with less clarity about the future of the UK Higher Education system than most of us working in it have ever known.” (Sterling, 1995, p. 16)
The following elements constituted the framework for the university’s computing infrastructure:

• UNIX for main service operating systems
• Networks based on X.25 and Ethernet
• IBM compatibility for PCs
• Adoption of UNIX-based workstations
• Application software of industry standards
• Centralized file service
It was also recognized that all administrative work ought to be underpinned by effective information and management systems. Historically, the administrative computing capability had been developed to service the central administrative functions. As the management and administrative tasks and activities undertaken by departments and faculties increased, so did the need for support in those areas. This change in responsibility brought about the development, within some departments and faculties, of local systems to support their management and administrative activities and needs. In parallel with this, there was an increasing demand from departments and faculties for management information from central administration and support, in terms of access to system facilities. In 1988, it was observed that, in terms of hardware, the host machine supported about the maximum number of peripherals it could, and was utilized beyond the normal expected level. This meant that any further expansion of support was not feasible without increasing computing power and capacity. In addition, the terminal access to administrative systems provided to individual departments via the university’s network did not provide an adequate response for those remote users, and the service level did not always fulfill their needs. It was not necessarily the case that the information held within the systems was inadequate, but barriers existed which prevented or hindered its use by the departmentally based staff who needed it. There were also issues associated with the data itself, and it was felt that these could probably be resolved by developing new hardware and software architectures to support the differing needs of the users. In summary, the main issues were:

• Format and Structuring: Data was not formatted and structured so that it could be presented to the user in a useful and meaningful way.
• Access: There was limited access to the data, caused primarily by technical constraints.
• Currency: Data was found to be current for one set of users but out of date for others, due to differences in need and timescale.
• Ownership: There were areas where a lack of ownership definition and responsibility had resulted in a lapse in the maintenance of the data. Where ownership was at the center, but data was derived from other sources, there were problems in maintaining it. An example was customer records, where ongoing information was provided from many sources, but no area was responsible for collecting the data and there was no means of distributed input. Any breakdown of communication resulted in central and departmental information being different.
• Completeness: There was a wealth of information in all subject areas held by individual departments and within the faculties, which was not captured effectively. The necessary mechanisms (i.e., coordinated and integrated systems) did not exist to enable this to happen.
The software applications processing this data had been developed over the preceding 12 years. Their development had been tailored to the specific needs of the users as they applied at the time of development or subsequent amendment. As management and administrative roles and responsibilities were undergoing change, new users were bringing a new set of needs to be satisfied. Similarly, changing circumstances and pressures — unpredictable demands from the Universities Funding Council (UFC)4 and changing rules for allocating funds — were bringing about different needs. During 1988-1990 it became clear that, while the existing systems satisfied many of the central administrative requirements, new needs were arising in both the management and the administration of the university.
CASE DESCRIPTION: MANAGEMENT & ADMINISTRATIVE COMPUTING INITIATIVE
The UFC’s Management and Administrative Computing (MAC) initiative was announced in September 1988. The aim of the initiative was to promote the introduction of more effective and sophisticated systems to support the increasingly complex decisions that faced universities and colleges (Kyle, 1992). In addition, the systems were to provide the UFC with the information needed to allocate funds more effectively across the pool of universities. The cost of institutions ‘doing it alone’ was estimated at £0.5 million or more each. To avoid this, the Universities Grants Committee (UGC5 — the precursor to the UFC) commissioned a study to develop an information/data specification, or ‘Blueprint’, which aimed to cover 80-90% of the needs of any single institution. A Managing Team was formed, and an initial study based on direct input from five universities and contributions from 20 more was completed. The team, comprising senior computing staff and university administrators, was chaired by the Vice Chancellor of the University of Nottingham. The UFC decided that it would only fund information technology developments for MAC that were organized to suit ‘families’ of universities. The objective was to group institutions into five or six families with similar computing requirements. Whilst geographic proximity was helpful in promoting frequent contact between family members, it was not to be the only consideration. Others included similarity in size, structure, type of institution, existing collaboration (for example, on purchasing), and computing development needs.
Challenges of Complex Information Technology Projects
In March 1989 the blueprint was sent to all universities, together with a request that each university prepare a ‘migration strategy’ report. This would have to cover each university’s present administrative computing situation, both in terms of its computing hardware and its existing applications, and its development priorities and requirements for the future, and additionally:
• A comparison of the information needs of the university with the generalized blueprint, and an identification of gaps between the two
• The identification of the characteristics of the institution, in order for the Managing Team to classify it
• The development of an outline strategy for migration from the university’s existing systems to the outline architecture in the blueprint
Isambard’s migration strategy was prepared with the assistance of two consultants from Ernst and Young and emphasized the importance placed by the university on the provision of management as well as operational information. Two additional features were also highlighted: one was the need to conform to the university’s own Information Technology strategy6, which was being prepared at about the same time; the other was the fact that a new development platform had to be selected for any future systems, as the existing systems were coming to the end of their useful life. This led to a decision to integrate management and administrative computing systems, and this decision for integration was one of the principal factors behind the commitment to the Oracle database platform, as it was the one supported by the university’s computing services. The migration strategy was sent to the UFC in July 1989.
operandi had to be drawn up for the Family, in addition to a plan of its activities. This was necessary in order to obtain funding from the UFC. The constitution established a Management Board in which each university had one representative and one vote. A Chairman was elected from among those representatives, and the Family was incorporated as a limited company known as Delphic Ltd. The Board also decided to form a number of what they called Application Groups, one for each area of the management and administrative systems identified in the Price Waterhouse Blueprint. This did not mean that the groups had to undertake the development of the systems themselves, but that they were to be responsible for working directly with the commercial contractors employed by the Family. Each member of the Family had to be a member of at least one group, and Isambard took the decision to join the Management Information Application Group.
The outcome was that Mantis UK was offered the contract to develop the full set of management and administrative systems. The recommendation was formally accepted by a meeting of the Management Board in September 1990, and a contract was subsequently drawn up with Mantis UK with the assistance of specialist legal advice. The complexities of the negotiations over the contract were such that it was not formally signed until May 1991, although the work itself started and continued during the negotiation period. Although the MAC system was designed as one closely integrated system, its software was to be made available in phases (see Appendix). All applications, with the exception of payroll, would use SQL Forms V.3, with pop-up windows and the like as part of the user interface. The Finance application was based on Mantis’s own accounting package, which was to be enhanced to cater for the additional functionality requested by the Family. Whenever the Mantis development team finished writing and testing a release of software, it was passed to the appropriate Application Group to run its own acceptance tests. It is important to note that the ‘80/20’ rule applied here. A small part of the system was left to the discretion of the programmers working at each of the universities, who, after a Mantis software release and in close cooperation with Mantis developers, would attempt to ‘tailor’ the system to the specifics of their sites (Pollock, 2001). If an institution encountered problems in running the software, the ‘Delphic Support Desk’ had to be contacted. The desk would assess the problem and then pass the solution back to the institution responsible for the particular application. If the problem could not be resolved, it was forwarded back to Mantis, which had to redesign and rebuild the application.
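The support workflow just described — a problem report goes first to the Delphic Support Desk, is routed via the institution responsible for the application, and is escalated to Mantis UK only if unresolved — can be sketched as a simple routing function. This is purely illustrative: the case describes an organizational process, not software, and the application-to-institution assignments below are invented for the sketch.

```python
# Illustrative sketch of the Delphic support escalation path described in
# the case. The application-to-institution assignments are hypothetical.

RESPONSIBLE_INSTITUTION = {
    "finance": "University A",   # invented assignment, for illustration
    "students": "University B",  # invented assignment, for illustration
}

def route_problem(application: str, resolved_locally: bool) -> list[str]:
    """Return the escalation path a reported problem follows."""
    path = ["Delphic Support Desk"]  # first point of contact
    owner = RESPONSIBLE_INSTITUTION.get(application, "unknown institution")
    path.append(f"responsible institution: {owner}")
    if not resolved_locally:
        # Unresolved problems were forwarded back to Mantis UK, which
        # had to redesign and rebuild the application.
        path.append("Mantis UK (redesign and rebuild)")
    return path

print(route_problem("finance", resolved_locally=False))
```

The point of the sketch is the fixed escalation order: the desk never bypasses the responsible institution, and Mantis is reached only as a last resort.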
Management & Administrative Computing Initiative Outcome for the Delphic Family
Towards the end of 1994, with the funding for the MAC Initiative nearing its termination date of March 31, 1995, the Delphic members were experiencing severe delays in the delivery of the main application packages. The Anticipated Availability Schedule (see Appendix) shows the time slippages. Kyle (1994) summarized some of the main causes for the delays as follows:
1. The design of the Student Structure was found to be flawed and had to be redone.
2. Mantis’s decision to merge its development team responsible for its own Finance package with the one responsible for MAC’s Finance module.
3. The loss of senior Mantis development staff, particularly during critical design stages.
4. The introduction of a new stage: implementation by a test (lead) institution between the end of acceptance testing and the release of an application in its supported state.
5. The decision of Delphic to make modules available in ‘baskets’. This meant that the first module accepted had to wait until the acceptance of the last module in the basket before it could be implemented.
not been anticipated — that of semesterization8. Semesterization was felt to be clearly overdue: a departure from a rigid and inflexible academic structure, originating at the beginning of the last century, towards a more open and clearly more cost-effective scheme. As a result of semesterization, Isambard, for example, was able to increase its student numbers considerably by offering a wider range of choice in the structure of its courses, rather than only the four-year thin sandwich course option. This change affected mainly the Student Module. The fact that in 1994 parts of it had still not been contracted (see Appendix), although the initial delivery date for the completed module was July 1992, shows clearly the magnitude of the effect of this change. The Student Module was driven by what were called “Program Structures” — schemes of study. “Program Structures” was designed in such a way that, in an attempt to provide integration, every single module dealing with student administration was required to know the program structure. For example, the Student Registration, Student Finance, and Assessment and Degree Conferment modules related first of all to the Program Structure and its maintenance, and in effect were totally dependent on it. Because of semesterization, this module’s development had to start again virtually from scratch, and it was estimated that its delivery had to be put back by a year to 18 months. Twenty-six months later there was still no definite delivery date, although it was estimated that a ‘formal’ deliverable would take another two years. Needless to say, no member of the Family could afford to bear the cost of a product that had not been proven to work, and for which acceptance tests had to take place throughout a whole academic year and be evaluated against the annual cycle of activities.
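The coupling described above, in which Student Registration, Student Finance, and Assessment and Degree Conferment all depended on the Program Structure, can be sketched as a small dependency graph: redesigning the root for semesterization invalidates every module downstream. The module names follow the case; the representation itself is invented for illustration.

```python
# Illustrative dependency graph for the Student Module, following the
# case's description: every student administration module depended on
# "Program Structures", so its redesign for semesterization cascaded.

DEPENDS_ON = {
    "Student Registration": ["Program Structures"],
    "Student Finance": ["Program Structures"],
    "Assessment and Degree Conferment": ["Program Structures"],
}

def affected_by(changed: str) -> list[str]:
    """Modules that must be reworked when `changed` is redesigned."""
    return sorted(m for m, deps in DEPENDS_ON.items() if changed in deps)

print(affected_by("Program Structures"))
# ['Assessment and Degree Conferment', 'Student Finance', 'Student Registration']
```

A single change to the shared root touches every dependent module, which is exactly why the delivery estimate slipped by a year or more.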
The metaphor of the old lady who wants to cross the road and waits for someone else to do it first, to see whether they get run over, illustrates the case. Angela Crum Ewing, deputy registrar at Reading University (a member of the Delphic Family), said after they decided to hold onto their in-house applications rather than implement a MAC solution: “MAC is in a position of transition. We did not want to commit to a new, untried system, when we had our own in-house systems which worked well” (Haney, 1994). A ‘sneak preview’ of the modules by Family members resulted in much skepticism about the future, since continuing disappointment would mean dissatisfied stakeholders who would keep pressing for the project’s abandonment. The effect of semesterization had major repercussions not only on Mantis UK as the system developer, but on all members of the Family who were counting on the deliverables and had already made their migration plans. For Isambard University, the quantifiable costs alone amounted to more than £50,000 — two extra man-years of further systems development work that no one had anticipated.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Students: Although at the time Isambard’s existing system infrastructure could hardly accommodate semesterization, the administration of the university, tired of waiting for Delphic to come up with a deliverable, was pushing persistently for a new system. In November 1993, after ‘shopping around’ for any Mantis-based student system in use that could satisfy Isambard’s own requirements, a decision was made to consider the system of the University of Liverpool. After some time it was found that, for a number of reasons, this was not the solution either. Firstly, the system was designed to meet Liverpool’s own requirements in a very specific way and had never been developed as a package for other universities to use; Isambard’s own requirements were completely different. Secondly, it was developed on an older version of Mantis, which meant that its blind adoption would pose problems in the future concerning its integration with any Delphic deliverables, while an attempt to modify it would mean major overhead. Finally, from a technical point of view, the system was not documented — a ‘black box’ in the systems team’s own words. Isambard had no alternative but to design and develop its own in-house student system, whose first phase went live in the first week of October 1994 to coincide with the beginning of the new academic year. The system covered the Registration process, but no project was under way in the two other main areas — Student Finance and Student Accommodation. Finance: The development of the Finance module, which was a base offering from Mantis UK enhanced to meet the extra requirements, was also off schedule. As a result, a quasi-commercial Mantis accounting package was adopted and implemented. The package had nothing specific to offer universities, and had there been a choice, it would not have been taken on board by Isambard.
It was developed by Mantis UK (in much the same way as Price Waterhouse delivered its MAC Blueprint) in an attempt to quickly capture a slice of the off-the-shelf software market, which it had decided to enter a couple of years earlier. This meant that several enhancements were necessary, and it took more than 200 person-hours just to determine whether or not it could replace the existing system. Subsystems to deal with the maintenance of research contracts, and to allow for the issuing of monthly statements of accounts to heads of departments and senior researchers, were designed, and eventually the system went ‘live’ in August 1994 — the beginning of the new financial year.
Figure 1: MAC Modules Adopted by Isambard University after Almost Six Years of Systems Development
Staff: Following the installation and assessment of the pre-release version of the first module from Delphic (Posts, People, Appointments and Organization), the implementation team agreed to adopt it, and the old system was subsequently discontinued in September 1993. It was replaced by this and the second module (Skills, Recruitment and USR Return). However, at that time (September 1993) Delphic had still not provided any documentation for the system. Physical Resources: The initial Delphic offering proved to be ‘overkill’ for Isambard’s requirements. It provided more than was actually needed, and two key areas had already been covered by in-house-developed Mantis systems. One area was the administration of the university’s own housing facilities and the people who occupied them; the other was an inventory system for mobile equipment. The Delphic offering still held some attraction for Isambard’s Management Services team, but only when used in conjunction with the Delphic Finance Module, as it offered the facility to debit a departmental account directly as soon as an item was issued from a store. The Stock Control Module was at the time running in test mode, but as these two packages were designed to be highly integrated, there was a deadlock situation, as the Finance module had not been delivered. Moreover, as mentioned above, a commitment had been made to the in-house-developed finance system, which was unlikely to be replaced for at least two years. Research and Consultancy: No view had been formed about this module, as there had not been a delivery. Supposedly it provided the ability to maintain profiles of staff and of potential customers who might require applied research to be undertaken by the university on their behalf. An in-house-developed Mantis system was then in operation, centered around publications of Isambard staff and information on customers.
The accounting side (e.g., the recording of costs against research projects) was partly accommodated by the core finance system. Again, it was rather like Physical Resources: nothing particularly attractive, given the overhead of implementing either of the Delphic modules, which tended to be rather sophisticated for Isambard’s requirements. Payroll: A bureau service from a leading UK bank catered for the payroll function at Isambard. The view of the Director of Financial Services was that it was adequate, and he was therefore cautious and opposed any change. What was lost by this decision, however, was the integration and the economies, such as savings in paperwork and clerical time, that came with the Delphic module and that spanned the two interconnected functions of payroll and personnel. The high level of integration offered between Delphic’s Payroll module (not delivered at that time) and the Staff modules was nonetheless attractive to Isambard, as it had implemented the latter. After some careful consideration it became clear that its adoption was very unlikely to happen: at the outset it seemed a very general package, and again many enhancements would have been necessary, a significant undertaking considering the size of Isambard’s Management Systems Team and its constrained timescales. Management Information: Similarly, no ‘final’ view was formed. There had been a development where Management Information was considered to be the ‘Cinderella’
that they were offering. Graham Kyle, manager of the Management Services team, summarized the situation eloquently: “…as you can observe, the way we are staggering here at Isambard, there is no sign of integration as far as we are concerned.” One feature of Delphic that did not apply to any of the other families was that from day one the deliverable was designed as one system. This caused Mantis UK problems because, when the first major slippage occurred (the Students Module), Mantis had to respond to pressure from the Delphic representatives, who demanded some deliverables. This meant that Mantis had to unbundle the system by separating and redesigning the links, a major cause of MAC’s failure to meet deadlines. Almost all deliverables were at least two years late, according to the dates quoted by Mantis UK in the original specification, and this caused considerable stress and frustration at Isambard, which had to decide which route to follow regarding its infrastructure: wait and see how Delphic would handle the situation after the termination of the contract with Mantis; try to integrate the various partial solutions described at the beginning of this section; or make a fresh start, abandoning all previous investments? Difficult choices indeed, and hardly the type one expects to face at the end of an information technology development project that started with the best of expectations.
Haney, C. (1994). Universities snub software policy. Computing, (September 22), 14.
Hillicks, J. (2002). Development Partnerships between HE and Vendors: Marriage Made in Heaven or Recipe for Disaster? Online: www.jiscinfonet.ac.uk/Resources/external-resources/development-partnerships/view
Kanellis, P., & Paul, R. J. (1995, August 2-5). Unpredictable change and the effects on information systems development: A case study. In W. A. Hamel (Ed.), Proceedings of the 13th Annual International Conference of the Association of Management (pp. 90-98). Vancouver, BC: Maximilian Press Publishers.
Kanellis, P. (1996). Information Systems and Business Fit in Dynamic Environments. Unpublished PhD dissertation, Brunel University, UK.
Kyle, G. W. (1992). Report on the UFC MAC Initiative. London: Brunel University.
Kyle, G. W. (1994). MAC Situation Report. Brunel University, UK.
Philips, T. (1996). MAC Progress Report. Online: www.bris.ac.uk/WorkingGroups/ISAC/13-2-96/i-95-10.htm
Pollock, N. (2001). The tension of work-arounds: How computer programmers. Paper submitted to Science, Technology & Human Values.
Sterling, M. (1995). Vice-Chancellor’s Report to Court. Brunel University, UK.
Royal Charters have a history dating back to the 13th century. The original purpose was to create public or private corporations and to define their privileges and purpose. Nowadays, Charters are normally reserved for bodies that work in the public interest and can demonstrate pre-eminence, stability and permanence in
their particular field. Many older universities in England, Wales and Northern Ireland are also chartered bodies.

Sandwich courses involve a period of work in industry or a commercial organization. On a ‘thick’ sandwich course, the student spends the third year working away from university. The ‘thin’ sandwich course has placements lasting six months in each calendar year.

The CNAA was founded by Royal Charter in 1964, with the object of advancing education, learning, knowledge, and the arts by means of the grant of academic awards and distinctions.

The UFC became the Higher Education Funding Council for England (HEFCE), which was established following the Further and Higher Education Act 1992. A principal feature of the legislation was to create one unified higher education sector by abolishing the division between universities and polytechnics.

Under the Education Reform Act of 1988, the University Grants Committee (UGC) was replaced by the Universities Funding Council (UFC), which in turn was replaced by the Higher Education Funding Council for England (HEFCE) to conform to the Further and Higher Education Act 1992, which made provision for a single system of higher education, with a unified funding structure and separate funding councils for England, Scotland and Wales.

It was during 1989 that Isambard University was required to prepare a renewed internal information technology strategy to support its bid to the UFC’s Computer Board for funds related to academic computing from 1990 onwards. The principal objective of the strategy was to make available a range of integrated computing facilities to staff and students throughout the University, using an infrastructure of distributed computing based on campus networking.

Members comprised the chairmen of the six Application Groups, plus a couple of other members nominated by the Management.

A standard of measurement in higher education used to group weeks of instructional time in the academic calendar.
An academic year contains a minimum of 30 weeks of instructional time. An individual semester provides about 15 weeks of instruction, and full-time enrollment is defined as at least 12 semester hours per term. The academic calendar includes a fall and spring term, and often a summer term.
CEM Corporation2 is a world leader in the design, development and manufacture of internetworking storage IT infrastructures. The company’s core competencies are in networked storage technologies, storage platforms, software, and services that enable organizations to manage, protect and share information better and more cost-effectively. CEM was founded in 1979 and launched its first product in 1981: a 64-kilobyte integrated circuit memory board developed for the then popular Prime minicomputer platform. CEM’s sales passed the $3 million mark in 1982 and reached $18.8 million two years later. In the mid-1980s, CEM launched a series of memory and storage products that improved performance and capacity for minicomputers made by IBM, Hewlett-Packard, Wang, and Digital Equipment Corporation. The company went public in April 1986, a year in which sales hit $66.6 million and a net income of $18.6 million was achieved. In the late 1980s, CEM expanded strongly into the auxiliary storage arena, where it remarketed other suppliers’ magnetic disk drive storage subsystems, often coupled with its own controller units. In 1987, the company introduced solid state disk (SSD) storage systems for the minicomputer market, and its headquarters moved to Hopkinton, Massachusetts. In 1988, its stock was listed on the New York Stock Exchange, and in 1989 CEM accelerated its transition from a supplier of memory enhancement products to a provider of mass storage solutions. In 1997, more than 70% of the company’s engineers were dedicated to software development for mass storage technologies. Software sales rose from $20 million in 1995 to $445 million in 1998, making CEM the fastest-growing major software company in its industry sector. In 2001, CEM was named one of Fortune’s 100 best companies to work for in America. In the same year, the company launched a major new global branding initiative. CEM Corporation’s total consolidated revenue for 2002 was $5.44 billion.
SETTING THE STAGE
From its inception, CEM recognized the importance of learning within the organization: accordingly, it facilitated learning development and support for its employees, including technical skills, business skills, IT skills, management skills, and individual personal development. Prior to 2000, learning development and support was facilitated through a number of training services, which included:
• A Corporate University, which provides training throughout CEM, including induction training for new staff, corporate guidelines, professional and project management guidelines, and computer skills.
• A Professional Global Services Training department, which supports field and sales staff at CEM.
• A Global Technical Training department, whose main aim is to address the advancing technologies in the ever-evolving hardware, software products, and support applications and processes.
• Human Resources Training Centers, which support the soft-skill training of managers, supervisors and individual employees.
• Technical Libraries and Personal Development Libraries.
• A Continuing Education Program, which provides financial support and study leave.
These diverse training services within CEM had, for some time, been successfully delivering training and learning support to a number of distinct areas within the corporation. However, by the year 2000, CEM recognized that it was facing a number of key challenges in relation to its organizational learning processes. These included the following:
• As a large multinational organization with a constantly growing global workforce of 20,000-plus employees, the overall management of the learning of all employees using multiple training organizations was becoming increasingly difficult. In particular, the management of course enrollments, training paths and individual competency levels posed a significant challenge.
• There was some duplication of effort across many of the training services and a distinct lack of consistency in how training was being developed and delivered. Specifically, there was a lack of coherence in relation to how content was being created and administered.
• From the point of view of an employee, there was no overall catalogue of courses that outlined the training or learning programs available from each of the training services.

By 2000, the business environment in which CEM Corporation operated was rapidly evolving and becoming more intensely competitive: hence, learning and the management of learning began to play an increasingly critical role in the ongoing success of the organization. Within this context, CEM needed to replace the isolated and fragmented learning programs with a systematic means of assessing and raising the competency and performance levels of all employees throughout the organization. In addition, CEM wished to establish itself as an employer of choice by offering its people extensive career planning and development opportunities.
In response to these challenges, CEM decided to implement an enterprise learning solution. The stated business drivers for deploying this enterprise learning solution were to:
• Decrease time-to-competency.
• Develop and manage skill sets for all employees.
• Leverage global, repeatable and predictable curriculum.
• Integrate competency assessments with development plans.
• Accelerate the transfer of knowledge to employees, partners, and customers.
• Provide a single learning interface for all internal and external users.
tracking of individual personal development training. Having considered several LMS then available from different vendors, CEM Corporation chose Saba Learning Enterprise™ (see Appendix A for a brief overview of Saba Software Inc.). In February 2001, CEM deployed its enterprise learning solution, incorporating this new LMS, to employees across the entire organization, as well as to CEM customers and business partners. Based on an exhaustive analysis of previous research in the area and an extensive case study of the deployment and use of Saba Learning Enterprise™ at CEM Corporation, this article proposes a framework that places LMS in context with other categories of IS said to underpin learning in organizations. The framework also highlights the roles that LMS can play in the support and management of learning within knowledge-intensive business enterprises. Thus, it is hoped that this framework will deepen the IS field’s understanding of the contribution of LMS to learning within organizations.
Motivation for the Study

Significance of Learning in Organizations

The importance of facilitating and managing learning within organizations is well accepted. Zuboff (1988), for example, argues that learning, integration and communication are critical to leveraging employee knowledge; accordingly, she maintains that managers must switch from being drivers of people to being drivers of learning. Harvey and Denton (1999) identify several antecedents that help to explain the rise to prominence of organizational learning, viz.:
• The shift in the relative importance of factors of production away from capital towards labor, particularly in the case of knowledge workers.
• The increasing pace of change in the business environment.
• Wide acceptance of knowledge as a prime source of competitive advantage.
• The greater demands being placed on all businesses by customers.
• Increasing dissatisfaction among managers and employees with the traditional “command and control” management paradigm.
• The intensely competitive nature of global business.
(Butler, 2000; Galliers & Newell, 2001; Swan, Scarborough & Preston, 1999), and its popularity may have been heightened by glossing over complex and intangible aspects of human behavior (Scarborough & Swan, 2001).

New Potential Offered by Learning Management Systems

It is perhaps time to admit that neither the learning organization concept, which is people oriented and focuses on learning as a process, nor the knowledge management concept, which focuses on knowledge as a resource, can stand alone. These concepts complement each other, in that the learning process is of no value without an outcome, while knowledge is too intangible, dynamic and contextual to be managed as a tangible resource (Rowley, 2001). Rowley emphasizes that successful knowledge management needs to couple a concern for systems with an awareness of how organizations learn. Researchers believe that what is needed is to better manage the flow of information through and around the “bottlenecks” of personal attention and learning capacity (Brennan, Funke, & Andersen, 2001; Wagner, 2000) and to design systems where technology services and supports diverse learners and dissimilar learning contexts (McCombs, 2000). In response to these needs, learning management systems (LMS) evolved; accordingly, an increasing number of firms are using such technologies to adopt new approaches to learning within their organizations. This new learning management approach has been led primarily by practitioners and IT vendors; as it is a relatively new phenomenon, there is a dearth of empirical research in the area. Therefore, an important challenge for the IS field is to better understand LMS and to examine the role that these new systems play in organizations.
The Enterprise Learning Solution implemented by CEM Corporation consists of several components, one of which is an LMS called Saba Learning Enterprise™ (Figure 1). Much of the learning material is created and maintained by CEM employees using a range of off-the-shelf products that includes Microsoft Office, Adobe Acrobat and Saba Publisher, while the system’s learning content is stored in CEM’s own on-site storage repository. In addition, courseware is created and maintained directly by third parties, including KnowledgeNet and Netg, and is stored offsite in the storage repositories of both third-party organizations. Employees at CEM manage their own learning processes by accessing the LMS through the Internet. Using the Web, they can enrol in classroom courses; search for learning material; engage in online learning activities; and look at what development options are suitable for their role within the organization. Managers also use the system to administer the employee learning processes; for example, managers can examine the status of the learning activities of their employees; assign learning initiatives to their employees; and generate reports on learning activities. Administrators and training personnel use the system to supervise employee training; for example, they publish and manage learning content; manage a catalogue of courses; and create reports on learning activities. While much of the required reporting is provided by the LMS, administrators also use a third-party software application called Brio to generate more sophisticated reports. The Saba Learning Enterprise™ LMS has the capability of managing and tracking offline activities (e.g., books, “on the job” training, mentoring, classroom training) and online activities (e.g., video and audio, Webcasts, Web-based training, virtual classroom training, and rich media).

[Figure 1: CEM Corporation — Enterprise Learning Solution Components. The diagram shows employees, managers, and administrators accessing the Saba Learning Enterprise™ LMS through Web application interfaces. Learning content is authored and maintained with Saba Publisher, Microsoft Office, Adobe Acrobat, etc., and stored in the CEM Corporation learning content repository or in third-party courseware repositories (KnowledgeNet, Netg). Delivery channels include video and audio, rich media on demand (gforce), Webcasts (PlaceWare), Web-based training (KnowledgeNet, Harvard Business Online, Thomson), books, and ‘on the job’ training. User functionality covers enrolments, transcripts, competencies, development plans, curriculum paths, and the learning catalogue; managers additionally have team summaries, profiles, initiatives, and reports; administrators additionally have content management, catalogue management, and reports.]

Learning content for online activities may be accessed and delivered through the Web application interface either from CEM’s own learning content repository or from a third party’s storage repository. Certain post-training testing is built into the learning content itself, but additional pre-training and post-training testing may be invoked; this is provided by another third-party product called QuestionMark.
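The content-delivery flow described above can be thought of as a lookup: each course in the catalogue points at the repository (CEM’s own, or a third party’s) that hosts it, and the Web application interface delivers from whichever location is recorded. The sketch below illustrates this idea only; the catalogue entries, repository names, and URLs are invented assumptions, not details of the actual Saba Learning Enterprise™ system.

```python
# Hypothetical sketch of LMS content resolution. All names and URLs here are
# invented for illustration; they are not CEM's or Saba's actual identifiers.

CONTENT_REPOSITORIES = {
    "cem": "https://learning.cem.example/content",           # CEM's on-site repository
    "knowledgenet": "https://content.knowledgenet.example",  # third-party repository
    "netg": "https://content.netg.example",                  # third-party repository
}

COURSE_CATALOGUE = {
    "safety-101": {"repository": "cem", "path": "safety-101/index.html"},
    "router-basics": {"repository": "knowledgenet", "path": "router-basics/start"},
}

def content_url(course_id: str) -> str:
    """Resolve a catalogue entry to the repository URL it is delivered from."""
    entry = COURSE_CATALOGUE[course_id]
    base = CONTENT_REPOSITORIES[entry["repository"]]
    return f"{base}/{entry['path']}"
```

In this toy model, delivery through the Web application interface looks identical to the employee regardless of where the content physically resides, which is the property the case attributes to the enterprise learning solution.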
LMS: Toward a Better Understanding
Figure 2 summarizes the case study findings. In this diagram, an empirically tested framework is presented that places LMS in context within a wider typology of the key categories of IS that underpin learning in organizations. Furthermore, the framework describes the principal attributes of each category of IS and highlights the roles that LMS can play in the support and management of learning within an organization. The categories of IS have been segregated into two groups: those that support formal managed learning within the organization, and those that support informal or unmanaged learning. The IS category of LMS is highlighted within the framework to emphasize that this new breed of system is central to the strategic “people oriented” approach to managing learning that is now emerging in many organizations.
[Figure 2: Learning in Organizations — Framework Incorporating LMS. The framework groups the information systems that support learning in organizations as follows.]

Formal Managed Learning:

•	Learning Content Management Systems. Roles: provide a learning content repository; facilitate content authoring; enable delivery of content; provide content administration.
•	Learning Management Systems. Roles: support training administration (registration, scheduling, delivery, testing/tracking); support diverse learners within diverse learning contexts; facilitate competence development to meet particular business objectives (top down, bottom up); enable cohesive learning throughout the enterprise; encourage accountability for learning among employees; enable monitoring and analysis of the ‘learning condition’ of the organization; support training planning; increase learning in the organization; increase the productivity of training; evaluate individual learning performance; provide post-learning support; act as a signalling system for changes in the organization.
•	Learning/Training Environments. Roles: facilitate training and learning. Examples: ‘on the job’ training; mentoring; classroom-based instruction; synchronous computer-assisted instruction (video and audio, rich media on demand, Webcasts); interactive computer-based training (online training, multiple media, hypermedia); virtual learning environments.

Informal Unmanaged Learning:

•	Information Systems Practices that Facilitate Ad Hoc and Informal Learning. Roles: speed up knowledge acquisition; facilitate information interpretation; expand information distribution; facilitate organizational memory. Examples: email, video conferencing, groupware; decision support systems; management information systems; executive information systems; intranet/Internet systems; data warehouse systems; enterprise resource planning; customer relationship management.
•	Knowledge Management Systems. Roles: code and share best practices; create corporate knowledge directories and repositories; create knowledge networks. Examples: data mining; electronic bulletin boards; discussion forums; knowledge databases; expert systems; workflow systems.
manager within a certain limited time period. In this way, employees are encouraged to self-manage their own learning using the LMS: this has the added benefit of encouraging accountability for learning among employees (see also Hall, 2000). The use of competency models for assessing and developing employee capabilities forms the basis of a number of other evolving roles of the LMS. Through standardizing role-based competency requirements and development options, the LMS can enable more consistent and cohesive learning throughout the enterprise (see also Greenberg, 2002). The LMS manager pointed out that “the status of competencies within the organization may be reported on at a number of different levels, using the LMS.” This enables the monitoring and analysis of the “learning condition” of an organization (see also Nichani, 2001). Furthermore, a department manager described how “the LMS can support a manager in assessing an employee’s role-based competencies and having agreed development plans with that employee, a subsequent competency assessment can help that manager to determine the employee’s ‘learning performance’ in acquiring the new competencies, as per the development plan.” Thus, by reviewing progress between one competency assessment and the next, the evaluation of individual learning performance for an employee is facilitated. This may then form part of the individual’s overall performance evaluation.
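The competency mechanics described in this section (a role-based competency model, an assessment against it, and a comparison of successive assessments to gauge “learning performance”) can be sketched as follows. The role name, competency names, and numeric proficiency scale below are assumptions made for illustration; the case does not specify how Saba Learning Enterprise™ represents these internally.

```python
# Hypothetical sketch of role-based competency assessment. The role model and
# the 0-5 proficiency scale are invented, not taken from the Saba product.

ROLE_MODELS = {
    "field_engineer": {"networking": 4, "safety": 5, "customer_care": 3},
}

def competency_gaps(role: str, assessment: dict) -> dict:
    """Competencies where the assessed level falls short of the role model."""
    required = ROLE_MODELS[role]
    return {c: required[c] - assessment.get(c, 0)
            for c in required
            if assessment.get(c, 0) < required[c]}

def learning_performance(before: dict, after: dict, role: str) -> int:
    """Net gap closed between two assessments: a crude 'learning performance'."""
    gap_before = sum(competency_gaps(role, before).values())
    gap_after = sum(competency_gaps(role, after).values())
    return gap_before - gap_after
```

Reviewing progress between one assessment and the next, as the department manager describes, then amounts to comparing the gap totals before and after the agreed development plan has been followed.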
CEM Corporation: Overall Benefits of the Enterprise Learning Solution
The deployment of the enterprise learning solution has enabled CEM Corporation to address many of the challenges that it faced in 2000, prior to the system’s implementation. In particular, CEM has achieved the following:

•	CEM now has a single enterprise system that supports the administration of all training across the entire organization. From the point of view of the employees, the system provides a centralized mechanism that enables them to search for and to enrol in selected courses or training programs; it also offers guidance on recommended training paths and curricula.
•	The competency assessment facility enables employees to determine and rectify competency gaps, as well as providing management at CEM with a means of monitoring and managing overall employee competency levels within the organization.
•	The enterprise learning solution supports all training content, whatever its subject matter or form, and enables the management and control of access to this content using one system. This has the added advantage of highlighting duplication of training material in different parts of the organization and paves the way for streamlining the efforts of different training services within the company.
•	The flexibility and dynamic nature of the system allows CEM Corporation to introduce and quickly implement new training requirements across the organization in response to changing business needs or new technical advances.
•	The Saba Learning Enterprise™ LMS may help to attract and retain key personnel by offering them a unique opportunity to monitor and develop their competencies and to manage their careers within the organization.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
As outlined earlier, CEM Corporation is a hi-tech organization that operates in a very competitive and dynamic business environment. Managing learning and measuring learning outcomes are in themselves difficult tasks, but they are made even more problematic within complex learning domains, such as those that exist at CEM. It is unlikely that the LMS will enable the full management of all of the learning in the organization in a truly scientific way, though it will assist greatly in managing the diverse and extensive array of learning contexts and learning processes that must be supported. The system’s strengths lie in the new approach and attitude that it will encourage and inspire in the hearts and minds of individuals within the organization, as it enables learning that is highly visible, structured, and more accessible within the organization. This stimulation of the hearts and minds is a major contributing factor to learning and is known as “emotional quotient” (Goleman, 1996). Having deployed the enterprise learning solution, CEM Corporation now faces a number of key challenges. These are outlined next:
Control vs. Creativity: Managing the Delicate Balance
The findings of this case study demonstrate that CEM’s new LMS can play a vital role in increasing learning within and across the organization. This will be achieved by improving the control and management of employee competency levels, and also by empowering employees to be creative in managing their own learning and competency development. Thus, the key challenge for management at CEM is to increase their influence and control over training and learning within the organization, while at the same time increasing employee commitment to managing their ongoing self-development by taking responsibility for improving their knowledge of the business and building related competencies. These objectives are delicately balanced and must therefore be handled carefully. Too much control may de-motivate employees and discourage them from engaging with the system, but at the same time, enough control must be exerted to ensure that employees are developing competencies that support the day-to-day operational requirements of the organization, as well as being in sync with the overall goals and objectives of the company.
Exploiting the Benefits of the LMS: Incorporating all Training & Learning
Not all training at CEM has yet been incorporated into the LMS: where training is initiated and completed outside of the LMS and is not recorded by it, the system cannot track that training or manage the associated learning outcomes. It is understandable that it will take some time to incorporate every training program for all employees onto the system, but it is critical that this is achieved as quickly and efficiently as possible, to ensure support for the system and its ongoing use across the entire organization.
Drawing Up Competency Models for All Employees
Role-based competency models have not yet been drawn up for all roles within the organization. As the LMS Manager pointed out, “there is difficulty in having accurate competency models for all roles when there is such a vast array of diverse technical positions.” He added that “as you drill down, you find that there are a lot of specialist functional competencies and you get into the ROI question…because there is such a large investment in time and effort involved in devising competency models for all technical roles, it has to be driven by the local business needs”. Competency assessments are instrumental to determining if positive learning outcomes have been achieved and they will also demonstrate if the organization is obtaining a return on its investment in implementing and deploying the LMS. Furthermore, competency assessments offer management at CEM an opportunity to identify and rectify gaps or overlaps in competency levels as well as providing a means of assessing and managing overall competency levels within the organization. CEM Corporation is now faced with the daunting task of drawing up and maintaining competency models for the vast array of role types of its 20,000-plus employees, many of whom work in dynamic and highly technical areas.
Managing the Competency Assessment Process
Even where competency models are available, the study revealed that the process of self-management of career development has, for the most part, not yet been taken up within the organization. Moreover, many employees, and indeed managers, have not yet engaged with the competency assessment process. A structured plan or roadmap needs to be formulated in conjunction with local business needs for the formal migration of all employees onto the system for competency assessment and competency development planning to take place.
Fully Mobilizing the LMS within the Organization
One manager observed that “many employees still feel that the system is primarily designed for course registration and the other elements of the system may need to be emphasized more internally.” Another user of the LMS argued that “although the initial rollout of the LMS seems to have been good and although there is a growing awareness of the system, people still have not got to grips with using it.” The challenge facing CEM Corporation is to raise internal awareness of the functions and capabilities that are now provided by the LMS, and to educate the employees on how these functions and features operate. This education program needs to address cultural issues, as well as dealing with the fears and anxieties that employees may have in relation to the use of the system. This finding was supported by one manager who noted that “some employees may fear that if they use the system to log their competencies, their career may be negatively affected.”
CEM Corporation needs to encourage the active participation of senior management in the mobilization of the LMS and perhaps consider the appointment of an overall champion for the initiative at senior management level. This chief learning officer4 could promote the utilization of the system at a senior level within the business units and ensure that any synergies that exist between them are exploited. Finally, a number of managers felt that CEM needs to publicize and promote the benefits of engaging with the LMS and find ways of formalizing and integrating this novel strategic learning management system with extant business processes and work practices.
Alavi, M., & Leidner, D. (1999). Knowledge management systems: Issues, challenges and benefits. Communications of the Association for Information Systems, 1(2).
Alvesson, M., & Kärreman, D. (2001). Odd couple: Making sense of the curious concept of knowledge management. Journal of Management Studies, 38(7), 995-1018.
Barron, T. (2000). The LMS guess. Learning Circuits, American Society for Training and Development. Online: http://www.learningcircuits.org/2000/apr2000/barron.html
Borghoff, U.M., & Pareschi, R. (1999). Information Technology for Knowledge Management. Heidelberg: Springer-Verlag.
Brennan, M., Funke, S., & Andersen, C. (2001). The learning content management system: A new e-learning market segment emerges. IDC White Paper. Online: http://www.lcmscouncil.org/resources.html
Butler, T. (2000, August 10-13). Making sense of knowledge: A constructivist viewpoint. Proceedings of the Americas Conference on Information Systems, Long Beach, CA (vol. II, pp. 1462-1467).
Butler, T. (2003). From data to knowledge and back again: Understanding the limitations of KMS. Knowledge and Process Management: The Journal of Corporate Transformation, 10(4), 144-155.
Chait, L.P. (1999). Creating a successful knowledge management system. Journal of Business Strategy, 20(2), 23-26.
Easterby-Smith, M., Crossan, M., & Nicolini, D. (2000). Organizational learning: Debates past, present and future. Journal of Management Studies, 37(6), 783-796.
Galliers, R., & Newell, S. (2001, June 27-29). Back to the future: From knowledge management to data management. Global Co-Operation in the New Millennium, 9th European Conference on Information Systems, Bled, Slovenia (pp. 609-615).
Garavelli, A.C., Gorgoglione, M., & Scozzi, B. (2002). Managing knowledge transfer by knowledge technologies. Technovation, 22, 269-279.
Goleman, D. (1996). Emotional Intelligence. London: Bloomsbury Publishing.
Greenberg, L. (2002). LMS and LCMS: What’s the difference? Learning Circuits, American Society for Training and Development. Online: http://www.learningcircuits.org/2002/dec2002/greenberg.htm
Hall, B. (n.d.). Learning Management Systems 2001. CA: brandon-hall.com.
Harvey, C., & Denton, J. (1999). To come of age: Antecedents of organizational learning. Journal of Management Studies, 37(7), 897-918.
Hendriks, P.H. (2001). Many rivers to cross: From ICT to knowledge management systems. Journal of Information Technology, 16(2), 57-72.
Huber, G.P. (2001). Transfer of knowledge in knowledge management systems: Unexplored issues and suggestions. European Journal of Information Systems, 10(2), 72-79.
Marshall, C., & Rossman, G.B. (1989). Designing Qualitative Research. CA: Sage.
McCombs, B.L. (2000). Assessing the role of educational technology in the teaching and learning process: A learner centered perspective. The Secretary’s Conference on Educational Technology, US Department of Education. Online: http://www.ed.gov/rschstat/eval/tech/techconf00/mccombs_paper.html
McDermott, R. (1999). Why information technology inspired, but cannot deliver knowledge management. California Management Review, 41(4), 103-117.
Nichani, M. (2001). LCMS = LMS + CMS [RLOs]. Online: http://www.elearningpost.com/features/archives/001022.asp
Rowley, J. (2001). Knowledge management in pursuit of learning: The learning with knowledge cycle. Journal of Information Science, 27(4), 227-237.
Scarbrough, H., & Swan, J. (2001). Explaining the diffusion of knowledge management: The role of fashion. British Journal of Management, 12, 3-12.
Schultze, U., & Boland, R.J. (2000). Knowledge management technology and the reproduction of knowledge work practices. Journal of Strategic Information Systems, 9(2-3), 193-212.
Storey, J., & Barnett, E. (2000). Knowledge management initiatives: Learning from failure. Journal of Knowledge Management, 4, 145-156.
Sutton, D.C. (2001). What is knowledge and can it be managed? European Journal of Information Systems, 10(2), 80-88.
Swan, J., Scarbrough, H., & Preston, J. (1999, June 23-25). Knowledge management — The next fad to forget people. In J. Pries-Heje et al. (Eds.), Proceedings of the 7th European Conference on Information Systems, Copenhagen, Denmark (vol. III, pp. 668-678).
Wagner, E.D. (2000). E-learning: Where cognitive strategies, knowledge management, and information technology converge. Learning without Limits (vol. 3). CA: Informania Inc. Online: http://www.learnativity.com/download/LwoL3.pdf
Zeiberg, C. (2001). Ten steps to successfully selecting a learning management system. In L. Kent, M. Flanagan, & C. Hedrick (Eds.). Online: http://www.lguide.com/reports
Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books.
1	http://www.saba.com/english/customers/index.htm
2	For reasons of confidentiality, the organization on which this case study is based cannot be identified; it will be referred to as CEM Corporation throughout the document.
3	Bold text within this section indicates that this is a role fulfilled by the learning management system.
4	Akin to a chief information officer (CIO) or chief knowledge officer (CKO).
APPENDIX A Saba Software Inc. Overview
Founded in 1997, Saba Software Inc. (quoted on the NASDAQ stock exchange as SABA) is a global company headquartered in Redwood Shores, California, and a leading provider of Human Capital Development and Management (HCDM) solutions. The Saba vision is to make it possible for every enterprise to manage its human capital by bringing together learning, performance, content and resource management in a holistic, seamless way. To satisfy this vision, Saba offers two key product sets, namely, its “Enterprise Learning Suite” and “Saba Performance”. “Saba Learning” is an Internet-based learning management system within the Enterprise Learning Suite that automates many of the learning processes for both learners and learning providers. Table A(1) lists some of Saba Software’s major customers.
Table A(1): Saba Software Incorporated — Major Customers

High Tech: Cisco Systems, Cypress, EMC2, Xilinx, i2 Technologies, VERITAS Software
Telecommunications: ALCTEL, Telecom Italia, CENTEC, Lucent Technologies
Professional Services: Kendle International, Deloitte & Touche, EDS, BearingPoint (formerly KPMG)
Financial and Insurance Services: ABN Amro, Royal & Sun Alliance, Scotiabank, Principal Financial Group, BPM, Standard Chartered, Wells Fargo
Government: United States Department of the Army, Distributed Learning Services, LearnDirect Scotland
Life Science: Aventis, Novartis, Procter & Gamble, Medtronic
Automotive: Ford Motor Company, General Motors, DaimlerChrysler
Transportation: Continental Airlines, BAA
Energy: Duke Energy, Energy Australia
Manufacturing and Distribution: Caterpillar, Cemex, Grainger
Consumer Goods and Retail Distribution: Best Buy, Kinkos
APPENDIX B

1.	What are the roles of the LMS in managing learning within the organization?

1.1	Does the LMS support training administration?
- registration
- scheduling
- delivery
- testing
- tracking/reporting of individual learning

1.2	Does the LMS support diverse learners within diverse learning contexts?
- large number of learners
- diverse learning contexts
- online and offline learning

1.3	Does the LMS facilitate competence development to meet particular business objectives?
- specification of the skills needed to fulfil a particular objective
- skills assessment to establish the gap in learning
- recommended learning to fill the identified gap in learning

1.4	Does the LMS enable cohesive learning throughout the enterprise?
- learning development plan for the organization
- learning plans for individuals in sync with the overall learning plan

1.5	Does the LMS encourage accountability for learning among employees?
- self-service learning
- self-planning and self-assessment of career development

1.6	Does the LMS enable monitoring and analysis of the “learning condition” within the organization?
- overall picture of competencies within the organization
- overall picture of learning achieved in the organization
- overall picture of learning required within the organization

1.7	What are the other key roles/attributes of the LMS?
- provision of any content authoring
- provision of any content management
- provision of any knowledge management
- synchronization with the HR system
- provision of post-learning support
- adherence to learning content standards
- integration of incompatible systems for learning management
- support for a large range of third-party courseware
- other
2.	What is the relationship between the LMS and other IS that support learning?

2.1	What types of learning/training environments are used in CEM, and how does the LMS incorporate them?
- classroom (onsite, offsite)
- computer-based instruction (video and synchronous training programs)
- computer-based training (interactive online training; multiple media; hypermedia)
- virtual learning environments (instructor led)
- other

2.2	What knowledge management systems are used in CEM, and how does the LMS incorporate them?
- coding and sharing of best practices (e.g., knowledge databases)
- corporate knowledge directories and repositories (e.g., data mining, expert systems)
- creation of knowledge networks (e.g., electronic bulletin boards, discussion forums)
- other

2.3	What content management systems are used in CEM, and what functionality do they provide to the LMS?
- provide learning object repository
- facilitate content authoring
- enable delivery of content
- provide content administration
- other

2.4	Is there any relationship between the LMS and other IS that facilitate ad hoc or informal learning?
- e-mail
- video conferencing
- groupware
- decision support systems
- management information systems
- executive information systems
- Internet/intranet systems
- data warehousing systems
- enterprise resource planning systems
- customer relationship management systems
- other
UMaint is the maintenance department of a large public university (BigU1) in the northwest of the United States. Currently, about 18,000 students are enrolled at BigU, a large proportion of whom reside on-campus. This makes BigU’s main campus one of the largest residential campuses in the Pacific Northwest. In addition to the student body, about 7,000 faculty and staff work on campus. UMaint’s employees are responsible for the maintenance of BigU, the campus area of which encompasses more than 400 buildings and over 1,930 acres of land. In a typical year, UMaint handles approximately 60,000 service calls, and schedules and completes 70,000 preventive maintenance projects for 69,000 pieces of equipment. The primary departments of UMaint are Architectural, Engineering, and Construction Services, Utility Services, Custodial Services, and Maintenance Services. These departments are supported by UMaint’s Administrative Services. Architectural, Engineering, and Construction Services are involved in all new construction projects as well as all modifications to existing facilities. The Utility Services department operates the university’s power plant and is responsible for providing utilities such as steam, electricity, and water. Custodial Services, UMaint’s largest department, handles the custodial work for all buildings and public areas on campus. Maintenance Services is divided into environmental operations, life safety and electronics, plant maintenance and repair, and operations, and is responsible for the upkeep of the university’s buildings and facilities. The Administrative Services department encompasses units such as operational accounting, personnel and payroll, storeroom, plant services (including motor pool, heavy equipment, trucking and waste, and incinerator operations), and management information systems. This department handles all supporting activities needed to coordinate and facilitate UMaint’s primary activities. 
Overall, more than 450 employees work for UMaint in order to support the university’s operations. Please refer to Appendix A for the organizational chart of UMaint.
selection process (West & Shields, 1998). Specifically, Jack (the director of UMaint) strongly believed that “the selection of a CMMS will affect the way [UMaint does] business for the next 10 years.” In addition to serving UMaint, the CMMS was supposed to serve BigU’s housing department and Central Stores. University Housing would use the system to support all maintenance-related aspects of its operations, as well as to manage its warehouse, and Central Stores would use the system for all procurement-related activities. Several other departments played a role in the process as well; for example, the university’s Budget Office allocated the funds for the project and was hence involved in the purchasing process. As the total amount budgeted for the purchase of the CMMS exceeded $2,500, the acquisition had to be made through the university’s Purchasing Department in the form of a bidding process; furthermore, due to the administrative, rather than academic or research-related, nature of the project and the fact that BigU is a state university, unsuccessful bidders had the option of filing a protest with the state’s Department of Information Services (DIS) after the announcement of the final decision. The DIS had the authority to review and override any decisions made by UMaint. Furthermore, acting as an outside consultant (Piturro, 1999), the university’s IS department provided guidance to UMaint’s IS department during the selection process. Please refer to Appendix D for a diagram displaying how UMaint fits into the university’s structure; Appendix E shows the major stakeholders of the proposed system. Given that upper administration was well aware of the large impact the system would have, everyone agreed to implement and routinize the “best possible alternative” for the proposed system. Due to limited resources, developing an integrated system in-house was not seen to be feasible. Therefore, it was decided to purchase a system from an outside vendor.
Even though highly customized solutions can be problematic in the long run (Ragowsky & Stern, 1995), both off-the-shelf packages and solutions specifically designed for UMaint by interested vendors were considered.
A Case of Information Systems Pre-Implementation Failure
Since the IS team was in charge of the CMMS project, meetings with line employees were set up in order to provide them with information and solicit feedback. Nevertheless, many employees were unaware of the proposed system’s impact on their work. Generally, they saw “other” departments as being impacted to a greater extent, and did not anticipate their own day-to-day responsibilities to change much. Many employees regarded the CMMS as a tool for the higher echelons of administration. This led to a lack of interest in participating in the decision process, which thereafter translated into the perception that their departments had been left out of the process. For example, an employee mentioned: …I don’t know how it will really affect me, unless they would get a couple of modules that would really help out back here. It’s my understanding that they’re just doing administration modules…. I do know that administration, at the other end of this building, had a lot of input…. I don’t know that it will probably help us at this point. (emphasis added) Antagonism between departments added to such perceptions. As the criteria established for the vendor selection process were viewed as relatively inflexible and determined a priori, many employees saw their area as “covered,” and did not see the point of providing additional input into the decision process; many hoped that the IS department, consisting of acknowledged experts, would select the right system for them. One employee, for example, stated that “our info tech group… they do have the knowledge to make something run right and I don’t.” The information systems department was well aware of its power to influence the decisions of non-IT employees in UMaint, a power arising from the myths of expertise and magic that represent systems professionals as “high priests” (Hirschheim & Klein, 1989; Hirschheim & Newman, 1991).
However, it consciously tried to limit the extent of influence IS professionals would have on the decision process:

…we didn’t want to influence anyone’s decision. … this isn’t supposed to be about us … this is about their achievement, this is their product in the end, we’re just charged with implementing it. They’re the ones who gonna have to use it every day and live with it. We wanted it to be about them, and that’s why we’ve been so big about having their involvement….

Nevertheless, being in such an influential position, the IS manager recognized the benefits of being able to influence people’s decisions as he tried to choose the most adequate system for the organization:

…in many ways, we put our hands on the wheel. Because we basically had to come to a decision that works. Everybody, they’re just gonna do whatever we say. So we better make sure that what we’re saying is really what is the greatest good for everyone … and that’s one thing that has been very confident for us that we have always been striving for the absolute biggest bang for the buck…. Whatever we could get. The most we could do.
Finally, the Executive Committee made a decision based on a number of criteria, which included the weighted scores, the reference calls conducted by evaluation team members, and, finally, consideration of the budget. Before the final decision was made, an informal vote (that would not influence the final decision) was held. Interestingly, the selection made was not consistent with the results of that vote. The vendor that ranked first in the informal vote was not considered for selection, since its product did not meet the budget criteria. The vendor that ranked second had the newest technology and offered a highly customizable product; however, it was regarded as too risky, as universities were an entirely new market for that vendor, and it was thus dropped from the selection. The vendor finally selected had ranked third in the informal vote. Even though this vendor scored very high in terms of meeting the requirements, functions, and features, the decision found only partial support among organizational members, including members of the IS department. The IS group was not at all convinced that the product had the necessary technological and functional capabilities. In a vendor review demonstration, an IS department member remarked that the vendor’s “technology is severely outdated and does not offer any customization for the user.” Other departments were also dissatisfied with the selection, with a member of the accounting team stating, “…just the one that got chosen… we wished that it hadn’t.” The storeroom, a very powerful unit of UMaint’s administrative services department, shared the same view: “we were looking at [Vendor A] and [Vendor B]. These were what we thought were the two best, but other factors came into play in the decision making….”
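The interplay between a weighted requirement ranking and a hard budget filter, as described in the selection process above, can be sketched in a few lines. All vendor names, weights, scores, and costs below are hypothetical illustrations, not UMaint’s actual evaluation data:

```python
# Hypothetical sketch of a weighted-score vendor evaluation with a hard
# budget filter. All figures are invented for illustration.

CRITERIA_WEIGHTS = {"functionality": 0.4, "technology": 0.3, "support": 0.3}

vendors = [
    # name, per-criterion scores (0-10), quoted cost in dollars
    {"name": "Vendor A", "scores": {"functionality": 9, "technology": 8, "support": 9}, "cost": 950_000},
    {"name": "Vendor B", "scores": {"functionality": 8, "technology": 9, "support": 7}, "cost": 700_000},
    {"name": "Vendor C", "scores": {"functionality": 8, "technology": 5, "support": 8}, "cost": 600_000},
]

BUDGET = 750_000

def weighted_score(vendor):
    """Sum of criterion scores, each multiplied by its weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in vendor["scores"].items())

# Rank by weighted score, but drop any vendor whose quote exceeds the budget.
affordable = [v for v in vendors if v["cost"] <= BUDGET]
ranking = sorted(affordable, key=weighted_score, reverse=True)

for v in ranking:
    print(v["name"], round(weighted_score(v), 2))
```

As in the case, the top scorer overall (Vendor A in this sketch) can be eliminated by the budget constraint before the weighted ranking is ever consulted, so the final choice need not match either the scores or an informal vote.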
would most likely continue well into the following year, funding for the project was no longer automatically valid. UMaint would have to reapply for funding, and in light of the state’s tight budget situation, securing funds for such a project a second time seemed highly unlikely. As of 2003, no computerized maintenance management system had been implemented at UMaint. Frank (the IS manager), reflecting on the sequence of events, remained perplexed about why the initiative turned out to be a disaster despite all his and his colleagues’ efforts to consciously manage stakeholder input and thus avoid failure. He therefore approached an IS academic to find out why the project had failed, how UMaint’s current project could be “salvaged”, and how similar problems could be avoided in future projects.
Ciotti, V. (1988). The request for proposal: Is it just a paper chase? Healthcare Financial Management, 42(6), 48-50.
Gustin, C. M., Daugherty, P. J., & Ellinger, A. E. (1997). Supplier selection decisions in systems/software purchases. International Journal of Purchasing and Materials, 33(4), 41-46.
Hirschheim, R., & Klein, H. K. (1989). Four paradigms of information systems development. Communications of the ACM, 32(10), 1199-1216.
Hirschheim, R., & Newman, M. (1991). Symbolism and information systems development: Myth, metaphor, and magic. Information Systems Research, 2(1), 29-62.
Howcroft, D., & Light, B. (2002). A study of user involvement in packaged software selection. In L. Applegate, R. Galliers, & J. I. DeGross (Eds.), Proceedings of the 23rd International Conference on Information Systems, Barcelona, Spain, December 15-18 (pp. 69-77).
Hwang, M. I., & Thorn, R. G. (1999). The effect of user engagement on system success: A meta-analytical integration of research findings. Information & Management, 35(4), 229-239.
Land, F. F. (1982). Tutorial. The Computer Journal, 25(2), 283-285.
Lucas Jr., H. C., & Spitler, V. (2000). Implementation in a world of workstations and networks. Information & Management, 38(2), 119-128.
Managestar. (n.d.). manageStar — Products. Retrieved May 3, 2004, from http://www.managestar.com/facility_mgmt.html
Piturro, M. (1999). How midsize companies are buying ERP. Journal of Accountancy, 188(3), 41-48.
Ragowsky, A., & Stern, M. (1995). How to select application software. Journal of Systems Management, 46(5), 50-54.
Raouf, A., Ali, Z., & Duffuaa, S. O. (1993). Evaluating a computerized maintenance management system. International Journal of Operations & Production Management, 13(3), 38-48.
Schwab, S. F., & Kallman, E. A. (1991). The software selection process can’t always go by the book. Journal of Systems Management, 42(5), 9-17.
Weber, C. A., Current, J. R., & Benton, W. C. (1991). Vendor selection criteria and methods. European Journal of Operational Research, 50, 2-18.
West, R., & Shields, M. (1998). Strategic software selection. Management Accounting, August, 3-7.
1. Names of the university and its divisions have been replaced by pseudonyms. Further, the identities of the employees of the university and other stakeholders of the system have been disguised to ensure confidentiality. For the sake of authenticity, the quotations have not been edited.
2. I.e., technical or infrastructure-related requirements such as scalability or technical potential for the future.
APPENDIX B Typical Features of a CMMS

Work Order Management
• Receive and route web-based work requests.
• Obtain approvals as part of the workflow if necessary.
• Receive alerts on critical issues in your workflow.
• View a comprehensive list of work in process: work plans, schedules, costs, labor, materials, assets and attached documents.
• View overdue work, or sort work orders on a place, space, asset or engineer basis.
• Set up predetermined workflow processes or create them on the fly, assigning work orders to available personnel.
• Link related work orders.
• Send clarifications that are tracked in the message history.
• Attach documents, including drawings, specs, and more.
Asset Management
• Click on an asset to launch a work order.
• Maintain all critical asset information.
• Do preventive maintenance.
• Track and get alerts on asset contracts or leases.
• Track assets, costs, histories and failures.
• Link assets to a work order, place, space, project or contract.
• Drill down in your organization to locate assets.
• Get reminders and alerts on any element of the asset.
• Attach documents, such as diagrams, to the asset.

Inventory Management
• Receive minimum/maximum alerts on inventory levels.
• Allow employees to request or order products.
• Add any products to a service delivery, e.g., a new computer for a new employee set-up.
• Kick off automatic purchase orders to pre-approved vendors.
• Track Bill of Materials, SKU, price, stock, description, vendor and transaction information.
• Connect inventory with specific budgets to track exact costs.

Project Management
• Kick off new projects with a few clicks.
• Monitor project schedules and milestones on real-time interactive Gantt charts.
• Track budgeted vs. actual spending on a project-by-project basis.
• Manage resources by viewing project analyses.
• Issue service requests within projects and build into Gantt charts.
• Draw relationships between projects, people, places, things, contracts, POs, inventory, vendors and more.
• Attach and share documents.

Procurement Management
• Route POs and invoices automatically.
• Receive approvals and responses automatically.
• Set amounts that require approval; workflow automatically obtains it.
• Integrate with financials.
• Automate all procurement of services, independent contractors, vendors, etc.
• Create and broadcast requests for proposals.
• Track and compare all outbound and inbound proposals and bids.
• Negotiate online with vendors.
• Attach proposals to people, places or things as well as projects and contracts.
• Send proposals out in workflow for review and approval.
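The minimum/maximum inventory alerting listed above amounts to a simple reorder-point check against a stocking band. A minimal sketch, with item names, thresholds, and stock levels invented for illustration (no specific CMMS product is implied):

```python
# Minimal sketch of min/max inventory alerting as described in Appendix B.
# Items, thresholds, and stock figures are illustrative assumptions only.

inventory = {
    # item: (on_hand, minimum, maximum)
    "air filter":    (4, 10, 50),
    "bearing":       (25, 5, 40),
    "hydraulic oil": (60, 10, 50),
}

def check_levels(inv):
    """Return (item, alert) pairs for any stock outside its min/max band."""
    alerts = []
    for item, (on_hand, lo, hi) in inv.items():
        if on_hand < lo:
            # In a real CMMS this would kick off an automatic purchase order.
            alerts.append((item, "below minimum: trigger purchase order"))
        elif on_hand > hi:
            alerts.append((item, "above maximum: review stocking level"))
    return alerts

for item, alert in check_levels(inventory):
    print(f"{item}: {alert}")
```

The "kick off automatic purchase orders" feature in the list above is then just the action attached to the below-minimum branch.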
APPENDIX G Contents of the Proposal

1. Proposal Contents
The sections of the vendor proposal should be as follows:
Section 1: Transmittal Letter (signed paper copy must also be included)
Section 2: Administrative Requirements (see 3)
Section 3: CMMS Requirements, Functions and Features (responses in Attachment x)
Section 4: Technical Requirements and Capabilities (responses in Attachment xx)
Section 5: Vendor Qualifications (responses in Attachment xxx)
Section 6: Cost Proposals (responses in Attachment xxxx)
All vendors must use the RFP templates and format provided on the CMMS website.
2. Section 1 - Transmittal Letter
The transmittal letter must be on the vendor's letterhead and signed by a person authorized to make obligations committing the vendor to the proposal. Contact information for the primary contact for this proposal must also be included.

3. Section 2 - Administrative Requirements
This section of the proposal must include the following information:
a) A brief (no more than three pages) executive summary of the vendor's proposal including:
   1. A high-level overview of your product and the distinguishing characteristics of your proposal.
   2. The number of universities using the system as proposed. "System" is defined as the vendor's current version of the software with all the functionality proposed in the response to this RFP.
   3. A description of how closely the proposed system matches UMaint’s needs.
   4. A discussion of what attributes of your proposal offer BigU a distinguishing long-term vendor relationship.
b) A specific statement of commitment to provide local installation for the system.
c) A specific statement warranting that, for a period of five years after acceptance of the first application software, all application software will continue to be compatible with the selected hardware and system software, and will be supported by the vendor. This does not include the database software and hardware to be selected by BigU.
d) For proposal certification, the vendor must certify in writing:
   1. That all vendor proposal terms, including prices, will remain in effect for a minimum of 180 days after the Proposal Due Date.
   2. That the proposed software is currently marketed and sold under the vendor’s most current release only (or will be added within six months).
   3. That all proposed capabilities can be demonstrated by the vendor in their most current release of the system.
   4. That all proposed operational software has been in a production environment at three non-vendor-owned customer sites for a period of 180 days prior to the Proposal Due Date (except in cases where custom or new functionality is designed for BigU).
   5. Acceptance of the State’s Department of Information Services (DIS) Terms and Conditions.
4. Section 3 - CMMS Requirements, Functions, and Features
Section 3 responses of this RFP contain mandatory and desired features for specific CMMS application modules. This section requires coded responses only.

5. Section 4 - Technical Requirements and Capabilities
Section 4 responses of this RFP ask the vendor to identify the computer system software environment for the CMMS, and to provide additional technical information regarding system interfaces, hardware requirements, and application flexibility/enhancements. Answers to these questions should be provided with both narrative and coded responses.

6. Section 5 - Vendor Qualifications
Section 5 responses of this RFP require the vendor to provide information about the vendor organization and customer base, and to propose support capabilities in terms of design, installation, data conversion, training, and maintenance of the proposed system. This information is to be provided in narrative format.

7. Section 6 - Cost Proposal Instructions
Section 6 of the RFP contains the formats and instructions for completing the cost proposal. Section 6.2 (Instructions) requests five-year summary costing for:
a) Application software (including upgrades and customization).
b) Software maintenance.
c) Services (installation, training, interfaces, project management, conversions, etc.).
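The coded responses required in Section 3 are typically tallied before any weighting is applied. The sketch below is a plausible illustration of such a tally; the response codes ("F" = fully supported, "M" = supported with modification, "N" = not supported) and the sample features are assumptions, since the case does not reproduce the actual coding scheme:

```python
# Hypothetical tally of coded RFP responses for mandatory and desired
# features. Codes and point values are invented for illustration.

CODE_POINTS = {"F": 2, "M": 1, "N": 0}

responses = {
    # feature: (requirement kind, vendor's coded response)
    "work order routing":     ("mandatory", "F"),
    "preventive maintenance": ("mandatory", "M"),
    "Gantt charts":           ("desired", "N"),
    "inventory alerts":       ("desired", "F"),
}

def tally(resp):
    """Disqualify a vendor that fails any mandatory feature;
    otherwise return the total points across all features."""
    if any(kind == "mandatory" and code == "N" for kind, code in resp.values()):
        return None  # failed a mandatory requirement
    return sum(CODE_POINTS[code] for _kind, code in resp.values())

print(tally(responses))  # prints 5
```

Separating mandatory pass/fail from the desired-feature score is one common way evaluation teams turn coded responses into the weighted scores used later in the selection.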
NASA was founded in 1958 to explore space. A year earlier, the Soviet Union had beaten the United States into space by launching Sputnik, the first satellite. In the United States, this was seen as an embarrassment, and the need for a space program was pressing. Only a few short months after the formation of NASA, the first American space missions were launched. In 1969, NASA’s Apollo 11 mission put the first humans on the moon’s surface. NASA’s space program has changed the way mankind views the Earth and helped bring about many important scientific findings that have resulted in numerous “spin-offs” in science, technology and commerce. After many other successful manned space flights, the Space Shuttle Program was initiated. The goal was to develop a reusable vehicle for frequent access to and from space. After nine years of development, the first shuttle, Columbia, was launched from Kennedy Space Center in 1981. Columbia was a remarkable success, though the promise of frequent access to space has never been realized. Today, NASA is renowned for its discoveries and explorations in space, both manned and unmanned. NASA is truly a unique governmental agency with the lofty mission shown in Figure 1. In 1986, the world was shocked and saddened as the Challenger exploded during takeoff. Seven astronauts died, among them the first civilian to ride the shuttle, Christa McAuliffe, a schoolteacher. The Rogers Commission, formed by an executive order from President Reagan, found that design flaws contributed to the Challenger’s explosion. During the investigation, it was revealed that NASA engineers and management knew about the problems with the O-rings and failed to act on the information that was available. The report was also critical of safety procedures and Space Shuttle Program management. Seventeen years later, on February 1, 2003, the space shuttle Columbia was lost.
Figure 1: NASA’s Vision & Mission

NASA Vision: “To improve life here, To extend life to there, To find life beyond.”

NASA Mission: “To understand and protect our home planet, To explore the universe and search for life, To inspire the next generation of explorers … as only NASA can.”

In the control room, contact was lost with the Columbia at 9:00 a.m. Minutes later, the Houston mission control room was locked down, as the ground support team realized a disaster was occurring. By 2:05, President Bush addressed the public: “Columbia’s lost; there are no survivors.” The Columbia had disintegrated when it reentered the Earth’s atmosphere. Seven astronauts were dead, including the first Israeli astronaut and an Indian-born astronaut who had immigrated to the United States. The sadness of the national disaster deepened as pieces of the Columbia shuttle began to turn up in Texas and Louisiana. A NASA internal investigation was conducted. In the wake of 9/11, theories of a terrorist attack surfaced and were quickly dispelled. The theory that a piece of foam may have damaged the wing was proposed. It too was quickly dismissed, as it was thought impossible for the foam to cause such a catastrophic failure. As the pieces of Columbia were collected and the shuttle was reassembled, it was determined that a large piece of insulation foam had broken off during launch and hit Columbia’s left wing at a velocity of 500
mph. On reentry, the heat was too great for the damaged shuttle, causing it to disintegrate. Disturbingly, the foam strike had been discussed at NASA internal mission meetings, but only in passing. An external review board of 10 people, led by retired Admiral Hal Gehman, was appointed to investigate what had happened at NASA and what could have prevented the Columbia tragedy. In August of 2003, after conducting over 230 interviews with NASA personnel, from mechanics to astronauts, the board summarized its findings in a 243-page report. Excerpts from the Columbia Accident Investigation Board (CAIB) report are highly critical of NASA management and the culture of the agency:

“Based on NASA’s history of ignoring external recommendations, or making improvements that atrophy with time, the Board has no confidence that the space shuttle can be safely operated for more than a few years based solely on renewed postaccident vigilance.”

“Unless NASA takes strong action to change its management culture to enhance safety margins in shuttle operations, we have no confidence that other ‘corrective actions’ will improve the safety of shuttle operations. The changes we recommend will be difficult to accomplish — and they will be internally resisted.” (CAIB Report, Vol. 1, p. 13)

Changes that NASA has made include moving more than 12 people out of upper management into different positions. At the onset of the disaster, several critics of NASA wanted to hold someone accountable for the management mistakes that led up to the disaster, including Sean O’Keefe, NASA’s Administrator. NASA now has the task of reviewing the external investigation report and deciding how to act on it.
both Challenger and Columbia investigations), docking with the Russian Mir Space Station and the return to space of astronaut, now Senator, John Glenn. In 1986, after the Challenger exploded, an investigation of the disaster by the Rogers Commission identified the technical deficiency that led to the explosion: the O-rings between portions of the right solid rocket motor failed. Surprisingly, engineers were aware of the potential problem with the O-rings, though the decision to launch was still made. There were systems in place to track anomalies. However, as stated in the Rogers Commission report:

“NASA’s system for tracking anomalies for Flight Readiness Reviews failed in that, despite a history of persistent O-ring erosion and blow-by, flight was still permitted. It failed again in the strange sequence of six consecutive launch constraint waivers prior to 51-L, permitting it to fly without any record of a waiver, or even of an explicit constraint. Tracking and continuing only anomalies that are ‘outside the data base’ of prior flight allowed major problems to be removed from and lost by the reporting system.” (Rogers Commission, p. 148)

The Rogers Commission also found several organizational problems that led to the decision to launch Challenger under dangerous conditions. A tremendous amount of planning takes place prior to a shuttle launch. In addition to planning the payload and planning for the scientific mission, several other activities take place. A delay in launching a shuttle disrupts the schedule for all the missions that follow. There are many stakeholders, including astronauts, NASA engineers and administrators, as well as engineers and managers from subcontracting organizations. During the Rogers Commission investigation, the structure of the organization was reported to be complex and non-inclusive of engineering and subcontractor viewpoints. An example of this is that Rockwell Inc.
and Thiokol expressed reservations at the flight readiness meeting before the decision to launch. The report indicates in several instances that Marshall Space Center management pressured and influenced subcontractors to approve the launch decision, even though their support for launch was ambiguous at best. The major findings of the Rogers Commission, with excerpts from their final report, were:

Schedule Pressure / Budget Pressure
“The pressure on NASA to achieve planned flight rates was so pervasive that it undoubtedly adversely affected attitudes regarding safety.” “Operating pressures were causing an increase in unsafe practices.” “Schedule pressures played an active role in the decision to launch the Challenger. Most emphasis in the program was placed on cost-cutting and meeting schedule requirements, rather than flight safety.”
Communication and Management Problems
The Commission stated that a ‘serious flaw’ existed in the decision-making processes at NASA. The launch decision did not take into account or understand the concerns raised by the Thiokol engineers and some Marshall engineers. Further, waiving of launch constraints seemed to occur with no checks at necessary levels of management.
“The Commission is troubled by what appears to be a propensity of management at Marshall to contain potentially serious problems and to attempt to resolve them internally rather than communicate them forward. This tendency is altogether at odds with the need for Marshall to function as part of a system working toward successful flight missions, interfacing and communicating with the other parts of the system that work to the same end.”
Silent Safety Program
During lengthy testimony, NASA’s safety staff was never mentioned. No safety personnel were present at the Mission Management Team meetings or were part of the command structure for launch decisions. Additional reductions in safety staff and safety requirements were also reported. The organizational structure of the Kennedy and Marshall space centers placed safety and quality offices under the supervision of the very organizations they were supposed to check, a conflict of interest. Usually, organizations have safety and quality control departments report directly to senior management, independent from the pressures of individual programs.
Recommendations made by the Rogers Commission were both technical and managerial. The technical recommendations included: (a) redesigning the rocket boosters, (b) upgrading the shuttle tires and brakes, and (c) retrofitting the shuttles to include escape systems. Some of the managerial recommendations were:

• Create a strict risk-reduction program.
• Reorganize (decentralize) so that information is made available to all levels of management.
• Astronauts should be placed in NASA management positions so that their viewpoints are represented.
• All waivers to flight safety should be revoked and forbidden; all contractors need to agree to launch.
• Technical issues should be reviewed by independent government agencies, which report their analysis to NASA.
• The NASA Associate Administrator for Space Flight and the NASA Associate Administrator for Safety should sponsor open reviews, encouraging NASA and contractor management, engineering and safety personnel to discuss concerns.
• Allowance should be made for anonymous reporting of shuttle safety concerns.
In response, NASA implemented the recommendations of the Rogers Commission and adopted a more realistic launch schedule. Over time, however, it is difficult to assess which changes were fully sustained.
Culture in Government Agencies
Throughout the discussions of NASA management, the topic of organizational culture is prevalent. “Every organization has a culture, that is, a persistent, patterned way of thinking about the central tasks of human relationships within an organization. Culture is to an organization what personality is to an individual. Like human culture generally, it is passed on from one generation to the next. It changes slowly if at all” (Wilson, 1989). Adoption of information systems, or changes to processes created by the introduction of information systems, is frequently doomed because the organization’s culture is unwilling to accept change.

NASA was formed to re-establish US dominance in space and science. Drawing from existing organizations and top research scientists, NASA became a high-performance organization (McCurdy, 1993). In an amazingly short period of time, NASA was able to achieve the Apollo missions and become a source of national pride and prestige. The culture at the time, described by Vaughan (1996), was that of engineering. The very word ‘engineer’ implies that something is being created and tested in order that it be engineered again and improved. NASA’s early culture of scientists and engineers relied on testing, research and rigorous methodology to find out what worked in achieving manned space flight. The culture believed strongly that in-house technical ability was necessary, that the people hired by NASA were the best and brightest in the world, and that risk and failure were part of doing business. Failure was tolerated because the view was that you cannot achieve success without innovation and experimentation. McCurdy points out that high-performance cultures tend to be unstable and short-lived. As the organization grew and the amount of work increased, the culture underwent a significant change, becoming more bureaucratic. Instead of keeping all work ‘in-house’, NASA began to use outside contractors to do research and development. The culture became more averse to failure and innovation. Employees began to feel that their failures would not be tolerated. This was reflected in a 1988 NASA culture survey (McCurdy, 1993) showing that employees were dissatisfied with work going to outside contractors and were finding it difficult to get things done in a large bureaucracy. They felt that a loss of technical knowledge was occurring and that failure was no longer tolerated.
Information Systems Development for Government Agencies
Columbia’s final mission, STS-107, was to perform experiments related to physical, life and space sciences. The seven astronauts conducted more than 80 experiments while in orbit. This mission was an extended orbit, lasting 16 days. The crew was noted in the media because it included the first Israeli astronaut, Payload Specialist Ilan Ramon, quickly hailed as a national hero of Israel. See Appendix A-3 for a listing of Columbia’s crew and the STS-107 mission. The mission goals are available in Appendix A-4.

When Columbia launched on January 16, 2003, cameras caught images of insulation foam breaking loose from the fuel tank. Columbia entered its orbit over Earth without incident and conducted its 16-day mission. On February 1, 2003, while returning to Earth, Columbia lost communication with Johnson Space Center. After realizing that a disaster had occurred, Flight Director Leroy Cain ordered the communication center locked down, and the contingency plan was implemented. As the news reports began, people from Texas to California reported seeing the shuttle break up as it entered the atmosphere. Minimal talk of rescue faded as President Bush addressed the nation: “The Columbia’s lost. There are no survivors.” People began finding debris from the shuttle in Texas and Louisiana. NASA issued a warning that the debris might be hazardous and that people should report any findings to NASA and not touch the debris. NASA’s top administrator, Sean O’Keefe, vowed to investigate what went wrong. The Columbia Accident Investigation Board (CAIB) was formed, chaired by retired Navy Admiral Harold Gehman, Jr. Members of the review board are listed in Appendix A-5. The possibility of a terrorist strike was offered and quickly rejected by NASA. As more was discovered about the Columbia disaster, the foam debris that broke loose on take-off was mentioned. However, Shuttle Program Manager Ron Dittemore said this was highly unlikely. A few days later, he retracted these statements.
As the CAIB investigated, some surprising information about NASA and the shuttle program came to light:

• NASA had known about the foam debris-shedding problem for some time. However, since it had never caused a problem before, it became routinely ignored.
• NASA had the opportunity to obtain images of Columbia in orbit on the day of the disaster, but declined, feeling that this was unnecessary.
• Budget restrictions led to demoralizing attitudes on the shuttle safety program. For example, NASA inspectors were required by management to supply their own tools and were restricted from making spot checks.
• Lower level shuttle personnel felt that they could not raise issues of quality and safety without risk of being fired.
The report indicates that technical explanations were not enough to explain the Columbia disaster and is critical of NASA management: “In our view, the NASA organizational culture had as much to do with this accident as the foam. Organizational culture refers to the basic values, norms, beliefs, and practices that characterize the functioning of an institution. At the most basic level, organizational culture defines the assumptions that employees make as they carry out their work. It is a powerful force that can persist through reorganizations and the change of key personnel. It can be a positive or a negative force.” (CAIB Final Report, Part II, p. 1)
A Matter of National Pride
When Sputnik, the first satellite in space, was launched by the Soviet Union in 1957, Americans were stunned. Their Soviet rivals had beaten them into space. NASA was formed a year later and in 1961 rallied the American people to be the first to the moon. This was an unimaginable engineering feat, and in 1969 NASA succeeded when Neil Armstrong became the first person to walk on the moon. NASA’s culture was defined through the spirit of exploration and achievement of the impossible. Those who witnessed the 1969 moon landing clearly remember where they were and the awe inspired by the greatness of science and engineering achievement. Today, the shuttle program has made space flight seem common. Children do not remember a time when this was only a dream. Over time, the emphasis for space exploration has gone from achievement of the impossible to cost savings and streamlined operations. Figure 2 shows the NASA budget as a percentage of federal funding. A dramatic decline is evident in the early ’70s. Figure 3 shows the relatively flat spending over recent years on the shuttle and international space station programs. The CAIB contends that the NASA culture clashes with the shuttle philosophy of cheaper, frequent access to space. The Board goes further to state that the Rogers Commission recommendations made after the Challenger disaster were never fully realized.
DELICATE BALANCE OF RISK, BUDGET, & SCHEDULE IN SPACE EXPLORATION
Figure 2: NASA Budget as a Percentage of the Federal Budget from the CAIB Final Report
Figure 3: Space Shuttle Program Spending in Recent Years (in Millions of Dollars)
Space Station / Space Shuttle

FY 1998: 2501.3
FY 1999: 2304.7
FY 2000: 2323.1
FY 2001: 2087.4
FY 2002: 1721.7
FY 2003: 1492.1
Goldin’s greatest contribution to NASA was possibly the creation of the International Space Station. Sean O’Keefe replaced Goldin in 2001. At this time, the debate was over privatization of NASA. The rationale was that privatizing portions of the shuttle program could save money and might facilitate more commercial uses of the shuttle. Parallel to this push was the effort to complete Node 2 of the International Space Station. The scheduled date for deploying Node 2 in February 2004, viewed widely as unrealistic, was programmed into a screen-saver countdown and sent to all shuttle program managers.

The CAIB made several recommendations, including a redesign of the external tank’s thermal protection system to eliminate debris shedding, and several other technical enhancements. The Board also expressed the need for obtaining better images of the shuttle, on-board and from external sources. The need to perform emergency repairs of the shuttle from space was identified, along with the need to return to the industry standard for foreign object debris, which had been waived for the shuttle. The managerial recommendations were:

• Ease scheduling pressure to be consistent with resources.
• Expand the Mission Management Team to cover crew and vehicle safety contingencies.
• Establish an independent Technical Engineering Authority that oversees technical standards, risk, and waivers, and verifies launch readiness.
• Reorganize the Space Shuttle Integration Office so that integration of all elements of the program is possible.
Chapter 7 of the CAIB report is dedicated to the organizational causes of the Columbia disaster. Normal Accident Theory was used to describe the culture of NASA: in a complex, noisy organization, management actions can raise noise levels to the point where communication becomes ineffective. High Reliability Theory, in contrast, contends that organizations are closed systems in which management is characterized by an emphasis on safety, redundant systems are seen as necessary rather than costly, and the organizational culture is reliability-driven. The problems highlighted in the report were:

• A lack of commitment to a culture of safety: "... reactive, complacent and dominated by unjustified optimism."
• A lack of adequate communication: managers in charge were resistant to new information indicating what they did not want to hear, and the databases in place to support decision making and data dissemination were difficult to use.
• Oversimplification: foam strikes had occurred over 22 years and were viewed as a maintenance issue, not a safety problem.
The similarities between the organizational cultures surrounding the Challenger and Columbia disasters are striking.
PROBLEMS WITH INFORMATION TECHNOLOGY
As with the Challenger disaster, Columbia also suffered communication problems related to information technology. During the Challenger investigation, it became apparent that several memos had been written indicating that Thiokol and NASA engineers considered the O-ring problem dangerous. The memos, however, were considered difficult to understand, emphasizing technical detail over risk. The safety systems in place during the Challenger era were lax, allowing waivers and failing to track waivers and anomalies across flights. Several analyses of the Columbia disaster likewise point to communication through PowerPoint slides and difficult-to-use databases as contributors to the problems with the shuttle safety program.
PowerPoint

Given the technical nature of the Columbia disaster and all the complexity, technical and cultural, surrounding the launch decisions, the CAIB's findings on PowerPoint presentations were quite remarkable:
"As information gets passed up an organizational hierarchy, from people who do analysis to mid-level managers to high-level leadership, key explanations and supporting information is filtered out. In this context, it is easy to understand how a senior manager might read this PowerPoint slide and not realize that it addresses a life-threatening situation."

"At many points during its investigation, the board was surprised to receive similar presentation slides from NASA officials in place of technical reports. The Board views the endemic use of PowerPoint briefing slides instead of technical papers as an illustration of the problematic methods of technical communication at NASA." (CAIB report, Vol. 1, p. 191)

It seems that the overuse of PowerPoint briefings, in place of detailed analysis, made it difficult for meeting attendees to identify the launch risks for Columbia. Edward Tufte, a prominent Yale professor and expert in the visual display of data, analyzed a sample slide in the New York Times (Schwartz, 2003), showing how misleading and vague it was in conveying the risk of the foam strike. In his short book The Cognitive Style of PowerPoint (Tufte, 2003), criticizing PowerPoint and its use in organizations, Tufte gives examples of the communication failures of NASA's presentations. For example, one slide title states, "Review of Test Data Indicates Conservatism for the Tile Penetration." This could be construed to indicate no risk from foam strikes; in fact, the title referred to conservatism in the choice of models used for prediction. Only at the bottom of the slide, in a lower-level bullet, is the crucial information conveyed to the audience: "Flight condition is significantly outside the test database." Tufte goes further, arguing that the low resolution of the slide (presumably intended to condense information) and the use of condensed, non-specific phrasing added to the ambiguity of the communication.
Note: In Visual Explanations (1997), Tufte did an analysis of the Challenger disaster, showing 13 view graphs prepared for management and faxed to NASA. The critique stated the charts were unconvincing and non-explicit in stating the impact of temperature on O-rings.
Information Systems Used to Support Safety
The CAIB findings indicated critical problems with the information systems used to support shuttle safety: "The information systems supporting the shuttle — intended to be tools for decision making — are extremely cumbersome and difficult to use at any level. While tools were in place to support safety decision making, the design and use was difficult, causing them to fall into disuse." Some historical context helps here. The IBM PC was introduced in 1981; its operating system, the Disk Operating System (DOS), was command-line driven and difficult to use. Windows 1.0 was released in 1985 but was viewed as slow and still difficult to use. In 1990, Windows 3.0 was released and graphical user interfaces (GUIs) gained massive popularity. GUIs were easy to use because the user did not have to remember command names or sequences to operate the computer, and the desktop metaphor built intuitively on experience users already had. Home use of PCs was rising rapidly. Against this backdrop, the customized, expensive systems developed for governmental agencies seemed archaic. A GUI-based safety system that was easier to use and interpret, and that did not allow users to bypass safety features, might have led to more informed decisions.
Another system existed that was easier to use, but not required. “The Lessons Learned Information System database is a much simpler system to use, and it can assist with hazard identification and risk assessment. However, personnel familiar with the Lessons Learned Information System indicate that design engineers and mission assurance personnel use it only on an ad hoc basis, thereby limiting its utility.”
Given that a simpler system was available, it is surprising that its use was not organizational policy; user training may have been an issue here as well. The CAIB noted two further problems:

• The simulation tool called Crater (mentioned in the slide discussed above) was inadequate for analyzing the foam-impact data.
• The decentralized manner in which the shuttle program operated could hide unsafe conditions, whereas a centralized way of handling safety issues would foster better communication and insight.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION

Can NASA Change?
Soon after the CAIB report was submitted, Brigadier General (and CAIB member) Duane Deal submitted a supplement to it. Deal felt strongly that not enough had been said about preventing "the next accident": in specific areas, the language of the CAIB report was not strong enough in its criticism of NASA management for letting schedules and budgets take precedence over safety. General Deal expressed strong reservations about NASA's ability to change: "History shows that NASA often ignores strong recommendations; without a culture change, it is overly optimistic to believe NASA will tackle something relegated to an 'observation' when it has a record of ignoring recommendations."
skills. Over a transition period, Parsons continued to learn about the shuttle program from the retiring Ron Dittemore. He now has the daunting task of changing NASA's culture. His first major assignment is to ensure that personnel at all levels feel comfortable voicing concerns about shuttle safety. In a recent quote to the Associated Press, Parsons claims, "None of this is too touchy-feely for me." He even went so far as to hire a colleague whose assignments include critiquing his interactions with subordinates to ensure he is not intimidating. Even so, NASA veterans are reluctant to adopt a more humanistic style of management. Will changes at the top by Parsons be enough to change NASA's culture?
In this case, what is missing is quite informative. Information technology can be distributed or centralized, accessible, and user-friendly, and it can track anomalies, extending human abilities to see trends that might otherwise be missed. Failing to use a tool that can help you is akin to burying one's head in the sand. Yet we see it every day when users press the NEXT or OK button without reading the alert message on their personal computers. Designers have devised clever ways for technology to help us, but it seems to be human nature to ignore that help. Ironically, many military systems are designed specifically with a "man in the loop" to avoid catastrophic errors. After the Challenger disaster, the structure of the shuttle program was changed so that astronauts were managers and present at decision-making meetings. A proper safety system that did not allow the people in the loop to bypass anomalies would have served the Columbia shuttle better. PowerPoint is part of business culture. At NASA, as in many organizations, PowerPoint presentations are interwoven with the structures and communication processes that support decision making. It is often said that "the devil is in the details," and PowerPoint is not a tool made to support a great deal of detail: detailed information on a slide becomes an eye chart, impossible to read. Lengthy, detail-heavy reports can also unintentionally hide information by failing to display it in a way that catches the reader's attention and by overloading the reader with massive amounts of detail. Is reliance on PowerPoint to communicate ideas making NASA incapable of distinguishing the key factors for decision making?
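The point made above about information technology tracking anomalies across flights can be made concrete with a toy sketch. The flight numbers, anomaly categories, and threshold below are purely illustrative and are not drawn from NASA's actual systems:

```python
# Hypothetical flight-anomaly log; a real system would hold far more detail.
anomalies = [
    ("STS-87", "foam shedding"),
    ("STS-101", "foam shedding"),
    ("STS-112", "foam shedding"),
    ("STS-112", "sensor dropout"),
    ("STS-113", "foam shedding"),
]

def recurring(log, threshold=3):
    """Return anomaly types seen on at least `threshold` distinct flights."""
    flights = {}
    for flight, kind in log:
        flights.setdefault(kind, set()).add(flight)
    return {kind for kind, seen in flights.items() if len(seen) >= threshold}

print(recurring(anomalies))  # -> {'foam shedding'}
```

Even a trivial query like this surfaces a trend (a recurring anomaly across flights) that is easy for a human to miss when each flight's reports are reviewed in isolation.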
NASA’s Changing Goals
In January 2004, US President George Bush addressed the public, outlining a new plan for space exploration. In his speech, he gave three goals:

1. Complete the International Space Station: this will be the primary mission of the Space Shuttle, which will be retired in 2010.
2. Develop and test a new spacecraft by 2008: this will be the replacement for the shuttle, transferring astronauts to and from the space station and also carrying them beyond Earth orbit to "other worlds."
3. Return to the moon by 2020, exploring the moon robotically no later than 2008.
President Bush also spoke about NASA's current five-year budget of $86 billion, vowing to reallocate $11 billion within that budget and to increase NASA's budget by $1 billion over the next five years. Bush also expressed his support for, and confidence in, Sean O'Keefe's ability to usher NASA into a new age of space exploration. In light of the small increases in funding, one NASA watcher, Douglas Osheroff, a member of the CAIB and a Stanford University physics professor, was skeptical about Bush's support for the future of the space program in an interview with the Seattle Times: "If you give them a goal and you don't give them resources, I think the situation will get worse." The question remains: how will O'Keefe find the resources to meet Bush's goals and ensure that NASA is an organization with a culture of safety?
In early 2004, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, successfully landed two ruggedized rovers on Mars. The rovers, Spirit and Opportunity, are sending remarkable images of the Martian surface to Earth. A poignant memorial, a plaque with the names of the seven lost Columbia astronauts, has been placed at Spirit's landing site, which has been named the Columbia Memorial Station. Opportunity's landing site will be named for Challenger's final crew.
REFERENCES

Deal, D. (2003, October). Supplement to the report of the CAIB. Retrieved January 29, 2004, from http://www.caib.us/news/report/pdf/vol2/part00a.pdf
Harwood, W. (2003, July 13). Shuttle safety team was hamstrung. Retrieved January 29, 2004, from http://www.cbsnews.com/stories/2003/07/22/tech/printable564420.shtml
McCurdy, H. E. (1993). Inside NASA: High technology and organizational change in the US space program. Baltimore, MD: Johns Hopkins University Press (pp. 159-174).
Mission control transcript of Columbia's final minutes. (n.d.). Retrieved January 29, 2004, from http://datamanos2.com/columbia/transcript.html
NASA budget reports (1998-2003). Summary of the President's FY budget request for NASA. Retrieved January 29, 2004, from http://www.nasa.gov/audience/formedia/features/MP_Budget_Previous.html
NASA official Columbia page. (n.d.). Retrieved January 29, 2004, from http://www.nasa.gov/columbia/home/index.html
Rogers Commission (1986, February 3). Report of the Presidential Commission on the Space Shuttle Challenger Accident. Retrieved January 29, 2004, from http://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/table-of-contents.html
Schwartz, J. (2003, September). The level of discourse continues to slide. The New York Times.
Tufte, E. R. (1997). Visual explanations: Images and quantities, evidence and narrative. Graphics Press, 38-53.
Tufte, E. R. (2003). The cognitive style of PowerPoint. Graphics Press, 7-11.
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture and deviance at NASA. Chicago: University of Chicago Press, pp. 77-119.
Wilson, J. Q. (1989). Bureaucracy: What government agencies do and why they do it. New York: Basic Books.
Appendix A-3: Columbia Mission STS-107 Crew Profiles, from NASA's Columbia site (http://www.nasa.gov/columbia/crew/index.html)

Rick D. Husband, Commander
William C. McCool, Pilot
Michael P. Anderson, Payload Commander
David M. Brown, Mission Specialist 1
Kalpana Chawla, Mission Specialist 2
Laurel Blair Salton Clark, Mission Specialist 4
Ilan Ramon, Payload Specialist 1
Rick Husband, 45, a colonel in the U.S. Air Force, was a test pilot and veteran of one spaceflight. Selected by NASA in December 1994, Husband logged more than 235 hours in space.
William C. McCool, 41, a commander in the U.S. Navy, was a former test pilot. Selected by NASA in April 1996, McCool was making his first spaceflight.
Michael P. Anderson, 43, a lieutenant colonel in the U.S. Air Force, was a former instructor pilot and tactical officer. Anderson logged over 211 hours in space.
David M. Brown, 46, a captain in the U.S. Navy, was a naval aviator and flight surgeon. Selected by NASA in April 1996, Brown was making his first spaceflight.
Kalpana Chawla, 41, was an aerospace engineer and an FAA Certified Flight Instructor. Selected by NASA in December 1994, Chawla logged more than 376 hours in space.
Laurel Clark, 41, was a commander (captain-select) in the U.S. Navy and a naval flight surgeon. Selected by NASA in April 1996, Clark was making her first spaceflight.
Ilan Ramon, 48, a colonel in the Israeli Air Force, was a fighter pilot and the only payload specialist on STS-107. Approved by NASA in 1998, he was making his first spaceflight.
Appendix A-4: STS-107 Mission Overview, from NASA's Columbia site (http://www.nasa.gov/columbia/mission/index.html)

STS-107 Mission Summary
Flight: January 16 - February 1, 2003
Crew: Commander Rick D. Husband (second flight), Pilot William C. McCool (first flight), Payload Commander Michael P. Anderson (second flight), Mission Specialist Kalpana Chawla (second flight), Mission Specialist David M. Brown (first flight), Mission Specialist Laurel B. Clark (first flight), Payload Specialist Ilan Ramon, Israel (first flight)
Payload: First flight of the SPACEHAB Research Double Module; Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR); first Extended Duration Orbiter (EDO) mission since STS-90. This 16-day mission was dedicated to research in the physical, life, and space sciences, conducted in approximately 80 separate experiments comprising hundreds of samples and test points. The seven astronauts worked 24 hours a day in two alternating shifts.
Columbia's first flight: April 12-14, 1981 (crew: John W. Young and Robert Crippen); 28 flights, 1981-2003.
Most recent flight: STS-109, March 1-12, 2002, Hubble Space Telescope servicing mission.
Other notable missions: STS-1 through STS-5, 1981-1982; STS-9, 1983, first flight of the European Space Agency-built Spacelab; STS-50, June 25 - July 9, 1992, first extended-duration Space Shuttle mission; STS-93, July 1999, placement in orbit of the Chandra X-Ray Observatory.
Past mission anomaly: STS-83, April 4-8, 1997, cut short by Shuttle managers due to a problem with fuel cell No. 2, which showed evidence of internal voltage degradation after launch.
Appendix A-5: Columbia Accident Investigation Board Members, from the CAIB Web site (http://www.caib.us/board_members/default.html)

Chairman of the Board: Admiral Hal Gehman, USN

Board Members:
Rear Admiral Stephen Turcotte, Commander, Naval Safety Center
Maj. General John Barry, Director, Plans and Programs, Headquarters Air Force Materiel Command
Maj. General Kenneth W. Hess, Commander, Air Force Safety Center
Dr. James N. Hallock, Chief, Aviation Safety Division, Department of Transportation, Volpe Center
Mr. Steven B. Wallace, Director of Accident Investigation, Federal Aviation Administration
Brig. General Duane Deal, Commander, 21st Space Wing, USAF
Mr. Scott Hubbard, Director, NASA Ames Research Center
Mr. Roger E. Tetrault, Retired Chairman, McDermott International, Inc.
Dr. Sheila Widnall, Professor of Aeronautics and Astronautics and Engineering Systems, MIT
Dr. Douglas D. Osheroff, Professor of Physics and Applied Physics, Stanford University
Dr. Sally Ride, Professor of Space Science, University of California at San Diego
Dr. John Logsdon, Director of the Space Policy Institute, George Washington University

Board Support: Standing support personnel reporting to the Board
Ex-Officio Member: Lt. Col. Michael J. Bloomfield, NASA Chief Astronaut Instructor
Executive Secretary: Mr. Theron Bradley, Jr., NASA Chief Engineer
- Sally Ride was also on the Rogers Commission.
- The review board's makeup was changed twice to ensure proper representation from different constituencies.
New Forms of Collaboration & Information Sharing in Grocery Retailing: The PCSO Pilot at Veropoulos Katerina Pramatari, Athens University of Economics & Business, Greece Georgios I. Doukidis, Athens University of Economics & Business, Greece
The pilot, involving the retailer and the three suppliers and facilitated by the service provider, was initiated around the concept of sharing daily sales data (POS data) and other information between retailer and suppliers over an Internet-based collaboration platform. This concept, referred to as the Process of Collaborative Store Ordering (PCSO), can be considered a new form of supply-chain collaboration in the grocery retail sector (Pramatari et al., 2002). The top management of the four companies committed to this project after being presented with the PCSO concept by the service provider, with the objective of decreasing the level of out-of-shelf in the retailer's stores. On-shelf availability is a critical issue for both manufacturers and retailers today because it improves consumer value, builds consumer loyalty to the brand and shopper loyalty to the store, increases sales and — most importantly — boosts category profitability (Roland Berger, 2002). However, the advances in supply chain management, the initiatives of Efficient Consumer Response (ECR) and category management (Dhar et al., 2001), and the investments in inventory-tracking technology have not — by and large — reduced the overall level of out-of-stocks on store shelves (Gruen et al., 2002), referred to as "out-of-shelf" (OOS). A number of prior studies (Schary and Christopher, 1979; Straughn, 1991) have examined how product unavailability (via a temporary out-of-shelf) influences sales for a given product (SKU). Bell and Fitzsimons (2000) have studied the impact of OOS on category sales, while other studies have analyzed possible consumer reactions to OOS from a marketing and retail management perspective (Campo et al., 2002, 2000; Fitzsimons, 2000; Verbeke et al., 1998). But what are the causes behind the OOS problem? They are classified into the following areas (Gruen et al., 2002; Vuyk, 2003):

a. Retail store shelving and replenishment practices, in which the product is at the store but not on the shelf. This category comprises all causes relating to shelf-space allocation, shelf-replenishment frequencies, store personnel capacity, etc.
b. Retail store ordering and forecasting causes, i.e., the product was not ordered or the ordered quantity was not enough to meet actual consumer demand.
c. Combined upstream causes, referring to the fact that the product was not delivered due to out-of-stock situations or other problems at the retailer's distribution center (for centralized deliveries) or the supplier (for direct-store-deliveries).
The first area covers pure out-of-shelf situations, i.e., situations where the product exists in the store but not on the shelf, whereas the last two are out-of-stock situations. The analysis by Gruen et al. (2002), a compilation of several global studies, shows that 70-75% of out-of-shelf situations are a direct result of retail store practices, with 47% of cases attributed to wrong store ordering and forecasting and 25% to cases where the product was in the store but not on the shelf (Figure 1). In the following, we present the organizations that participated in this case, their incentives for doing so, and their attitudes toward the problem of out-of-shelf.
the VMI/CRP model (Cooke, 1998), a technique in which the supplier has sole responsibility for managing the customer's inventory policy, including the replenishment process. In this way, the two suppliers had greatly streamlined the replenishment process in the retailer's central warehouse, achieving significant logistics efficiencies, but this positive effect had not been carried down to the store level. Because of the large number of products in their catalogues and their continuous search for new areas of efficiency, the two centralized suppliers were concerned about the efficiency of the store replenishment processes and the level of out-of-shelf for their products. They were further excited by the idea of gaining access to the daily POS data, an important piece of information they had not had until then from any other source and whose potential uses they wanted to explore by participating in a pilot with the retailer.
The Service Provider
The service provider was a new company acting as an intermediary between supermarkets and their suppliers, supporting their business transactions and exchange of information via its electronic marketplace. The provider would build and operate the Internet-based platform to support PCSO, based on the concept and requirements to be developed in collaboration with the four companies above. More specifically, PCSO was a new idea and concept upon which the provider wanted to develop and establish its business plan. The company would then offer its services to both the retailer and the three suppliers following the Application Service Provider (ASP) model. Under this model, ASPs offer and manage outsourced application services for many organizations via the Internet, while organizations outsource applications to ASPs to reduce upgrade and maintenance costs and to focus their efforts on core competencies (Soliman et al., 2003).
Figure 3: Relation between retailer SKU, product EAN code and supplier SKU. The figure shows an example in which a single retailer SKU, "Nutella 200gr," is linked through EAN codes to several supplier SKUs: "Nutella 200gr Batch-1," "Nutella 200gr Batch-2," "Nutella 200gr 10% off," and "Nutella 200gr +10gr."
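One way to read the mapping in Figure 3 is as a small data model in which several supplier SKUs, each identified by an EAN code, roll up to a single retailer SKU. The sketch below is purely illustrative: the class names, the retailer code, and the EAN values are invented, and the real platform held this mapping in database tables rather than Python objects:

```python
from dataclasses import dataclass

@dataclass
class SupplierSKU:
    ean: str          # consumer-unit barcode assigned by the manufacturer
    description: str  # batch or promotional variant

# Several supplier SKUs (batches, promotional variants) for one product.
variants = [
    SupplierSKU("5201234000011", "Nutella 200gr Batch-1"),
    SupplierSKU("5201234000028", "Nutella 200gr Batch-2"),
    SupplierSKU("5201234000035", "Nutella 200gr 10% off"),
    SupplierSKU("5201234000042", "Nutella 200gr +10gr"),
]

# The EAN code bridges the two systems: each variant's EAN resolves to
# the retailer's single internal code for "Nutella 200gr".
RETAILER_SKU = "R-1001"  # invented internal code
ean_to_retailer_sku = {v.ean: RETAILER_SKU for v in variants}

print(ean_to_retailer_sku["5201234000035"])  # -> R-1001
```

The design point is that whichever variant a store scans or orders, the shared EAN-to-SKU lookup lets retailer and supplier talk about the same product.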
In regard to the store ordering process, the stores used to follow two different processes for ordering from the central warehouse:

(a) Smaller stores with frequent replenishments were equipped with hand-scanners to support the ordering process. The store personnel responsible for ordering used the hand-scanner to perform a check around the shelves and scan the products to be included in the order. At the end of the process, the order was downloaded from the hand-scanner to the store's computer and sent electronically to the central warehouse.
(b) Larger stores used a printout of the store's product assortment as a guide while they checked the shelves and the store's back-room. Products and quantities to be ordered were marked on paper. At the end, the order was typed into the respective store system and sent electronically to the central warehouse.
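The hand-scanner flow in (a) amounts to accumulating scanned entries into order lines before the order is transmitted. A minimal sketch, with invented EANs and quantities:

```python
# Hypothetical scan log: each entry is (EAN, quantity keyed in on the device).
scans = [("5201234000011", 6), ("5201234000028", 4), ("5201234000011", 2)]

def build_order(scans):
    """Merge repeated scans of the same EAN into a single order line."""
    order = {}
    for ean, qty in scans:
        order[ean] = order.get(ean, 0) + qty
    return order

print(build_order(scans))  # -> {'5201234000011': 8, '5201234000028': 4}
```

The merged order lines would then be downloaded to the store's computer and forwarded electronically, as the process above describes.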
Overall, the objective of piloting PCSO in practice was threefold:

(1) To investigate the feasibility of sharing large amounts of data in a timely manner on a daily basis, and of supporting critical business processes and supplier-retailer collaboration over an Internet-based platform.
(2) To understand the practical implications of this new process for retailers and suppliers, as well as the incentives and barriers for its implementation.
(3) To measure the impact of this new practice on order accuracy and, ultimately, on shelf availability.
Figure 4: Information sharing between retailer and supplier in PCSO. The figure shows that the store manager and the supplier salesman share POS data, the store assortment, store promotion activities and stock information; in addition, the supplier salesman uniquely contributes product marketing activities and market knowledge, while the store manager uniquely contributes store knowledge and the competition's performance in the store.
the information on product sales, promotions, stock, etc. The submitted order is automatically sent to the platform and then forwarded either directly to the supplier or to the retailer’s central warehouse. Automatic order-generation tools are also in place to help both the salesman and the store manager identify the right products that need to be replenished on a daily basis. Figure 4 gives an overview of the information that the store managers and the supplier salesmen share for the products and stores they have in common and the unique information each of them has.
The PCSO Pilot
The project started for the retailer when top management was presented with the PCSO concept and committed to the idea. Supplier A was the first to commit, while Suppliers B and C joined afterwards, finding the idea interesting and being willing to pilot it. Figure 5 gives a schematic representation of the context in which the PCSO concept was piloted. Five representative stores of the retailer were selected to take part in the pilot. A key person in the retailer's organization, the chief buyer, took active responsibility for the project internally.
The implementation for the pilot started in spring 2001 with the definition of the requirements for the Internet-based collaboration platform. Because the project was initiated and managed by the service provider, the objective of the requirements-gathering phase was mainly to understand how to customize an electronic procurement platform to work in the context of grocery retailing, by enhancing it with the data (e.g., daily POS data, store assortment data, etc.) and the functionality (e.g., order suggestions) required to support work processes in this context. Based on the PCSO concept presented above, a demo of the system was initially built and shared with people from the retailer's and the suppliers' organizations. From the retailer's side, the people involved in the requirements meetings came from IT, the central warehouse, the buying department and store management. The actual users from the stores were not actively involved in the requirements-gathering process from the beginning, but only at its later stages. On the suppliers' side, interdepartmental project teams participated in the requirements meetings, with the salesmen taking an active part. The implementation and testing of the system took place during the summer and was completed in September 2001. The platform was based on Microsoft technologies, utilizing SQL Server 2000 for the database and the data-loading processes through Data Transformation Services (DTS), and Active Server Pages (ASP) for the Web front end. Because of the intense information exchange between the Internet platform and the retailer's information system (e.g., a daily POS data file for all the stores is more than 10 megabytes), back-end integration was necessary to allow for the automatic exchange and import/export of data files.
The same applied to the relationship between the Internet platform and the suppliers' information systems for the exchange of the product catalogue, as well as orders and dispatch advice for direct-store-delivery. In summary, the way the system worked during the pilot phase was as follows:

• The system ran centrally on the platform of the service provider.
• The platform was back-end integrated with the retailer's systems via the exchange of text files over ftp. This communication channel was used to send the orders from the platform to the retailer's central warehouse system and to receive the following data from the retailer's central information system:
  • Daily POS sales data from the stores
  • The product assortment of each store
  • The mapping between the product EAN code (consumer-unit barcode) and the retailer's internal product code (retailer SKU)
  • The promotion activities running in the stores
  At this stage, no information regarding the stock of the products in the stores was available.
• The platform received the product catalogue from the three suppliers' systems either in the form of an EDI message (for Suppliers B and C) or an ASCII file (for Supplier A). Additional information regarding the products not contained in the EDI message, such as the association of promotional products to their non-promotional counterparts (i.e., to mother product codes), was maintained by the suppliers over the Web.
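As a rough illustration of the kind of flat-file exchange described above, the snippet below parses a hypothetical daily POS extract. The semicolon-delimited layout, the field names, and the values are assumptions for the sketch, not the retailer's actual file format:

```python
import csv
import io

# Hypothetical layout of a daily POS extract: one line per store/EAN/day.
SAMPLE = """store_id;ean;date;units_sold
S01;5201234000011;2001-10-01;12
S01;5201234000028;2001-10-01;7
S02;5201234000011;2001-10-01;3
"""

def load_pos_file(text):
    """Parse a semicolon-delimited POS extract into a list of dicts."""
    reader = csv.DictReader(io.StringIO(text), delimiter=";")
    rows = []
    for row in reader:
        row["units_sold"] = int(row["units_sold"])
        rows.append(row)
    return rows

rows = load_pos_file(SAMPLE)

# Aggregate units sold per EAN across stores, as a salesman's view might.
totals = {}
for r in rows:
    totals[r["ean"]] = totals.get(r["ean"], 0) + r["units_sold"]

print(totals)  # -> {'5201234000011': 15, '5201234000028': 7}
```

A nightly job of this shape, importing each day's file and aggregating it per store and product, is consistent with the overnight update cycle the case describes.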
• The platform sent order files in ASCII format to Supplier A over ftp, and these were automatically imported into the ERP system.
• The update of the sales data, as well as other information, took place during the night, so in the morning a user could see a store's sales up to the night before.
• The product EAN code (i.e., the unique number identifying the consumer unit of each product, assigned by the product manufacturer) was used for data alignment; that is, the EAN code was the bridge between the retailer's internal information system and the suppliers' information systems.
Before the start of the pilot, considerable effort had to be invested in ensuring that the right products were loaded and shown to each user. This was quite a difficult task, as the data from the retailer's and the suppliers' information systems had to be aligned. For example, there were centralized products that were no longer active in the supplier's current product catalogue but still existed in the stores and the retailer's central warehouse; these products had to appear in the store's assortment so that they could be ordered. Other products were maintained with the wrong EAN code or supplier code in the retailer's system and thus could not be matched with the supplier's product catalogue. All the products therefore had to be checked one by one to ensure that the information presented to the users was complete and accurate. This was much more difficult to ensure on an ongoing basis, so the automatic data-loading procedures had to be enhanced with several data-validation rules. The pilot went live on October 1, 2001 and ran in the five pilot stores for six weeks, from the first week of October to the second week of November 2001. At this stage, the five pilot stores used the Internet-based collaboration platform for their collaboration with the direct-delivery Supplier A and for ordering the products of the two centralized suppliers, Suppliers B and C, from the retailer's central warehouse. For the rest of the centralized products they followed the traditional process. During the first two weeks, several data-validation issues, such as products that were missing from the platform and thus could not be ordered when they should have been, frustrated the users, especially since this problem was leading to out-of-shelf situations. After the third week, however, the data-loading processes had become more robust and these issues had been minimized. In the following section we discuss the results from the pilot phase and the issues that remained thereafter.
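A data-validation rule of the kind mentioned above can be sketched as a simple cross-check between the retailer's product records and a supplier's catalogue. Everything here is invented for illustration (record layouts, codes, and the single rule shown); the pilot's actual rules were embedded in the data-loading procedures:

```python
# Hypothetical records: the real systems exchanged far richer data.
retailer_products = [
    {"retailer_sku": "R-1001", "ean": "5201234000011", "supplier": "A"},
    {"retailer_sku": "R-1002", "ean": "5201234000099", "supplier": "A"},  # bad EAN
    {"retailer_sku": "R-1003", "ean": "5201234000028", "supplier": "B"},
]
supplier_catalogue = {
    "A": {"5201234000011"},
    "B": {"5201234000028"},
}

def validate(products, catalogue):
    """Flag retailer products whose EAN has no match in the supplier catalogue."""
    issues = []
    for p in products:
        known = catalogue.get(p["supplier"], set())
        if p["ean"] not in known:
            issues.append((p["retailer_sku"], "EAN not in supplier catalogue"))
    return issues

print(validate(retailer_products, supplier_catalogue))
# -> [('R-1002', 'EAN not in supplier catalogue')]
```

Running such checks on every nightly load, rather than once before go-live, is what turned the one-off manual cleanup into a sustainable process.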
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
The main challenges concerned the usability and expandability of the solution. As the users' feedback indicated, while they could see the value in exploiting the daily POS data and collaborating to support the store ordering process, they considered that there was still major room for improvement in the use of the system. Their main concern related to the long time it took them to complete the ordering process, for the following reasons:
• As the system was centralized and accessed through a dial-up Internet connection, users experienced several problems and delays connecting to the Internet, due either to the Internet Service Provider (ISP) or to the telecommunications infrastructure in the store or the area.
• Once connected, they also experienced delays in the system response. As the system was not running locally but was accessed over the Internet, these delays were mainly associated with the transfer rate of the Internet connection. In many cases the modems used were very slow (less than 19 Kbps) and had to be upgraded to 52 Kbps.
• Another delay in the system response, as perceived by the users, was that the products were not presented all at once but in pages. Combined with the low transfer rate, this meant that a user spent a lot of unproductive time just waiting for pages to download.
• As stock information was not available in the system, users had to review the products on the shelf and in the store's back-room in order to check the available stock. This caused further delays in the process, while for the suppliers' salesmen it meant that they still had to visit the store in order to place a suggested order.
• During the pilot phase, matching the stock-count to the information on the screen was a cumbersome and time-consuming task, because the order of the products on the printout did not match the order in which they appeared on the screen. In addition, the system was not optimized at that time for printing, and the attempt
Figure 6: Comparing the OOS Rate for Centralized Products Before & at the End of the Pilot [chart: out-of-shelf rate for centralized products]
New Forms of Collaboration & Information Sharing in Grocery Retailing
to print a list of products could take quite long, especially as old-technology printers were used. All these issues resulted in time delays, both in the use of the system and in the new ordering process, which varied from store to store depending on the size of the store. The smaller stores, which had fewer products in their assortment and did not have to check the stock, experienced smaller delays than the larger stores. Nevertheless, all the store people involved were frustrated by these time delays, quite apart from their insecurity that products might be missing from the system. This insecurity turned into anger in the two cases where the back-end integration between the Internet platform and the retailer's information system did not work properly, resulting in the orders not being sent at all. Apart from these usability and technical issues, the retailer had to face one other major concern regarding the expandability of the PCSO concept. During the pilot, the people in the store had to deal with four different ordering processes:
1. the traditional one for direct-store-delivery products, in which they had to review and confirm the orders prepared by the supplier salesmen;
2. the traditional one for centralized products, in which they had to prepare the order for all centralized products (from 4,000 to 6,000 products to be reviewed each time) and send it to the retailer's central warehouse;
3. the PCSO process for the direct-store-delivery Supplier A, in which they had to review the salesman's order proposal in the computer and confirm it there; and
4. the PCSO process for the centralized products of Suppliers B and C, in which they had to prepare the order themselves, using the information available on the Internet platform and taking into account suggestions by the salesmen, if any.
Furthermore, the platform had not been designed from the beginning to deal with so many products, stores, and the respective POS and assortment data. With the support of the retailer's top management, solutions for overcoming the above problems and reaping the potential business benefits were sought in the following directions:
• The loading of all the centralized products onto the Internet platform, to fully cover the internal store ordering process to the central warehouse. This also required the incorporation of an order-proposal mechanism to make the process usable for store users, given the large number of products.
• The development of several data-validation rules, initially to clean up the data and then to set up robust data-loading procedures that perform several checks to ensure that the information is updated correctly, even in cases of failure.
• The redesign of the platform and an upgrade of its user interface to deal with the new requirements.
• The development of robust mechanisms ensuring the reliable transmission of information between the retailer's and suppliers' information systems and the collaboration platform, with notification in case of failure (e.g., by sending an e-mail or mobile SMS message).
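The data-validation and failure-notification directions above can be illustrated with a minimal sketch. The rules, field names, and the notification stub are assumptions for illustration only; the actual platform's checks were more extensive, and real notification would go out by e-mail or SMS.

```python
# Minimal sketch of data-validation rules applied during a data load, with a
# notification hook invoked on failures. All names here are hypothetical.

def validate_product(record, supplier_catalogue_eans):
    """Return a list of problems found in one product record."""
    problems = []
    if not record.get("ean"):
        problems.append("missing EAN code")
    elif record["ean"] not in supplier_catalogue_eans:
        problems.append("EAN not in supplier catalogue")
    if not record.get("description"):
        problems.append("missing product description")
    return problems

def load_products(records, supplier_catalogue_eans, notify):
    """Load only clean records; call notify(message) if any record fails."""
    loaded, failures = [], []
    for record in records:
        problems = validate_product(record, supplier_catalogue_eans)
        if problems:
            failures.append((record.get("ean", ""), problems))
        else:
            loaded.append(record)
    if failures:
        notify(f"Data load: {len(failures)} record(s) rejected: {failures}")
    return loaded

alerts = []  # stand-in for an e-mail/SMS gateway
records = [{"ean": "520111", "description": "Milk 1L"},
           {"ean": "", "description": "Unknown item"}]
loaded = load_products(records, {"520111"}, alerts.append)
```

The key design point is that bad records are rejected rather than silently loaded, and someone is told about it, which is what turned the fragile initial loads into a robust ongoing procedure.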
Many more technical barriers had to be overcome in the effort to make the system efficient and effective in use. Today, Veropoulos uses the platform in all 200 stores of the chain to support internal store ordering to the central warehouse, while suppliers are gradually becoming involved in the process of collaborative store ordering.
APPENDIX

Order Process: Order to the central warehouse, supported by hand-scanners
Pros:
§ Fast checking of the shelves
§ Fast order send-out
§ Minimizing errors due to wrong code typing
Cons:
§ Easy to omit a product from the order if it is not found on the shelf
§ Difficult to follow up on new items in the assortment
§ A physical check of the shelves alone does not give the right image of consumer demand

Order Process: Order to the central warehouse, supported by a print-out of the assortment
Pros:
§ Easy to follow up on the full product assortment and not omit products from the order
Cons:
§ Requires more time to check through the shelves
§ Requires more time to type in and send the order
§ Higher possibility of errors due to code-typing mistakes
§ A physical check of the shelves alone does not give the right image of consumer demand

Order Process: Order for direct-store-delivery products; the supplier salesman acts as the order-taker
Pros:
§ The salesman knows the supplier's products, marketing activities, etc. well
§ The salesman focuses on a limited number of products, so he can pay more attention to them
§ He usually keeps track of previous product deliveries, so he gets an indication of consumer demand
Cons:
§ The salesman does not have good knowledge of the store assortment and store promotion activities, especially for competing products
§ His motives are often achieving a sales bonus rather than reducing out-of-stocks and optimizing inventory levels
The Charlotte-Mecklenburg Police Department (CMPD) is the principal local law enforcement entity for the city of Charlotte, NC, and surrounding Mecklenburg County. CMPD serves a population of nearly 700,000 citizens in an area covering more than 550 square miles, and employs nearly 2,000 people, including sworn police officers and civilian support staff. Civilian personnel are assigned to a variety of clerical and administrative support functions related to but not directly involved in the practice of law enforcement activities. CMPD is headquartered in a state-of-the-art building in the downtown area of the city. This facility was designed and constructed to support the computing and data communications needs of CMPD. CMPD is commanded by the Chief of Police with the aid of the Deputy Chief of Administrative Services, Deputy Chief of Support Services, Deputy Chief of Field Services and Deputy Chief of Investigative Services. There are many units in CMPD. Figure 2 in Appendix A contains a full organizational chart for CMPD. Technology Services, a division of Administrative Services, manages existing information systems and is responsible for the design and implementation of new IT applications. In addition, they manage strategic planning and crime analysis and provide training for all police department personnel. The operating budget for CMPD in FY2005 is approximately $146 million. Administrative Services, which includes but is not limited to Technology Services, accounted for approximately 20% of the overall budget. CMPD’s operating budget over the 3 most recent fiscal years is shown in Table 1. CMPD prides itself on being a community-oriented law enforcement agency whose mission is “to build problem-solving partnerships with our citizens to prevent the next crime” (FY2004 & FY2005 Strategic Plan, p. 57). 
As stated in the 2004-2005 strategic plan, “the Police Department’s problem solving efforts are predicated on the availability of accurate and timely information for use by employees and citizens” (FY2004 & FY2005 Strategic Plan, p. 57). Since 1995, CMPD has recognized that IT will be one of the most important crime fighting tools of the 21st century and has emphasized the commitment to making information one of its most important problem-solving tools. The strategic plan recognizes that IT will play an integral role in achieving the strategic goal of “making Charlotte the safest large city in America” (FY2004 & FY2005 Strategic Plan, p. 31).
Table 1: CMPD Budget Summary (budget categories: Field Operations, Investigative Services, Special Services, Administrative Services, Total Police Services; dollar figures not reproduced)
Before KBCOPS, paper case files would have to be pulled from the archives by the Records Department and the cases analyzed manually by the detective. Information needed for crime analysis, which identifies patterns that might lead to the prevention of the next crime, was not readily accessible across units. Although information technology supported the collection of data needed in daily law enforcement activities prior to the rollout of KBCOPS, it did not meet the needs of the department with respect to sharing, assimilating, and reviewing these data. It also fell short of fulfilling the Chief's vision of IT-enhanced law enforcement. Efforts to create KBCOPS began in 1996. The development and implementation of this new system is the subject of the case described in the following section.
When a police officer responds to an incident in the field, an incident report is filed. The first portion of KBCOPS implemented at CMPD, the Incident Reporting Subsystem, supports the electronic capture, storage, and retrieval of these reports. Functionality has since been added to support case management activities, arrests, investigative activities, and crime analysis. The following sections describe the features of the system in more detail, as well as the required infrastructure, the process used to develop the application, and user perceptions of the system.
In addition to search capabilities, several other new features have recently been implemented. For example, the data captured in KBCOPS can be rolled up into the format required for the National Incident-Based Reporting System (NIBRS). NIBRS (NIBRS Implementation Program, 2002) is a nationwide tracking system used to solve crimes that occur across individual police department jurisdictions and across state lines. Although many local police departments have records management systems to capture data about crime incidents, they are unable to use those systems to report to NIBRS because the data are in an incompatible format, are not coded in a NIBRS-compliant manner, or do not include all of the mandatory NIBRS elements. A feature that will provide a direct interface to NIBRS is currently under development. Additional enhancements are being planned. One of these will integrate KBCOPS directly with other local, state, and federal law enforcement systems, as well as hospitals, pawnshops, utility companies, and other entities that possess potentially vital information. Additionally, GIS and global positioning system components will be integrated into KBCOPS to provide street file overlays on the officer's laptop. Finally, a Juvenile Arrest subsystem will be added in the near future. Handling crimes involving juveniles is complex because statutes and policies for dealing with juvenile offenders and victims differ significantly from those for adults. For example, fingerprints are not taken from juveniles for positive identification, making it nearly impossible to link crimes involving the same juvenile offender. The Juvenile Arrest module is scheduled for rollout in March 2004. Table 2 summarizes the currently implemented and planned components of KBCOPS.
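The NIBRS roll-up described above hinges on checking that every mandatory element is present before monthly submission, which is what the system's error reports do. The following sketch shows the error-report idea only; the element names are simplified placeholders and do not reflect the actual NIBRS specification.

```python
# Sketch of a pre-submission completeness check for a NIBRS-style roll-up.
# MANDATORY_ELEMENTS is a simplified placeholder set, not the real NIBRS list.

MANDATORY_ELEMENTS = {"incident_number", "incident_date", "offense_code"}

def nibrs_error_report(incidents):
    """Map each non-compliant incident to its missing mandatory elements."""
    errors = {}
    for record in incidents:
        present = {key for key, value in record.items() if value}
        missing = MANDATORY_ELEMENTS - present
        if missing:
            errors[record.get("incident_number", "?")] = sorted(missing)
    return errors

incidents = [
    {"incident_number": "2003-0001", "incident_date": "2003-11-01",
     "offense_code": "23H"},
    {"incident_number": "2003-0002", "incident_date": "", "offense_code": "13A"},
]
errors = nibrs_error_report(incidents)  # flags the record with no date
```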
Information Technology in the Practice of Law Enforcement 293
Table 2: Components of KBCOPS

Incident Reporting System
Key Functionality: Create Incident Reports; Approve Incident Reports; Assign Case to Investigative Unit; View/Track Status; Add Supplement
Key Features: Context-sensitive intelligence; Checks for errors/inconsistencies; Rule-based assignment algorithms; Status screens; Automated version control

Case Management System
Key Functionality: View Case Summary; Assign Investigator(s) to Case; Re-Route Case to Another Unit; Add Supplement
Key Features: Complete history of all versions; Automated alerts for new data; Supplements can be required; Notification of past due supplements

Search Capabilities
Key Functionality: Search by Type/Date of Crime; Search by Patrol Division; Search by Method of Operations; Search by Suspect/Multiple Suspects; Search by Weapon/Vehicle

NIBRS Reporting
Key Functionality: Roll up crime data for federal reporting
Key Features: Collects/edits information for NIBRS; Produces error reports; Formats monthly data for submission

Juvenile Arrest System*

Planned Enhancements: Interfaces with other law enforcement entities; Interfaces with hospitals, utility companies, pawnshops, and so forth; Interface with GIS components to provide street overlays
* Scheduled to roll out in March 2004

The wireless infrastructure transmits “packets of digital data over unused cellular voice channels at a rate of 19.2 Kbps” (Tomsho et al., 2003, p. 599). Although the cellular towers are shared with cellular phone service providers, the frequencies over which CMPD transmits data do not compete with those used by cellular phone customers.
Table 3: Systems Development Life Cycle Phases

Planning: Assess project feasibility; establish budget, schedule, and project team
Analysis: Study the business needs; determine system requirements; develop system models
Design: Create a detailed system specification (interface, database, program, and network designs)
Implementation: Build the system, test the system, and place it into operation
Support: Maintain and enhance the system through its useful life
Information Technology in the Practice of Law Enforcement 295
Table 4: Before & After Comparison of Processing of a Typical Case

Incident reported
Before KBCOPS: Officer dispatched to scene
After KBCOPS: Officer dispatched to scene

Preliminary investigation
Before KBCOPS: Officer interviews witnesses and records information in paper notebook
After KBCOPS: Officer interviews witnesses and records information in paper notebook

Incident report
Before KBCOPS: Officer files a paper report after returning to headquarters
After KBCOPS: Officer files a report on-line while in the patrol car

Approval of report by supervisor
Before KBCOPS: Officer submits paper copy of completed report to supervisor; report may be returned due to errors; report revised and resubmitted (possibly to a different supervisor); supervisor may not be aware of previous supervisor's comments
After KBCOPS: Officer submits the report wirelessly; the system alerts the supervisor of a new report; report may be rejected due to errors; each supervisor's comments are saved by the system as part of the report

Report to Records Department
Before KBCOPS: Paper report sent to Records Department to be entered into database and archived
After KBCOPS: Report does not go to Records Department but is automatically stored in the database

Assign to investigative unit
Before KBCOPS: Records Dept. sends a paper copy of the report to the investigative unit; supervisor of investigative unit assigns it to a detective; often takes 4-5 days from reporting of an incident to assignment of a detective
After KBCOPS: System alerts investigative unit to the report; supervisor assigns a detective to the case electronically; often takes 24 hours or less from reporting of an incident to assignment of a detective

Investigation of case
Before KBCOPS: Detective updates paper case file; only those with access to paper file see updates; cases with similar characteristics pulled and analyzed manually
After KBCOPS: Detective updates case electronically; all versions maintained; system alerts officers involved to updates; cases with similar characteristics analyzed using search capabilities
Interviews were conducted with patrol officers and detectives. The interview questions were drawn from the technology acceptance model (Davis, 1989) and the information systems implementation literature (Burns, Turnipseed & Riggs, 1991). The interview protocol can be found in Appendix C. Example comments from each group are provided below.
Timeline of Events (dates omitted)
• CMPD established after merger of city and county police departments
• CMPD created an IS master plan
• Efforts to create KBCOPS began
• New director of IT and experienced technical project manager hired
• Chief of Police who initiated the project retired
• Initial coding for incident reporting subsystem completed
• System validation testing on incident reporting subsystem
• Incident reporting subsystem goes live
• Case management subsystem goes live
• Compression software installed
• Search capabilities added to the system
• New director of IT hired
• Juvenile arrest module projected to go live
“Newer officers do not seem to have a problem with the system. Older officers still have some resistance.” “Investigation has improved. It used to take 4 or 5 days to assign a case to an investigator. Now it takes less than 24 hours. Also, being able to do searches is a big timesaver. We can identify patterns and trends. Our case clearance rate has improved greatly.” “There is a big learning curve. Officers try to take shortcuts to get through the system. The reason the officers take so many shortcuts is there are so many screens to go through. Narratives aren’t being done as well as they were before. Quality of data is still one of the biggest problems.”
Patrol Officers’ Comments: “The availability of information is a big plus. The ability to do searches transformed the system from one that seemed worthless to one you can use. Once you see how the information you enter is used, you understand why they need it. Seeing the big picture really makes a difference.” “We were trained on how to use the system, but we didn’t understand why we had to use it or how it would alter the investigation process.” “The time it took to enter all that data seemed futile before. Now I use the search capabilities every day.” “Entering information one screen at a time is a big problem. You can’t see the big picture. Some screens ask for information you did not know you had to collect.”
“Spellchecking takes too long. You can’t do intermediate saves in KBCOPS. If the system goes down while entering information, you lose the whole screen. I use Word so that I can undo, use the spellchecker and do intermediate saves.”
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
Despite the success of the project, CMPD faces ongoing challenges with respect to the KBCOPS application. These challenges stem from technology, user, and budgetary issues, as well as from aligning IT with community policing objectives.
At the time this case was written, bandwidth in the wireless environment was limited to 19.2 Kbps, with an effective bandwidth of about 10 Kbps. Compression software was installed to improve effective throughput, reducing delays by 60%. However, officers continue to experience delays in uploading and downloading forms. The system manages approximately one million inbound mobile requests per month and supports 200-250 simultaneous users. The system has thus far proven to be highly reliable, experiencing fewer problems than the internal LAN within CMPD. However, reliability could become an issue in the future as new modules are added and the number of simultaneous users increases. Although there have been no security breaches to date, the security of the wireless implementation must be evaluated continuously. Initially, security issues required almost two years to resolve. The current solution includes user authentication with two levels of encryption. User authentication is the process of identifying authorized users (Newman, 2003); the most common method is the use of user names and passwords. Encryption prevents a person with unauthorized access to data from reading them (Newman, 2003). Two independent vendors ensure an end-to-end secure connection: the commercial wireless provider encrypts data across its channels, and an additional layer of proprietary encryption and compression is performed by a leading software-based security system running on CMPD servers. Maintaining security across the network will be an ongoing challenge for CMPD as new encryption standards and better authentication techniques become available. As with any IT application, the need to manage and integrate emerging technologies is an ongoing challenge. Although there has been relatively little need for maintenance or replacement of equipment so far, this will become a necessity in the future.
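The bandwidth figures above imply noticeable per-form delays, which a back-of-the-envelope calculation makes concrete. The form size is an illustrative assumption; only the roughly 10 Kbps effective rate and the 60% delay reduction come from the case.

```python
# Rough transfer-time estimate for the wireless link: ~10 Kbps effective
# bandwidth, with compression reducing delays by about 60%.

def transfer_seconds(size_kbytes, effective_kbps):
    """Seconds to move size_kbytes over a link running at effective_kbps."""
    return size_kbytes * 8 / effective_kbps  # 8 bits per byte

form_kbytes = 50  # hypothetical size of an incident-report form
uncompressed = transfer_seconds(form_kbytes, 10)   # 40 seconds
with_compression = uncompressed * (1 - 0.60)       # ~16 seconds after compression
```

At these rates even a modest form takes tens of seconds to move, which is consistent with the officers' continued complaints about upload and download delays.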
Patrol officers must spend a significant amount of time entering information that populates the incident database, a database that is subsequently used primarily by detectives. Although the implementation of search features has helped, some patrol officers still question the value added by the system. Additionally, the software has some shortcomings that frustrate users. Specific issues include the delay time for submitting forms, the inability for officers to save a form before it is submitted, and the lack of support for spellchecking. The last two issues are particularly problematic for forms that require narratives. As a temporary solution, many officers enter their narratives in Word so that they can save their work intermittently and use the spelling and grammar features; they then copy the narrative into the required form. Although this workaround accomplishes the task, it takes extra time and leads to frustration. Another challenge arises as officers become familiar with the system and take “shortcuts” to avoid filling in extra forms. Entering certain information in one form may generate many additional forms to fill in. Additionally, officers sometimes fill in only the required fields in a form and leave non-required fields blank. Consequently, the information stored is sometimes incomplete and inaccurate, compromising the quality of the data and the resulting investigation. The shortcuts and incomplete forms also lead to friction between the officers who enter the information and the detectives who depend on it. Training is one of the most important parts of any change management initiative and one of the most commonly overlooked (Dennis & Wixom, 2002). Training and a willingness to combine knowledge and skill sets across functional lines are critical success factors for the implementation of large systems (Bingi, Sharma, & Godla, 1999; Gauvin, 1998).
Research suggests that training improves the likelihood of user satisfaction, increases organizational effectiveness, and improves employee morale (Barney, 1991; Peters & Waterman, 1982; Rosenberg, 1990; Ulrich, 1991). Although CMPD trains officers on the use of KBCOPS, training focuses on how to use the system rather than why it should be used and how it fits into the bigger picture of crime investigation.
Figure 1: Index of Crime Rates per 100,000 of Population [chart: annual rates, 1996 onward]
Aligning IT with Community Policing Objectives
Through the development and implementation of KBCOPS, CMPD has migrated from using IT in a reactive manner to employing IT in an active role for sharing knowledge, facilitating collaboration, and promoting a problem-solving analytical framework. The ultimate goal of KBCOPS is to improve the quality of policing. Although a causal relationship cannot be shown, crime rates decreased steadily between 1996 and 2002, as shown in Figure 1. CMPD recognizes that it will be difficult to continue to reduce crime. Police will have to expand the number and scope of partnerships to solve new problems. CMPD must identify new ways in which KBCOPS and IT in general can support strategic initiatives and continue to improve the quality of policing.
Jiang, J., Muhanna, W., & Klein, G. (2000). User resistance and strategies for promoting acceptance across system types. Information & Management, 37(1), 25.
McConnell, S. (1996). Rapid Development. Redmond, WA: Microsoft Press.
National Incident-Based Reporting System (NIBRS) Implementation Program. (2002, December 26). Retrieved November 25, 2003: http://www.ojp.usdoj.gov/bjs/nibrs.htm
Newman, R.C. (2003). Enterprise Security. Upper Saddle River, NJ: Prentice Hall.
Peters, T.J., & Waterman, R. (1982). In Search of Excellence: Lessons from America's Best-run Companies. New York: Harper & Row.
Rosenberg, M.J. (1990). Performance technology: Working the system. Training and Development, 44(1), 43-48.
Smith, D. (1996). Taking Charge of Change. Reading, MA: Addison-Wesley.
Tomsho, G., Tittel, E., & Johnson, D. (2003). Guide to Networking Essentials (3rd ed.). Boston, MA: Course Technology.
Ulrich, D. (1991). Using human resources for competitive advantage. In I. Kilmann (Ed.), Making Organizations Competitive. San Francisco, CA: Jossey-Bass.
Varshney, U., Vetter, R., & Kalakota, R. (2000). Mobile commerce: A new frontier. Computer, 33(10), 32-39.
Figure 9: Suspect MO Search — Illustrates a Search for a Black Male, age 30-40, with Dreadlocks & Gold Teeth who Committed a Robbery
APPENDIX C Multiple visits were made to CMPD to interview project participants. The first round of interviews was conducted in February 2001 during initial system rollout. The second round was conducted in November 2003. Participants in each round were purposively chosen to span diverse areas of functional and technical expertise. Questions in the first round were directed primarily to developers. Questions were based on the Varshney, Vetter and Kalakota (2000) mobile commerce framework and focused on identifying and understanding: (1) development methodologies, (2) infrastructure, (3) interface of mobile and land–based technologies, and (4) functionality of the application. Questions in the second round focused on understanding implementation issues and user acceptance of the system. The following questions guided the second round of interviews:
1. At the time of our last visit, the Incident Reporting System was being rolled out. What other modules are now in place? What kind of rollout approach have you used?
2. What organizational difficulties have you encountered in implementing new modules?
3. In general, what is the level of acceptance of the system? What are the “before” and “after” views of the users (police officers)?
4. To what extent have you integrated KBCOPS with external systems (hospitals, emergency services, federal and state law enforcement agencies, etc.)?
5. What technical difficulties have you encountered as the system has grown?
6. How do you train officers to use the system? How do you support users in the field?
7. In what ways has the quality of policing improved since the implementation of KBCOPS?
8. Are other police departments following your lead and adopting similar systems?
To improve reliability, all interviews were conducted with two researchers present, each taking notes independently. These notes were later compared and synthesized to arrive at a clear and consistent interpretation of the verbal data.
Lessons from a Case Study of IT Usage in an Engineering Organization Murray E. Jennex, San Diego State University, USA
How much end-user computing is too much? Should end users develop systems? This case examines end-user computing within the engineering organizations of an electric utility undergoing deregulation. The case was initiated when management perceived that too much engineering time was being spent on IS functions. The study found that significant effort was being expended on system development, support, and ad hoc use. Reviews of a few key systems illustrate the quality problems found in the end-user-developed systems. Several issues affecting system development were identified, including the use of programming standards, documentation, infrastructure integration, and system support. Additionally, the issues of obsolescence, security, and procurement are discussed.
This case looks at end-user computing (EUC) in an engineering organization. End users are non-IS professionals who use computers, and EUC comprises the computer activities end users perform (Edberg & Bowman, 1996). Alavi and Weiss (1986) describe EUC as a rapidly growing and irreversible trend. But how much EUC should organizations allow, what kinds of activities should end users do, and how should organizations manage EUC? The subject engineering organization is part of a large, United States-based, investor-owned utility. The utility is over 100 years old, has a service area of over 50,000 square miles, provides electricity to over 11 million people via 4.3 million residential and business accounts, and had operating revenues of approximately $8.7 billion in 2002. Utility net revenue has fluctuated wildly in the last few years, with a $2.1 billion loss in 2000 and $2.4 billion in earnings in 2001 (primarily due to one-time benefits from restructuring and other initiatives), decreasing to $1.2 billion in earnings in 2002. To service its customers, the utility operates a transmission and distribution system and several large electrical generation plants and is organized into three main line divisions: Transmission and Distribution; Power Generation; and Customer Service. Divisions such as Human Resources, Security, and Information Technology (IT) support the line divisions. The utility has approximately 12,500 employees. The power generation division is organized into operating units dedicated to supporting specific power generation sites. Each operating unit has line organizations such as Operations, Maintenance, Engineering, and Chemistry/Health Physics. Power generation operating units are supported by dedicated units from the corporate support divisions (security, human resources, IT).
The engineering organization used for this case study is part of the nuclear operating unit of the power generation division and is located at the largest electrical generation site operated by the utility. IT support is provided to this operating unit by Nuclear Information Systems (NIS), which administratively is part of the corporate IT division and which operationally reports to both corporate IT and the nuclear unit of the power generation division. NIS supported engineering through its Engineering Support Systems group, which consisted of a supervisor, two project manager/analysts, and two developers and was tasked with maintaining the 11 systems under NIS control. New systems, and enhancements to existing systems, were undertaken at the instigation of engineering. Engineering paid the costs associated with these projects through a charge-back process, and developers were hired as needed to support the work. At the time of the study, the engineering organization consisted of approximately 460 engineers dispersed among several different engineering groups reporting to the Station Technical, Nuclear Design Organization, Nuclear Oversight, and Procurement management structures. Industry restructuring was causing large drops in revenue, which was driving the nuclear unit to reorganize engineering into a single organization of 330 engineers under the management of the Nuclear Design Organization.
The assessment found a significant but poorly managed investment in IT in terms of money, time, and expertise. With respect to the management of IT, NIS was tasked with managing the infrastructure, networks, and enterprise-level systems, which provided an overall organizational perspective and strategy for managing those assets. Engineering IT, in contrast, was managed at the division level and was found to lack an overall engineering strategy for the use, adaptation, and implementation of IT. Additionally, IT was unevenly applied throughout the engineering organizations: some groups were fully automated; others had individual process steps automated but not the overall process; and still others were not automated at all. The net effect was that IT assets were not performing as effectively as they could, and many engineers were expending more time and resources than they should to obtain the information and data they needed. Specifics on these findings are provided in the following paragraphs.
The inventory recorded 267 systems and other hardware. This number excludes enterprise work process systems, basic personal productivity systems (MS Office, WordPerfect, Access, etc.), and plant control systems. Included are the analysis tools, graphics packages, scheduling tools, equipment databases, image and web editing and authoring tools, and data collection tools used by engineers. The team was confident this number reflected at least 90% of what was in use. The investment in dollars and effort could not be fully determined: not all numbers were known, and not all groups were willing or able to report all costs. However, with about 30% of the inventoried systems reporting this data, it was found that approximately $1,650,000 had been spent to purchase these systems, with an additional five person-years (during the last two years) expended on development. Additionally, $290,000 was spent annually on license or maintenance fees, and 10 full-time-equivalent engineers (FTEs) were expended maintaining these systems. Finally, approximately 10 additional FTEs were expended assisting other engineers in the use of these tools. For political reasons, there were significant exclusions from these figures, including 45 FTEs and $335,000 in annual licensing cost supporting plant control IT. The team was confident that purchase and support costs and efforts would at least double if all the information were available. For perspective, these numbers were unexpected and were considered by management to be extremely excessive, although Panko (1988) found in the 1980s that 25-40% of IT expenditures were on EUC, and Benjamin (1982) expected EUC (the adoption and use of IT by personnel outside of the IS organization to develop systems supporting organizational tasks) to account for 75% of the IT budget by 1990.
It was noted previously that at least 20 FTEs were expended supporting other engineers, learning to use IT, and maintaining IT, and that approximately five person-years were expended (over the last two years) supporting system development. Doubling these values (per the team's estimate) gives 45 FTEs/year for items one and three, support and development. The second item, ad hoc reporting, was found to take approximately 5% of each engineer's time; taken as a whole, this is a fairly extensive activity, approximately 21 FTEs yearly. Combining these efforts and excluding assets dedicated to plant IT support (50 FTEs), approximately 66 FTEs/year (16%) were spent on end-user IT functions; see Table 1 for a summary of these resources. (Note that the 16% figure does not include time spent using enterprise work process systems or standard office personal productivity systems for routine work activities.) This was considered excessive and, if eliminated, could almost provide the necessary manpower reduction by itself. Rockart and Flannery (1983) found that 85% of EUC was focused on report generation, inquiry, and analysis. The assessment did not find that level of reporting, instead finding a little over 50%; however, even at this level the ability to do ad hoc reporting was considered a tremendous strength, and the team did not see the need for ad hoc reporting decreasing. However, several issues caused the time needed for this activity to be greater than it needed to be. Chief among these were the lack of standard query/reporting tools, of advanced training in the use of the available tools, of a central repository for queries (with the result that many queries were written over and over), and of integration of the site databases (resulting in more complex and time-consuming query/report generation). Interviews recorded numerous complaints of end users not knowing where data was located.
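The arithmetic behind these figures can be reconstructed from the numbers reported above. The sketch below is illustrative only; treating the 5% ad hoc reporting share as applying to the non-plant-IT engineering headcount is an inference (made because it reproduces the reported 16%), not a statement from the case.

```python
# Reconstructing the assessment team's effort estimate.
# All inputs come from the case text; the headcount split is an inference.

engineers = 460          # engineers at the time of the study
plant_it_ftes = 50       # dedicated plant IT support, excluded from totals

support_ftes = 20        # supporting others, learning IT, maintaining IT
dev_ftes = 5 / 2         # five person-years spread over two years

# The team estimated that the known figures captured only half the effort.
underreporting_factor = 2
items_1_and_3 = (support_ftes + dev_ftes) * underreporting_factor  # 45

# 5% of each engineer's time, applied to the non-plant-IT headcount.
ad_hoc_ftes = 0.05 * (engineers - plant_it_ftes)  # ~21 FTEs/year

total_ftes = (support_ftes * underreporting_factor
              + dev_ftes * underreporting_factor
              + ad_hoc_ftes)
share = total_ftes / (engineers - plant_it_ftes)

print(f"items one and three: {items_1_and_3:.0f} FTEs/year")
print(f"ad hoc reporting:    {ad_hoc_ftes:.0f} FTEs/year")
print(f"total:               {total_ftes:.0f} FTEs/year ({share:.0%})")
```

The reconstruction matches the text: 45 FTEs/year for support and development combined, roughly 21 FTEs/year for ad hoc reporting, and a total of about 66 FTEs/year, or 16% of the non-plant-IT engineering effort.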
Engineers who spent significant time assisting with ad hoc reports and queries stated that their time went to helping with SQL and to finding out where data was kept. Additionally, there was no process for tracking end-user reports to determine whether they were used in sufficient quantity to warrant inclusion in the enterprise system. The team did not consider this very important, but the interviews suggested that several organizations were producing the same or similar reports. Discussions with NIS and end-user managers found no awareness of what reports and queries were being run, although both groups expressed interest in making repeatedly run reports and queries part of the formal system. This points to the key issue of NIS focusing on the enterprise level and allowing end users to go their own way. This case is an example of more effort than necessary being expended on ad hoc reporting because the enterprise database structure was not available to the end users and no effort was being made to monitor end-user usage for common reports and queries. Dodson (1995) found this to be a common problem when IS focuses solely on the organizational systems. What makes this issue more significant is the ability to generalize the average of 5% of time spent on ad hoc reports to other organizations. This was considered excessive by the engineering organization's management and would probably be considered excessive in many organizations once it was quantified.

Table 1: Summary of Engineering Time Spent Supporting IT

  Item                                        FTEs/year
  Support, Maintenance, Learning to Use IT    40
  Ad Hoc Report Support                       21
  System Development                           5
  Total                                       66

Perhaps the most interesting observation during the study was the generally held opinion that the ability to do ad hoc reports was a great strength. While this is an indication of system flexibility and end-user ability, it did not occur to anyone that large numbers of ad hoc reports and queries could also be a negative indicator. To address this, the organization is considering publishing a data road map. Grindley (1995) predicted that, by 1998, 80% of all system development would be done by end users or their consultants. While this case did not find so high a percentage of system development being done by engineering, it did find that the ability of engineers to develop new systems for addressing specific engineering problems was considered both a strength and a need by the engineering organizations. The team agreed that this function would continue to require engineering involvement. However, it is the function least understood by engineering with respect to cost and process. Engineers followed minimal processes and considered the Capability Maturity Model (CMM) processes followed by IS to be a waste of time and money (NIS is a CMM Level 2 shop). The engineers justified the need for engineering to provide its own IT support with reasons that combine into three main issues. First, engineering systems are generally not supported by IS, so expertise to assist engineers with these systems exists only in engineering. Second, a lack of standardization means multiple products support the same function, making central support prohibitively expensive: experts would be needed for over 200 systems and devices that in many cases are used by only a few people. Third, there was an overall poor relationship between engineers and IS.
Standards should specify the documents that must be produced for all system development projects; the generation and promulgation of standard document templates that can be tailored to the size and complexity of the project facilitates this. Finally, end users need to be trained to use these tools and processes to produce systems. The two examples were of large systems, but these issues also apply to smaller, personal productivity systems. Miric (1999) warned that the lack of programming standards and planning leads to large numbers of errors in end-user-created spreadsheets. KPMG Management Consulting studied end-user-created spreadsheets in its client organizations and found that 95% of the spreadsheets contained models with significant errors and 59% had poor model design (Miric, 1999). To prevent these errors, Miric (1999) suggests that spreadsheet development be treated no differently than system development and that users be trained to use organizational programming standards, determine and document system requirements, perform testing, and use automated tools when available. The literature also suggests that end-user groups such as engineering will be more problematic with respect to end-user-developed systems. Munkvold (2002) found that high computer self-efficacy among end users, coupled with a low regard for IS, leads to end-user system development. Wagner (2000) investigated the use of end users as expert system developers and found that end users have significant domain knowledge but have difficulty knowing and expressing what they know, making their contributions limited in content, quality, size, and scalability. Taylor, Moynihan, and Wood-Harper (1998) agreed that end users do not produce good systems and identified duplication of effort, low quality, and lack of training in system development methodology as issues.
Note that low quality here means a lack of documentation, standard development practices, and/or programming standards. Additionally, McBride (2002) found that imposing system development methodology on end users may be regarded as an attempt to impose IT culture and thus be rejected by the end users. Finally, Adelakun and Jennex (2002) found that end-user development failures to meet requirements, or to gather the appropriate requirements, can be caused by end users not identifying the appropriate stakeholders for project involvement and for assessing the success of the developed systems.
Lack of Documentation
Previous discussion of the literature identified a documentation vulnerability in end-user-developed systems, and virtually all of the systems developed or purchased by engineering were found to have minimal to no design documentation. This is potentially a large problem: a great deal of memory and knowledge is captured in these systems but is not available to system maintenance personnel. There is also a great deal of knowledge about why things are done a certain way built into macros, programs, reports, databases, and models that is not captured in a retrievable manner. As engineering undergoes change, much of this knowledge is likely to be lost, since engineering's current knowledge management practices assume a static workforce and do little to capture knowledge that exists in the heads of its members. An example was a system developed to model the fire protection system. The system is used to evaluate potential work activities to determine impact on the fire protection
system and to determine what compensatory measures need to be taken to ensure the fire protection system will still function when portions of it are taken out of service. The system was designed, built, maintained, and supported by the fire protection engineer, and no documentation was generated. The concern is what happens if this engineer leaves, as a replacement would have nothing to learn from. The organization has grown to rely on this system, and its loss would severely impact the organization. Another example was a local leak rate system that was found abandoned. This system had been developed to automatically calculate penetration leakages and to determine the plant's overall local leak rate in accordance with federal regulations. When the engineer who built the system left, the incoming engineer had no documentation from which to learn how to use or maintain the system, so he abandoned it and performed the needed functions using hand calculations, where the potential for error is quite high. These were not isolated cases. Numerous examples were found of special reports, databases, spreadsheets, and systems that were built to satisfy specific needs but are not documented. All rely on the engineers using them to maintain and enhance them, and all would be lost should the engineer leave and the report, database, spreadsheet, or system fail or need modification. What makes these issues significant to this and other organizations is the potential for inaccurate data and incorrect decision-making. As processes change, the systems supporting those processes must be modified. Without documentation or system models to guide developers as to why a system is the way it is, it is easy for a developer to make wrong assumptions that result in the incorrect modification of key calculations or algorithms, causing the system to provide inaccurate data and results.
This is of particular concern for this case, as the subject organization operated a nuclear site and was subject to a great many regulatory required reports and data whose inaccurate generation could result in the site being shut down.
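Neither of the failures above required heavy process to avoid; even lightweight documentation embedded in the artifact itself would have preserved the knowledge. The sketch below is purely illustrative: the function name, the simple summation model, and the limit parameter are assumptions for the example, not the plant's actual regulatory calculation.

```python
def overall_local_leak_rate(penetration_rates, limit):
    """Sum individual penetration leak rates and compare to a limit.

    Illustrative only: the real regulatory calculation is more involved.
    Recording the inputs, the model, and why the limit applies is exactly
    the knowledge the abandoned leak rate system failed to capture.

    penetration_rates: mapping of penetration ID -> measured leak rate
    limit: maximum allowable overall leak rate (same units as the rates)
    Returns (total, within_limit).
    """
    total = sum(penetration_rates.values())
    return total, total <= limit

# Hypothetical data for demonstration.
rates = {"P-101": 0.8, "P-102": 1.1, "P-205": 0.3}
total, ok = overall_local_leak_rate(rates, limit=3.0)
print(total, ok)
```

Even this small amount of embedded explanation would have let the incoming engineer verify and keep the system rather than fall back to error-prone hand calculations.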
Obsolescence and procurement standards have been recognized as issues for IS planning. However, quite unexpectedly, the team found many plant and engineering digital systems approaching end of life that needed to be replaced or updated. The team found systems running on Windows 3.1 and DOS, as well as systems using 8" and 5 1/4" floppy drive technology. Expertise and hardware for maintaining these systems are disappearing. Problems arise as replacements are investigated for these systems and as new equipment and software are purchased to solve new problems. The NIS infrastructure was standardized on proven technology and was not leading edge, while products bought on the open market tend to be leading or even bleeding edge. This results in some new products being unable to function within the NIS environment, requiring engineers to purchase equipment of an older standard. However, it is not good practice to develop replacement systems for this older infrastructure; instead, developers need to anticipate where the infrastructure is going and design for that. The issue is that NIS needs to create a process for assessing the incorporation of leading-edge solutions needed by engineering into the NIS infrastructure while maintaining the reliability and coherence of that infrastructure. Additionally, procurement standards and processes need to be created for engineering to use and follow in the procurement of
replacement systems and components. Another side of this issue is that the lack of documentation for these systems makes selecting and purchasing, or developing, replacements difficult, as requirements are not documented and available for use in specifying the needs of the replacement systems. Another example of procurement standards affecting the infrastructure is the rapidly growing use of digital cameras and digital images. Their use has had a very positive impact on productivity. However, due to a lack of procurement standards, many varieties of equipment, software, and formats were obtained and implemented, and the extra resources needed to support these multiple versions offset much of the productivity gains. Additionally, clogged networks (caused by widespread e-mailing of images), dealing with different formats, and incorporating images into processes not designed to handle them reduced these gains even further. A final example is the use of web design and management tools. No procurement standards governing the purchase of web tools exist, and organizations have purchased whatever they wanted, making it difficult for NIS to support the tools or to maintain sites created with non-IS-endorsed tools. Additionally, intranet-based systems have failed to radically improve productivity, because a lack of standard design practices and interfaces has resulted in many sites and systems with marginal usability and/or purpose.
The team observed that the demarcation between the business systems maintained by NIS and the plant systems maintained by engineering was blurring. Plant information flows across the business network on a routine basis. Plant processes have been developed that rely on e-mail and the business network to transmit data. Plant support productivity has improved by using the business networks to access and maintain plant systems. The key issue is to recognize that the boundary for protecting plant information now extends to the intranet firewalls. NIS and site management need to work together to create a security plan that recognizes this reality and allows for the creation of standards and processes ensuring that systems developed by end users support the security plan. An example of this issue was one engineering group's use of the business network to access plant equipment from remote locations such as their homes. This greatly increased productivity and reduced overtime costs but failed to take security needs into account. When interviewed about security processes for ensuring proper access and user authorization, the group's manager stated that business network login procedures were all that was necessary, as he trusted his people to use the remote access process properly when modifying plant equipment. Another example was found in end-user system development. While troubleshooting the previously discussed flow tracking system, the author was able to connect to a plant database that had been given to the vendor for testing purposes. The database had been placed on a publicly accessible server that the author was able to reach using America Online, raising the issue of possible inadvertent disclosure of restricted data by end users and/or their vendors during system development.
CURRENT CHALLENGES/PROBLEMS FACING THE ORGANIZATION
The greatest challenge faced by the organization is learning to manage EUC within its management of traditional IS. McLean and Kappelman (1992) found that EUC has become an extension of corporate computing and suggest that EUC be managed as a shared partnership of responsibility and authority. This organization has a schism between NIS and engineering that has resulted in engineering avoiding NIS control and viewing any attempt by NIS to form a partnership with suspicion. Munkvold (2002) found that this is most likely to occur with a group that has computer expertise and a low regard for IS, as the assessment found to be the case with engineering. McBride (2002) predicts this schism will be a hard issue to resolve, as any attempt by NIS to enforce conformance with NIS standards will be perceived by the engineers as an attempt to impose IS culture and process on engineering. However, research suggests this must be done. Rittenberg and Senn (1993) recognize that many users do not appreciate the risk involved in end-user development and that this knowledge resides in IS; they suggest implementing policies to govern EUC that include standards for procurement, documentation, and testing. They also state that while user groups are suggested as a means of partnering IS and end users, such groups are ineffective unless there is strong leadership, a willingness to partner, and resources allocated to support them within both the IS and end-user organizations. Jenne (1996) also supports creating a policy for managing end-user development and suggests it would be more effective if end-user-developed systems were grouped into risk categories, with IS development standards applied based on the risk category. Amoroso (1988) supports the use of end-user policies to manage end users but suggests control must remain with the end-user organization and not IS.
Cheney, Mann, and Amoroso (1986) found that corporate policies controlling EUC were necessary to increase the likelihood of EUC success.
REFERENCES

Beheshti, H.M., & Bures, A.L. (2000). Information technology's critical role in corporate downsizing. Industrial Management and Data Systems, 100(1), 31-35.
Benjamin, R.I. (1982). Information technology in the 1990s: A long range planning scenario. MIS Quarterly, 6(2), 11-31.
Cale Jr., E.G. (1994). Quality issues for end-user developed software. Journal of Systems Management, 45(1), 36-39.
Cheney, P.H., Mann, R.I., & Amoroso, D.L. (1986). Organizational factors affecting the success of end-user computing. Journal of Management Information Systems, 3(1), 65-80.
Davis, G.B. (1982). Caution: User developed systems can be dangerous to your organizations. MISRC Working Paper #82-04, University of Minnesota.
Dodd, J.L., & Carr, H.H. (1994). Systems development led by end-users. Journal of Systems Management, 45(8), 34-44.
Dodson, W. (1995). Harnessing end-user computing within the enterprise. Online: http://www.theic.com/dodson.html
Edberg, D.T., & Bowman, B.J. (1996). User-developed applications: An empirical study of application quality and developer productivity. Journal of Management Information Systems, 13(1), 167-185.
Grindley, K. (1995). Managing IT at Board Level. FT Pitman Publishing.
Jenne, S.E. (1996). Audits of end-user computing. The Internal Auditor, 53(6), 30-34.
Jennex, M., Franz, P., Duong, M., Haverkamp, R., Beveridge, R., Barney, D., Redmond, J., Pentecost, L., Gisi, J., Walderhaug, J., Sieg, R., & Chang, R. (2000). Project report: Assessment of IT usage in the engineering organizations. Unpublished corporate report.
McBride, N. (2002). Towards user-oriented control of end-user computing in large organizations. Journal of End User Computing, 14(1), 33-41.
McGill, T. (2002). User-developed applications: Can users assess quality? Journal of End User Computing, 14(3), 1-15.
McLean, E.R. (1979). End users as application developers. MIS Quarterly, 3(4), 37-46.
McLean, E.R., & Kappelman, L.A. (1992). The convergence of organizational and end-user computing. Journal of Management Information Systems, 9(3), 145-155.
Miric, A. (1999). The hidden risks of spreadsheets and end user computing. KPMG Virtual Library. http://www.itweb.co.za/office/kpmg/9908100916.htm
Munkvold, R. (2002). End user computing support choices. Proceedings of the 2002 Information Resource Management Association Global Conference (pp. 531-535). Hershey, PA: Idea Group Publishing.
O'Donnell, D.J., & March, S.T. (1987). End-user computing environments: Finding a balance between productivity and control. Information and Management, 13(2), 77-84.
Palvia, P. (1991). On end-user computing productivity: Results of controlled experiments. Information and Management, 21(4), 217-224.
Panko, R. (1988). End User Computing: Management, Applications, and Technology. John Wiley & Sons.
Rittenberg, L.E., & Senn, A. (1993). End-user computing. The Internal Auditor, 50(1), 35-42.
Rockart, J., & Flannery, L. (1983). The management of end-user computing. Communications of the ACM, 26(10), 776-784.
Sumner, M., & Klepper, R. (1987). Information systems strategy and end-user application development. Data Base, 18(4), 18-30.
Taylor, M.J., Moynihan, E.Y., & Wood-Harper, A.T. (1998). End-user computing and information system methodologies. Information Systems Journal, 8, 85-96.
Wagner, C. (2000). End users as expert system developers? Journal of End User Computing, 12(3), 3-13.
Wetherbe, J.C., Vitalari, N.P., & Milner, A. (1994). Key trends in systems development in Europe and North America. Journal of Global Information Management, 2(2), 5-18.
ORGANIZATIONAL BACKGROUND

The Public Good Character of Information
Information is usually considered to be a public good (Braunstein, 1981; Spence, 1974). Certainly, information that falls outside the already established category of intellectual property is a public good: a shared resource that is enriched rather than diminished by policies that increase everyone's access to it (Ebbinghouse, 2004). A pure public good has two major characteristics: nonrivalrous consumption and nonexcludability (West, 2000). Nonrivalrous consumption means that the good or service is not diminished or depleted when consumed; if information is shared between two people, it is not diminished thereby, and both can have full use and benefit of it. Nonexcludability means that consumption of the good or service cannot be denied to people who have not paid for it. This inability to exclude non-payers is an important problem for vendors wishing to market information, especially in an online environment where it can easily be downloaded and further distributed, processed, or reused. Information may have different value to the different people with whom it is shared: some consumers may be prepared to pay significantly more for a specific piece of information than others, and the time at which it is delivered can be an important factor in determining its value to a particular consumer. The manner in which information is provided plays a significant role in determining its characteristics as a public good. Printed information in book or journal form has physical characteristics that enable it to be priced and marketed as an artifact, irrespective of the informational value of its contents to different consumers. The implication is that, because it is difficult for a vendor of information to be reimbursed for the development and provision of the good or service, or to control its subsequent distribution, there is a reduced incentive to invest in creating the good.
So, while there may be a demand for the information, no seller will offer it. Sometimes, public good providers create modified or less-efficient markets to generate the revenue that pays for the public good. Advertising revenue can be used to pay for public TV, Internet portals, search engines and other information products (West, 2000).
The purchase of a book or journal requires the generation of data that is used not only for the order process but also forms the basis for subsequent catalogue entries. That data is structured according to author, title, publisher, and other components, any of which might be required as the sort key in the generation of a report or for database search purposes. Collaborative data entry into a centralized networked database enables libraries to speed their order processes, make their catalogue entries more accurate, and make more informed purchase decisions based on the holdings of other libraries in the region. The same networked database could also form the central resource for cooperative catalogue development and the establishment of a regional or national union catalogue of library holdings, facilitating interlibrary loans and promoting networked communications among participant libraries. The development of a centralized, networked database of the catalogue holdings of the major libraries in a region or country would provide operational efficiencies for participant organizations in the same way that banks, insurance companies, and motor manufacturers promoted efficiencies through collaborative database development. Many of the problems associated with collaborative database development were also the same, especially those relating to systems integration, data and content ownership, catalogue content management, and the determination of tariffs for database use combined with incentives for participants to contribute their data (Rayport & Jaworski, 2002, pp. 366-372). The reason for this is that, in collaborative databases, several information producers combine to generate the data for the online server/vendor to supply to the information consumer.
These three players – producers, vendors and consumers — in the online information services market can have conflicting interests, especially when the information consumers are also the information providers and they establish the server/vendor for that purpose.
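The cooperative-cataloguing mechanics described above can be sketched minimally in code: producers contribute structured bibliographic records to a shared database, and the union catalogue tracks which libraries hold each title. The record fields and function names below are illustrative simplifications, not the SAMARC/MARC formats discussed later.

```python
# Minimal sketch of a shared union catalogue: several libraries contribute
# bibliographic records keyed by author and title; holdings record which
# libraries own each title, supporting interlibrary loans and purchase
# decisions based on regional holdings.
from collections import defaultdict

catalogue = {}               # (author, title) -> bibliographic record
holdings = defaultdict(set)  # (author, title) -> contributing libraries

def contribute(library, record):
    """Add one library's record; the first contributor sets the entry."""
    key = (record["author"], record["title"])
    catalogue.setdefault(key, record)
    holdings[key].add(library)

def search_by_author(author):
    """Return (title, libraries) pairs for one author, sorted by title."""
    return sorted((title, sorted(libs))
                  for (auth, title), libs in holdings.items()
                  if auth == author)

# Hypothetical records for demonstration.
contribute("State Library", {"author": "Smith, J.", "title": "Cataloguing",
                             "publisher": "Acme"})
contribute("University A",  {"author": "Smith, J.", "title": "Cataloguing",
                             "publisher": "Acme"})
print(search_by_author("Smith, J."))
```

Even this toy version shows the efficiency argument: the second library contributes only its holding, reusing the first library's catalogue entry rather than re-keying it.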
Nevertheless, the competitive aspects of IOSs also need consideration. IOSs affect competition in three ways: they change the structure of the industry and alter the rules of competition; they create competitive advantages; and they can create whole new businesses (Siau, 2003). Furthermore, academic administrators and politicians have turned to this form of collaboration as a means of accomplishing their goals (Siau, 2003), which are very often competitive. The origins of many of today's e-commerce business models can be found in those business innovations that recognized the value of networked computers for obtaining greater efficiencies and promoting collaborative contributions. Many of the problems and issues that proved intractable then remain problems in contemporary businesses, and solutions found in those earlier models can often lead to insights and solutions in today's businesses.
in the Memorandum of Agreement. There was some tentative enthusiasm about the direction in which library cooperation was going with the establishment of the fledgling company. Besides the State Library, the original members included universities, national and regional government departments, and municipalities that owned and ran the libraries intended to benefit from the collaborative venture. The subscribing libraries hoped that their expenditure on subscriptions and purchases would be offset by reduced numbers of acquisition and cataloguing staff, more efficient interlibrary loan transactions (with associated staff efficiencies), and more efficient information retrieval by consultant or reference staff. In addition, it was hoped that the pressure on libraries to automate their management processes would be offset or delayed by participation in the LIBNET initiative. If nothing else, participation was a learning process that would facilitate members' subsequent decision making about library computerization, a way of putting their toes into the water without the pain of sudden and total immersion. And, of course, there were incentives to load their cataloguing data that would offset the cost of establishing their own computerized catalogues. The original subscription commitment for 10 years made the member libraries shareholders in the company with the right to elect the Board of Directors at an Annual General Meeting of LIBNET. This was usually held, for convenience's sake, at the annual conference of the South African Institute for Librarianship and Information Science. However, not all the directors were subject to election at the Annual General Meeting. Because of the State Library's important statutory responsibilities, its Director was a permanent member of the Board of Directors. In addition, the interests of the government were represented on the Board of Directors by a person nominated by the Minister of National Education.
The Managing Director of LIBNET was ex officio a member of the Board, while the members at the Annual General Meeting elected the rest of the directors. Largely they were drawn from the academic library sector, although an important component was the presence of senior businessmen who could bring their business and entrepreneurial acumen and expertise to bear on decisions before the Board.
SETTING THE STAGE
Internally, the staff of LIBNET consisted of two main groupings — those responsible for developing the technological base of LIBNET’s operations and providing the networked services, and those responsible for marketing the products and services as well as growing the clientele and membership numbers. This latter activity had a very positive side effect in that it made the LIBNET staff very much part of the professional library community in South Africa, and in fact, helped to create a sense of unity and camaraderie among the members. This sense of participative ownership was essential to promote a constant flow of new cataloguing data of the highest possible quality being added to the database by all the subscribing libraries. It also provided LIBNET with the means to refine the services required by the members.
switching as well as SNA and TCP/IP for dedicated data lines and point-to-point asynchronous communication with DEC VT-100 or compatible terminals. Any other data communications requirements were investigated to ensure as wide a range of access options for subscribers as possible (Laing, 1997, p. 54). From the beginning the intention was to develop its own system using South African Machine Readable Cataloguing (SAMARC) records. The SAMARC format was based on the original MARC format developed by Henriette Avram and her team at the Library of Congress in the 1960s and 1970s. It included some modifications made in the British UKMARC format as well as some idiosyncratic requirements to suit the multilingual South African situation. A great deal of time and effort went into developing the specifications and writing the software for LIBNET’s new, home-grown system until it was realized that it would not be as cost effective as originally intended. “Build or buy” decisions were very controversial at that stage, especially as the Open Systems Movement gained ground within the computer systems industry worldwide. Finally, in 1992, LIBNET rolled out its new system amidst some controversy. It was a turnkey system based on a library software package popular with university libraries in South Africa at the time. Running on a UNIX operating system, and therefore in line with current open-systems thinking, the system was accessible via X.28 dial-up facilities, via Telkom’s dedicated data lines with X.25 packet switching, or with the TCP/IP communications protocol on GOVNET, UNINET and other networks available in South Africa. The databases available were originally based on the Washington Library Network database, supplemented by the Joint Catalogue of Monographs developed by the State Library from records sent to them by the major libraries in the country, particularly the main university and public libraries and the legal deposit libraries.
With the implementation of its new system in 1992, LIBNET made available to its members a new “bouquet” of databases. These included the South African Cooperative Library Database (SACD), the Index to South African Periodicals (ISAP), the Library of Congress Database (otherwise known as the National Union Catalog (NUC)), the British National Bibliography (BNB) and Whitaker’s Books in Print. The Union Catalogue of Theses and Dissertations developed by Potchefstroom University and the South African National Bibliography were also available on the LIBNET database.
Network for its database. If LIBNET was deriving a benefit from the contribution of libraries such as those directed by Jan and Sid, surely there could be some reconsideration of the tariff determination by LIBNET? The two-hour flight back to Cape Town after that Board meeting in Pretoria had done little to ease the minds of Jan and Sid. Who were the information providers? Who owned the database? Who was paying for what services and should their institutions not be getting some form of royalty or compensation for the added contribution they were making? In addition, how could a tariff policy be set that allowed libraries to reduce budgetary uncertainty ahead of time when the tariff policy was based on an annual subscription as well as connect-time pricing and/or per-record charges? How could all this be done without damaging LIBNET and its financial viability, or were there other strategic options available to them? The Boeing 737 wove its way through tall towers of cumulus, like huge, airborne cities. Through the late afternoon sky, the sun’s rays struck the white bulbous towers from the west, creating deep shadows and slowly redecorating the thick, puffy clouds in gold and pink as the land below melted into shadow. Not much chance of the data ownership and tariff problems melting in the same way. Although it was a privilege to be able to see the world from a perspective that few creatures in all of Earth’s history had ever seen, one way or another, that privilege would have to be paid for.
consider that they should be getting some return on their investment, some compensation for the initiative they had taken to ensure that the national online catalogue would be completed faster to everyone’s benefit. What is in it for us and them?” “That’s it. OK. Let’s look at the major issues.” Sid pulled out a document with his scribbled notes. That morning he had jotted several points down before driving over to Jan’s library. These were the principles that he thought should drive any decisions about equity in this situation. “Let’s take them one at a time as defendable issues or issues that may be of political or strategic importance to individual members of the Board.” He read them out.
1. Each library owns the records that it has generated through its own creative efforts. Each library also owns the records it has generated through paying for the data conversion process.
2. The State Library owns all records that it has contributed to the LIBNET database, even though many of those records were contributed by participating libraries sending in copies of their catalogue cards, or by legal deposit libraries contributing their acquisitions lists to the State Library.
3. Similarly, the Library of Congress owns the National Union Catalog that is accessible through LIBNET, and the British Library owns the British National Bibliography, also accessible through LIBNET.
4. LIBNET owns all databases that it has developed from the contributions made to it by subscribing libraries, or for which it has paid in agreement with the owners of other bibliographies.
5. Individual libraries have the right to decide whether or not to upload bibliographic data to LIBNET. Once uploaded, the records belong to LIBNET in terms of the subscription agreement, but they also belong to the originating library and may not be sold by LIBNET without permission or recompense.
6. Having paid for their subscriptions and the downloading of their own bibliographic records from LIBNET, individual libraries are entitled to establish their own domestic bibliographic database, but may not sell records downloaded from LIBNET. Any library is free to sell, or otherwise make available, its own bibliographic records to any other individual or library, as it sees fit.
“So what about a regional initiative?” asked Jan. “There is the financial incentive that flows from the possibility of overseas funding by benefactors. That could make a big difference to our situation. It will make good political sense too. We will be seen to be driving academic regional cooperation where many other initiatives have failed. Also, a smaller cooperative network would be easier to handle, would give more direct administrative control and would build on what we have already done through our earlier discussions and through LIBNET.” He looked questioningly at Sid. “Well, if LIBNET is unable to come up with some new and revised tariff structure that recognizes the contribution we are making, it could make sense for us to slowly withdraw our dependence on LIBNET and build a closer dependence on a regional cooperative structure. After all, we only have a year or so to go before our legal obligations to LIBNET end. As far as I am concerned, the financial constraints being placed on us by our respective administrations are such that all our options need to be considered. I am under a lot of pressure to reduce staff, improve services, increase book stock and take a lead in collaboration and regional rationalization.” Sid looked pensive. It sounded like treason. After all the years of working on the promotion of LIBNET and of national and regional library cooperation, to suddenly turn to a narrow and self-serving strategic direction went against the grain. But he had to consider all the options that were open. Jan looked at his watch. “Hey! It’s lunchtime,” he announced with a smile. “I have something good for you!” he said, thinking of the grilled yellowtail and the pinotage. They gathered up their papers and with a sense of relief found their way out of the library and across the square to the university’s staff dining room. Based on the discussion over lunch, Jan undertook to prepare a series of proposals for the next meeting of the LIBNET Board. 
He asked Sid to write up a memorandum on the basis of the day’s discussions and forward it to him as soon as possible. He would then consult with his deputies before submitting his final proposals to the LIBNET Board. The range and implications of their morning discussions ensured an ongoing debate over lunch and dominated their thoughts for the rest of that afternoon. On his way home, and while the issues were still fresh in his mind, Sid dictated a memorandum for typing the next day, so that it could be sent to Jan as a file attachment with the least possible delay. Both Jan and Sid felt they had had a very productive meeting — yet one that could have enormous repercussions for both the future of LIBNET, and also for academic library cooperation. It would probably have some important implications for the principles on which database development is funded and the way that the contribution of members is rewarded.
CURRENT CHALLENGES & PROBLEMS FACING THE ORGANIZATION
The question of database ownership and the provision of incentives and rewards to the information providers were reflective of the changing needs of LIBNET members. Originally, they had required a networked union catalogue of the holdings of the nation’s major research libraries. The cost of such a facility was beyond the budget of any one
of the libraries, so they were happy to participate in and contribute to the development of this national utility. After 10 years, however, three things had happened to change the perspective of the members:
1. LIBNET had successfully established the basis of that national catalogue. It had a modern networked system, a staff that nurtured high standards of contribution to the database, and simultaneously promoted the company’s services nationwide.
2. The technology had changed, making online, real-time network access to a regional library system both affordable and desirable.
3. The political and economic climate of the country had changed. South Africa was no longer under siege. The academic boycotts, political and economic sanctions, and civil unrest that spread to the campuses of the major LIBNET members were over. Nelson Mandela was out of gaol. The first democratic elections had been held, and around the world, aid agencies and benefactors were keen to contribute to and participate in the remarkable emergence of what Archbishop Desmond Tutu had referred to as “the Rainbow Nation.” What better contribution could be made than to democratize the nation’s information resources by networking the major libraries on a regional basis so that their collections were accessible to all the citizens, whether or not they were students or staff?
With this option before them, it was understandable that Jan and Sid and their colleagues on the LIBNET Board of Directors started giving attention to who owned the LIBNET database. Their cash-strapped universities saw many benefits flowing from the injection of funds by overseas benefactors. Not only were there savings to be made from cooperative library acquisitions policies and cataloguing activities, but a unified circulation system and a reduction in duplicate journal subscriptions would be facilitated by a regional cooperative system. LIBNET was therefore under pressure to deliver at the very time that its initial “pump-priming” government funding and the 10-year membership commitments of its founder institutions were coming to an end. The resolution of the tariff and incentives structure became the catalyst that led on to the restructuring of the entire LIBNET business. On January 1, 1997, LIBNET sold its operations to a private company to enable it to expand into the commercial online market. Today, the new privatized LIBNET thrives as the most successful online information network in Africa. But it is very different in structure from its original conception — largely as a result of the factors driving the decisions outlined above. The main reasons for “privatizing” LIBNET operations in the mid-’90s were:
• to bring in strategic partners through a shareholding structure,
• to create a vehicle for generating development capital, and
• to attract and keep the right type of staff members.
incentive scheme, is making a huge contribution to its success. The business orientation of staff has increased beyond belief. Clients are benefiting through the ongoing improvement of products and services, while costs are controlled in the same process, because it is in the interest of the staff to run a profitable and competitive shop. Although there had been discussions at times with potential strategic partners about buying a stake in the company, that idea was abandoned several years ago without selling off any shares to outside investors. The traditional character of the company was therefore maintained without diluting the value of the shares. The big challenge in doing business in the new South Africa is the Black Economic Empowerment and Affirmative Action legislation. The general line of thinking is that all businesses should have 25% black ownership by 2010. Government itself is more concerned about real empowerment, such as people development, skills transfer, enterprise development, corporate social upliftment, and so forth. The record shows that there are, unfortunately, a small number of black businessmen in the country who are pushing the shareholding idea without contributing anything to the growth of the economy. Partly for these reasons, LIBNET is still not listed on the Johannesburg Stock Exchange (JSE) and has no immediate plans to be. The corporate scandals of the last couple of years have also made applying for a JSE listing less attractive from a cost viewpoint. LIBNET doesn’t need capital at present and shareholders are not prepared to sell their shares, so a listing does not make any sense. Accordingly, the new LIBNET intends to grow its black shareholding through the current structures of staff and client shareholders. Initially, shares were sold to staff at R0,20 per share at the time of privatization. Those shares were subdivided in the ratio of 10:1 five years later. The latest official valuation of the shares is about R0,50 to R0,60 per share.
That is equal to 5 to 6 Rand per share before the subdivision. Functionally, LIBNET has developed into a company with a primary focus on the maintenance and development of its traditional services (cataloguing support, interlending and reference services). These are still growing slowly, but the real performers are the newer products, such as electronic journals and the legal products (government and provincial gazettes, parliamentary monitoring and statutes). This has helped to spread the risks far better than before. Structurally, LIBNET has divided into two separate business units (Academic/Library, Corporate) to reflect the separate focuses, and these are functioning well with totally different marketing and support approaches. The core strengths remain the same — excellent relationship with clients backed by high-quality client support, and an excellent team of people doing the job. The challenges facing the new LIBNET reflect the current economic development climate in contemporary South Africa. Apart from Black Economic Empowerment and Affirmative Action employment laws, these include doing international business with an unstable local currency, growing the markets and competing with larger international competitors.
REFERENCES
Bensaou, M., & Venkatraman, N. (1996). Interorganizational relationships and information technology: A conceptual synthesis and a research framework. European Journal of Information Systems, 5(2), 84-91.
Braunstein, Y. (1981). Information as a commodity. In R. M. Mason & J. E. Crebs, Jr. (Eds.), Information Services: Economics, Management and Technology (pp. 9-22). Boulder, CO: Westview.
Chwelos, P., Benbasat, I., & Dexter, A. S. (2001). Research report: Empirical test of an EDI adoption model. Information Systems Research, 12(3), 304-321.
Communications Industry Report (2000). 14th ed. New York: Veronis, Suhler and Associates (as cited by Jain & Kannan, op. cit., p. 1123).
Crook, C. W., & Kumar, R. L. (1998). Electronic data interchange: A multi-industry investigation using grounded theory. Information & Management, 34, 75-89.
Ebbinghouse, C. (2004). If at first you don’t succeed, stop!: Proposed legislation to set up new intellectual property rights in facts themselves. Searcher, 12(1), 8-17.
Fountain, J. E. (2001). Building the Virtual State: Information Technology and Institutional Change. Washington, DC: Brookings Institution Press.
Gorman, G. E., & Cullen, R. (2000). Models and opportunities for library cooperation in the Asian region. Library Management, 21(7), 373-384.
Hayes, R. M. (2003). Cooperative game theoretical models for decision-making in contexts of library cooperation. Library Trends, 51(3), 441-461.
Holmes, D. (2001). eGov: E-Business Strategies for Government. London: Nicholas Brealey.
Hooper, A. S. C. (1989). Formal cooperative agreements among libraries: Towards a South African model. South African Journal for Librarianship and Information Science, 57(2), 125-129.
Hooper, A. S. C. (1990). Overlapping journal subscriptions as a factor in university library co-operation. South African Journal for Librarianship and Information Science, 58(1), 25-32.
Iacovou, C. L., Benbasat, I., & Dexter, A. S. (1995).
Electronic data interchange and small organizations: Adoption and impact of technology. MIS Quarterly, 19(2), 465-485.
Jain, S., & Kannan, P. K. (2002). Pricing of information products on online servers: Issues, models and analysis. Management Science, 48(9), 1123-1142.
Johnston, R. B., & Gregor, S. (2000). A theory of industry-level activity for understanding the adoption of interorganizational systems. European Journal of Information Systems, 9, 243-251.
Kemp, G. (1996). Networking in South Africa. Bulletin of the American Society for Information Science, 22(6), 26-27.
Kopp, J. J. (1998). Library consortia and information technology: The past, the present, the promise. Information Technology and Libraries, 17(1), 7-12.
Laing, R. (1997). LIBNET (A Financial Mail Special Report). Financial Mail, 144(4), 53-64.
Maingot, M., & Quon, T. (2001). A survey of electronic data interchange (EDI) in the top public companies in Canada. INFOR, 39(3), 317-332.
Martey, A. K. (2002). Management issues in library networking: Focus on a pilot library networking project in Ghana. Library Management, 23(4/5), 239-251.
Morrell, M., & Ezingeard, J. N. (2001). Revisiting adoption factors of inter-organizational information systems in SMEs. Logistics Information Management, 15(1/2), 46-57.
Musiker, R. (1986). Companion to South African Libraries. Johannesburg: Ad Donker.
Patrick, R. J. (1972). Guidelines for Library Cooperation: Development of Academic Library Consortia. Santa Monica, CA: System Development Corporation.
Potter, W. G. (1997). Recent trends in statewide academic library consortia. Library Trends, 45, 417-418 (as cited by Kopp, 1998, op. cit.).
Premkumar, G., & Ramamurthy, K. (1995). The role of interorganizational and organizational factors on the decision mode for adoption of interorganizational systems. Decision Sciences, 26(3), 303-336.
Rayport, J. E., & Jaworski, B. J. (2002). Introduction to E-Commerce. New York: McGraw-Hill.
Siau, K. (2003). Interorganizational systems and competitive advantages: Lessons from history. The Journal of Computer Information Systems, 44(1), 33-39.
Spence, A. M. (1974). An economist’s view of information. Annual Review of Information Science and Technology. American Society for Information Science.
West, L. A. (2000). Private markets for public goods: Pricing strategies of online database vendors. Journal of Management Information Systems, 17(1), 59-85.
APPENDICES

1. The Companies Act, No 61 of 1973

STATUTES OF THE REPUBLIC OF SOUTH AFRICA – COMPANIES
Companies Act, No. 61 of 1973

21. Incorporation of associations not for gain

(1) Any association – (a) formed or to be formed for any lawful purpose; (b) having the main object of promoting religion, arts, sciences, education, charity, recreation, or any other cultural or social activity or communal or group interests; (c) which intends to apply its profits (if any) or other income in promoting its said main object; (d) which prohibits the payment of any dividend to its members; and (e) which complies with the requirements of this section in respect to its formation and registration, may be incorporated as a company limited by guarantee.

(2) The memorandum of such association shall comply with the requirements of this Act and shall, in addition, contain the following provisions: (a) The income and property of the association whencesoever derived shall be applied solely towards the promotion of its main object, and no portion thereof shall be paid or transferred, directly or indirectly, by way of dividend, bonus, or otherwise howsoever, to the members of the association or to its holding company or subsidiary: Provided that nothing herein contained shall prevent the payment in good faith of reasonable remuneration to any officer or servant of the association or to any member thereof in return for any services actually rendered to the association. [Para. (a) amended by s. 4 of Act No 59 of 1978.] (b) Upon its winding up, deregistration or dissolution, the assets of the association remaining after the satisfaction of all its liabilities shall be given or transferred to some other association or institution or associations or institutions having objects similar to its main object, to be determined by the members of the association at or before the time of its dissolution or, failing such determination, by the Court.

(3) The provisions of section 49 (1) (c) of this Act shall not apply to any such association. [Sub-s. (3) substituted by s. 3 of Act No 31 of 1986.]

(4) Existing associations incorporated under section 21 of the repealed Act shall be deemed to have been formed and incorporated under this section.
2. The LIBNET Tariff Structure

LIBNET TARIFF STRUCTURE: GENERAL INFORMATION

1. The existing tariff structure is based upon the following:
• an annual membership fee of R9720 incorporating 18000 enquiry equivalents (EEs);
• a tariff of R0.49 per EE for the use of the system after the first 18000 EEs (R0.43/EE after 300000 EEs);
• associated members do not pay any annual membership fee but they pay R0.70 per EE with a minimum usage of 200 EEs per month;
• members receive credit for data contribution; and
• all other products are priced separately.

2. Advantages/disadvantages of present system
Advantages
• Members pay in relation to their usage of the system;
• Members who train their staff properly normally use fewer EEs per search.

Disadvantages
• Because the variable cost (the actual cost of processing and retrieving the information) is only a small percentage of the EE tariff, the more frequent users are funding an unreasonably high portion of the LIBNET fixed costs. This aspect was also emphasized by LHA Management Consultants.
• Users do pay for the usage of the system even if they don’t retrieve any information;
• Members do not use LIBNET to its full potential because of the “cash register syndrome”: staff members are instructed to use LIBNET sparingly, and it is difficult to determine the effect of this behavior on the building of a cooperative database;
• The present system requires a complex accounting system;
• When users question their EE usage, it normally takes a lot of skilled staff members’ time to determine what the problem, if any, is;
• Users don’t know in advance what a specific search will cost them.
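The tiered pricing described in section 1 above lends itself to a worked example. The sketch below computes a member’s annual bill from the Rand figures quoted in the tariff structure; the function names are invented for illustration:

```python
def annual_cost_member(ees_used: int) -> float:
    """Annual cost in Rand for a full member under the present tariff:
    R9720 membership fee covering the first 18000 enquiry equivalents (EEs),
    then R0.49 per EE, dropping to R0.43 per EE beyond 300000 EEs."""
    cost = 9720.0
    if ees_used > 18000:
        cost += (min(ees_used, 300000) - 18000) * 0.49
    if ees_used > 300000:
        cost += (ees_used - 300000) * 0.43
    return cost

def annual_cost_associate(ees_used: int, months: int = 12) -> float:
    """Associate members pay no membership fee, but R0.70 per EE
    with a minimum usage of 200 EEs per month."""
    return max(ees_used, 200 * months) * 0.70

# A member using 20000 EEs pays R9720 + 2000 x R0.49 = R10700.
```

The jump in marginal cost from nothing (inside the 18000-EE allowance) to R0.49 per EE is what produces the “cash register syndrome” noted above: every extra search has a visible price.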
3. Financial Projections

Depending on certain strategic decisions to be taken by the LIBNET Board, the income required to cover the expenditure during the next financial year will be approximately R5 million. The following tariff adjustments (based on the present tariff structure) would be required to provide that income (present tariffs in parentheses):

Annual membership fee (service unit)  :  R27 000  (R9720)
EE tariff, GT 18 000                  :  R1,00    (R0,49)
EE tariff, GT 300 000                 :  R0,45    (R0,43)
Associate members, per EE             :  R2,00    (R0,70)
Most of the members will just cut back on their usage in order to stay within their budgets.
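A rough comparison makes the point. The sketch below prices a hypothetical member’s year under both sets of figures; the 100000-EE usage level is invented for illustration, while the amounts are those in the projection above:

```python
def bill(ees: int, membership: float, rate1: float, rate2: float) -> float:
    """Annual bill under a tiered EE tariff: the membership fee covers
    the first 18000 EEs, rate1 applies up to 300000 EEs, rate2 beyond."""
    cost = membership
    if ees > 18000:
        cost += (min(ees, 300000) - 18000) * rate1
    if ees > 300000:
        cost += (ees - 300000) * rate2
    return cost

present = bill(100000, 9720.0, 0.49, 0.43)    # R49 900
proposed = bill(100000, 27000.0, 1.00, 0.45)  # R109 000
```

The same usage costs more than twice as much under the proposed figures, so trimming EE usage is the obvious budgetary response.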
4. Alternatives

It is important to determine what the requirements of a fair and just tariff structure must be. Some thoughts on that are:
• A user must pay for the information received and not necessarily for the use of the system;
• The tariff structure must promote an increased use of the system;
• More frequent users must pay more than less frequent users;
• Less frequent users must pay more per transaction than more frequent users.
One very attractive alternative is to determine a fixed annual subscription fee with unlimited use of the system. The biggest problems, however, would be to find a formula to determine the annual subscription per member and to prevent abuse of such a system. Aspects that could be considered for inclusion in the formula are:
• the number of terminals per member;
• the size of the library, based upon the number of books and periodicals or the staff complement;
• the number of databases (SACD, LC, UK, UCTD, etc.) being accessed; and
• the systems functions (enquiry, cataloguing, etc.) being used.
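One way to combine these aspects is a weighted formula. The sketch below is purely illustrative — the document lists the factors but no values, so every constant here is a hypothetical placeholder, not a LIBNET figure:

```python
def annual_subscription(terminals: int, collection_size: int,
                        databases: int, functions: int) -> float:
    """Fixed annual subscription (Rand) built from the four listed factors.
    All weights are invented placeholders for illustration."""
    BASE = 5000.0             # hypothetical base fee
    PER_TERMINAL = 1500.0     # terminals approximate usage capacity
    PER_10000_ITEMS = 200.0   # collection size as a proxy for library size
    PER_DATABASE = 800.0      # SACD, LC, UK, UCTD, ...
    PER_FUNCTION = 600.0      # enquiry, cataloguing, ...
    return (BASE
            + terminals * PER_TERMINAL
            + (collection_size // 10000) * PER_10000_ITEMS
            + databases * PER_DATABASE
            + functions * PER_FUNCTION)

fee = annual_subscription(terminals=5, collection_size=250000,
                          databases=3, functions=2)
```

Whatever the weights, such a formula decouples the fee from actual searching, which is exactly what removes the disincentive to use the system.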
3. Revising the LIBNET Tariff Structure LIBNET — CONSIDERATIONS FOR A REVISED TARIFF STRUCTURE
(Extracts from a document prepared by the ad-hoc sub-committee on tariffs)
Introduction

At present tariffs are based entirely on usage and this, together with the GOVNET tariff structure, inhibits libraries from using LIBNET to its full potential. At the same time, those libraries that are heavily involved in and committed to LIBNET find themselves vulnerable to exploitation by libraries with a lesser commitment. Not only is their cataloguing data exploited, but the identification of their holdings makes them the target of a greater number of interlibrary loan requests than would otherwise be the case. At the same time, libraries with a small contribution to LIBNET are protected from interlibrary loan requests and are able to exploit the libraries that contribute heavily. The libraries that are directly or indirectly funded by the Department of National Education contribute 95% of LIBNET’s running costs. … This situation will not change significantly when the original Memorandum of Agreement that established LIBNET comes to an end in two years’ time. While libraries using LIBNET are conscious of cost, most of the cost savings or the benefit of using LIBNET are hidden.
4. LIBNET Organizational Structure

Table 1: LIBNET Organisational Structure
• Board of Directors
• Executive committee
• LIBNET Users’ Committee
• Marketing & member development
• Administration
• Operations & networked services
After many years as an academic library director, A.S.C. (Tony) Hooper now teaches Information Systems Management at the Victoria University of Wellington, where he is program director for the Master of Information Management degree.
Information System for a Volunteer Center: System Design for Not-for-Profit Organizations with Limited Resources Suresh Chalasani, University of Wisconsin, Parkside, USA Dirk Baldwin, University of Wisconsin, Parkside, USA Jayavel Souderpandian, University of Wisconsin, Parkside, USA
This case focuses on the development of information systems for not-for-profit volunteer-based organizations. Specifically, we discuss an information system project for the Volunteer Center of Racine (VCR). This case targets the analysis and design phase of the project using the Unified Modeling Language (UML) methodology, database modeling, and aspects of project management including scope and risk management. Students must decide how to proceed, including recommending an IT solution, managing risk, managing scope, projecting a schedule, and managing personnel. The rewards and special issues involved with systems for not-for-profit organizations will be revealed. This case can be used in a variety of courses, including systems analysis and design, database management systems, and project management.
Jeff McCoy, project lead of a four-person project team, was finishing requirements and project status documentation related to an information system for the Volunteer Center of Racine (VCR). Jeff, the information systems team, and the client needed to make some important decisions concerning the future of the project. Jeff needed to formulate his own opinion, but it was getting late. He had promised his fiancé that they would see a movie at the new cinema that night. Recently, his promises had gone unfulfilled. To this point, the VCR project had progressed smoothly. The focus of the project was the development of an application that helped the VCR place and track volunteers at various volunteer opportunities. The development team used the Unified Modeling Language (UML) to document the requirements of the system (Booch et al., 1999). A Gantt chart and a standardized project status report were used to record progress. The project status report contained fields to record the time, budget, people, process, and technology status of the project (Appendix B). A color code was used in each field: green meant that the item was on task, yellow indicated concerns, and red signaled danger. In addition to these fields, the team had an opportunity to specify their confidence in the project. A high score signaled that the project was moving along well and was within budget. The previously filed status reports had all been very positive. Jeff and the other development team members were themselves volunteers at the Information Technology Practice Center (ITPC). The ITPC is a consortium of IT professionals from the local university and industry. The ITPC provided consulting services for not-for-profit agencies and small businesses. Some of the consulting engagements, including the VCR engagement, were performed on a pro bono basis.
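The color-coded status report described in this passage can be modeled as a small data structure. This is a sketch of the idea only — the field names follow the case, but the class itself and the 1-to-10 confidence scale are assumptions, not the ITPC’s actual template:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    GREEN = "on task"
    YELLOW = "concerns"
    RED = "danger"

@dataclass
class ProjectStatusReport:
    """One filing of the project status report (illustrative sketch)."""
    time: Status
    budget: Status
    people: Status
    process: Status
    technology: Status
    confidence: int  # team confidence; a 1-10 scale is assumed here

    def flags(self) -> list:
        """Names of the fields that are not green and so deserve attention."""
        fields = ("time", "budget", "people", "process", "technology")
        return [f for f in fields if getattr(self, f) is not Status.GREEN]

# Example filing: everything green except a personnel concern.
report = ProjectStatusReport(Status.GREEN, Status.GREEN, Status.YELLOW,
                             Status.GREEN, Status.GREEN, confidence=8)
```

A report whose `flags()` list is empty corresponds to the all-green, high-confidence filings the team had been submitting up to this point.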