HANDBOOK ON BUSINESS INFORMATION SYSTEMS

edited by

Angappa Gunasekaran, University of Massachusetts, USA
Maqsood Sandhu, University of Oulu, Finland
World Scientific: New Jersey • London • Singapore • Beijing • Shanghai • Hong Kong • Taipei • Chennai
Published by World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224 USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
HANDBOOK ON BUSINESS INFORMATION SYSTEMS Copyright © 2010 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN-13 978-981-283-605-2 ISBN-10 981-283-605-2
Typeset by Stallion Press. Email: [email protected]
Printed in Singapore.
Preface

The contemporary state of rapid information technology (IT) development has led to the formation of a global village that has swiftly transformed business, trade, and industry. IT has reshaped business management processes in health care delivery and data mining in industry, and has turned business information systems into strategic tools for managing businesses. It has also led to the development of sophisticated supply chain models and the evolution of comprehensive business information systems. Overall, business models have shifted from traditional hierarchical control to more customer-oriented e-business models built on networks of information systems.

This handbook explores the need for business information management in different contexts and sectors of business. The competitive environment dictates intense rivalry among the participants in the market. Similar value propositions and products, coupled with the broad range of choices available to customers in terms of both delivery and product differentiation, have made business information systems and their management one of the key drivers of business success and sustainable competitive advantage.

The Handbook on Business Information Systems is divided into six major parts dealing with different aspects of information management systems in various business scenarios. A total of 37 chapters cover the vast field of business information systems, focusing particularly on developing information systems that capture and integrate information technology together with people and their businesses. A brief introduction to each part is given below.

Part I of the book, Health Care Information Systems, consists of four chapters that focus on providing global leadership for the optimal use of health care IT. It provides knowledge about the best use of information systems for the betterment of health care services. These chapters deal with healthcare supply chain information systems, the role of the CIO in the development of healthcare information systems, information systems for handling patients' complaints, and the development of quality management systems in hospitals.

Part II, Business Process Information Systems, composed of nine chapters, extends previous theory in the area of process development by recognizing that improvements in intra-organizational information systems need to be complemented by corresponding improvements in inter-organizational processes. The chapters cover topics such as the modeling and analysis of business processes, reengineering of business processes, the implications of culture for logistics and information systems, performance measures in information systems, social management systems, knowledge management systems, and risk management in ERP.
With eight chapters, Part III deals with Industrial Data and Management Systems and captures the main challenges faced by industry, such as changes in the operations paradigm of manufacturing and service organizations. These chapters cover significant topics in business information systems, including information systems in sustainability, strategies for enhancing innovation culture, a decision support system for line balancing, innovation in logistics using RFID, the implications of RFID, the implications of information systems for operational efficiency, interactive technology for strategic planning, and key performance indicators in information systems evaluation.

Next, Part IV, Strategic Business Information Systems, with five chapters, discusses the use of information technology in small industries and the analysis of digital business ecosystems. The important topics considered by these chapters are: applications of IT/IS in small companies; the relationships between information systems, business, marketing, and CRM; the transfer of business and information management systems; digital business ecosystem analysis; and the information contents of accounting numbers.

Part V, with five chapters on Information Systems in Supply Chain Management, deals with different challenges and opportunities in the field and discusses supply chain performance along with applications of various technologies. These chapters provide an excellent overview of the implications of supply chain enabling technologies, supply chain management, managing supply chain performance in SMEs, information sharing in service supply chains, and RFID applications in the supply chain.

Finally, Part VI, Evaluation of Business Information Systems, discusses the adoption of systems development methodologies and the security pattern of business systems, along with useful mathematical models. This part has six chapters dealing with tools for decision-making in IT/IS, the application of quantitative models in information management, measurement challenges in IT/IS, object-oriented metacomputing, a B2B architecture using web-services technology, and the role of computer simulation in supply chain management.

An edited book of this nature can provide useful conceptual frameworks, managerial challenges including strategies and tactics, technologies, and practical knowledge and applications to a variety of audiences, including academics, research students, and practitioners interested in the management of business information systems.

The editors are most grateful to the authors of the chapters in this handbook, who have gone through several cycles of revisions, for their continued cooperation in finalizing their chapters. We are thankful for the excellent reviews of the more than 100 reviewers who read chapters and helped to improve their quality. We are deeply indebted to the editorial team of the publisher for their highly constructive comments, which have greatly enhanced the quality of this work. Without their timely support and insightful editorial changes, this handbook would not have been completed. We are especially grateful to Ms. Shalini, the in-house editor, for her prompt responses and coordination throughout the compilation of this manuscript.
Also, we are thankful to the publisher, who has been a great source of inspiration in completing this book project in a timely manner. Finally, our heartfelt thanks go to our families for their patience and support over the last two years.

Angappa Gunasekaran
University of Massachusetts — Dartmouth, USA

Maqsood Sandhu
University of Oulu, Finland and UAE University, UAE
Editor Biographies
Angappa Gunasekaran is a Professor of Operations Management and the Chairperson of the Department of Decision and Information Sciences in the Charlton College of Business at the University of Massachusetts (North Dartmouth, USA). Previously, he held academic positions in Canada, India, Finland, Australia, and Great Britain. He has BE and ME degrees from the University of Madras and a PhD from the Indian Institute of Technology. He teaches and conducts research in operations management and information systems. He serves on the editorial boards of 20 journals and edits several of them. He has published over 200 articles in journals and 60 articles in conference proceedings, and has edited three books. In addition, he has organized several conferences in the emerging areas of operations management and information systems. He has extensive editorial experience, including serving as guest editor of many high-profile journals. He has received outstanding paper and excellence in teaching awards. His current areas of research include supply chain management, enterprise resource planning, e-commerce, and benchmarking. He is also the Director of the Business Innovation Research Center at the University of Massachusetts — Dartmouth.

Dr. Maqsood Sandhu is an Associate Professor at Oulu Business School, University of Oulu, Finland. Currently, he is working at the Department of Management, College of Business and Economics at United Arab Emirates University, Al Ain. He earned a PhD in Management from the Swedish School of Economics and Business Administration. Dr. Sandhu has worked for over five years in project-based industry. He has about 15 international journal articles and book chapters to his name, and has presented over 50 papers and published approximately 40 articles at international conferences. His current research interests are in project management, knowledge management, and entrepreneurship. He is also the Head of Innovation Laboratories at the Emirates Centre for Innovation and Entrepreneurship.
Contents
Preface . . . v
Editor Biographies . . . ix

Part I: Health Care Information Systems . . . 1
Chapter 1. Healthcare Supply Chain Information Systems via Service-Oriented Architecture (Sultan N. Turhan and Özalp Vayvay) . . . 3
Chapter 2. The Role of the CIO in the Development of Interoperable Information Systems in Healthcare Organizations (António Grilo, Luís Velez Lapão, Ricardo Jardim-Goncalves and Virgilio Cruz-Machado) . . . 25
Chapter 3. Information Systems for Handling Patients' Complaints in Health Organizations (Zvi Stern, Elie Mersel and Nahum Gedalia) . . . 47
Chapter 4. How to Develop Quality Management System in a Hospital (Ville Tuomi) . . . 69

Part II: Business Process Information Systems . . . 91
Chapter 5. Modeling and Managing Business Processes (Mohammad El-Mekawy, Khurram Shahzad and Nabeel Ahmed) . . . 93
Chapter 6. Business Process Reengineering and Measuring of Company Operations Efficiency (Nataša Vujica Herzog) . . . 117
Chapter 7. Value Chain Re-Engineering by the Application of Advanced Planning and Scheduling (Yohanes Kristianto, Petri Helo and Ajmal Mian) . . . 147
Chapter 8. Cultural Auditing in the Age of Business: Multicultural Logistics Management, and Information Systems (Alberto G. Canen and Ana Canen) . . . 189
Chapter 9. Efficiency as Criterion for Typification of the Dairy Industry in Minas Gerais State (Luiz Antonio Abrantes, Adriano Provezano Gomes, Marco Aurélio Marques Ferreira, Antônio Carlos Brunozi Júnior and Maisa Pereira Silva) . . . 199
Chapter 10. A Neurocybernetic Theory of Social Management Systems (Masudul Alam Choudhury) . . . 221
Chapter 11. Systematization Approach for Exploring Business Information Systems: Management Dimensions (Albena Antonova) . . . 245
Chapter 12. A Structure for Knowledge Management Systems Assessment and Audit (Joao Pedro Albino, Nicolau Reinhard and Silvina Santana) . . . 269
Chapter 13. Risk Management in Enterprise Resource Planning Systems Introduction (Davide Aloini, Riccardo Dulmin and Valeria Mininno) . . . 297

Part III: Industrial Data and Management Systems . . . 321
Chapter 14. Asset Integrity Management: Operationalizing Sustainability Concerns (R. M. Chandima Ratnayake) . . . 323
Chapter 15. How to Boost Innovation Culture and Innovators? (Andrea Bikfalvi, Jari Jussila, Anu Suominen, Jussi Kantola and Hannu Vanharanta) . . . 359
Chapter 16. A Decision Support System for Assembly and Production Line Balancing (A. S. Simaria, A. R. Xambre, N. A. Filipe and P. M. Vilarinho) . . . 383
Chapter 17. An Innovation Applied to the Simulation of RFID Environments as Used in the Logistics (Marcelo Cunha De Azambuja, Carlos Fernando Jung, Carla Schwengber Ten Caten and Fabiano Passuelo Hessel) . . . 415
Chapter 18. Customers' Acceptance of New Service Technologies: The Case of RFID (Alessandra Vecchi, Louis Brennan and Aristeidis Theotokis) . . . 431
Chapter 19. Operational Efficiency Management Tool Placing Resources in Intangible Assets (Claudelino Martins Dias Junior, Osmar Possamai and Ricardo Gonçalves) . . . 457
Chapter 20. Interactive Technology Maps for Strategic Planning and Research Directions Based on Textual and Citation Analysis of Patents (Elisabetta Sani, Emanuele Ruffaldi and Massimo Bergamasco) . . . 487
Chapter 21. Determining Key Performance Indicators: An Analytical Network Approach (Daniela Carlucci and Giovanni Schiuma) . . . 515

Part IV: Strategic Business Information Systems . . . 537
Chapter 22. The Use of Information Technology in Small Industrial Companies in Latin America — The Case of the Interior of São Paulo, Brazil (Otávio José De Oliveira and Guilherme Fontana) . . . 539
Chapter 23. Technology: Information, Business, Marketing, and CRM Management (Fernando M. Serson) . . . 565
Chapter 24. Transfer of Business and Information Management Systems: Issues and Challenges (R. Nat Natarajan) . . . 585
Chapter 25. Toward Digital Business Ecosystem Analysis (Aurelian Mihai Stanescu, Lucian Miti Ionescu, Vasile Georgescu, Liviu Badea, Mihnea Alexandru Moisescu and Ioan Stefan Sacala) . . . 607
Chapter 26. The Dynamics of the Informational Contents of Accounting Numbers (Akinloye Akindayomi) . . . 639

Part V: Information Systems in Supply Chain Management . . . 653
Chapter 27. Supply Chain Enabling Technologies: Management Challenges and Opportunities (Damien Power) . . . 655
Chapter 28. Supply Chain Management (Avninder Gill and M. Ishaq Bhatti) . . . 675
Chapter 29. Measuring Supply Chain Performance in SMEs (Maria Argyropoulou, Milind Kumar Sharma, Rajat Bhagwat, Themistokles Lazarides, Dimitrios N. Koufopoulos and George Ioannou) . . . 699
Chapter 30. Information Sharing in Service Supply Chain (Sari Uusipaavalniemi, Jari Juga and Maqsood Sandhu) . . . 717
Chapter 31. RFID Applications in the Supply Chain: An Evaluation Framework (Valerio Elia, Maria Grazia Gnoni and Alessandra Rollo) . . . 737

Part VI: Tools for the Evaluation of Business Information Systems . . . 763
Chapter 32. Tools for the Decision-Making Process in the Management Information System of the Organization (Carmen De Pablos Heredero and Mónica De Pablos Heredero) . . . 765
Chapter 33. Preliminaries of Mathematics in Business and Information Management (Mohammed Salem Elmusrati) . . . 791
Chapter 34. Herding Does Not Exist or Just a Measurement Problem? A Meta-Analysis (Nizar Hachicha, Amina Amirat and Abdelfettah Bouri) . . . 817
Chapter 35. Object-Oriented Metacomputing with Exertions (Michael Sobolewski) . . . 853
Chapter 36. A New B2B Architecture Using Ontology and Web Services Technology (Youcef Aklouf) . . . 889
Chapter 37. The Roles of Computer Simulation in Supply Chain Management (Jia Hongyu and Zuo Peng) . . . 911

Index . . . 945
Part I: Health Care Information Systems
Chapter 1
Healthcare Supply Chain Information Systems via Service-Oriented Architecture

SULTAN N. TURHAN
Department of Computer Engineering, Galatasaray University
Çırağan cad. No: 36, 34357 Ortaköy, Istanbul, Türkiye
[email protected]

ÖZALP VAYVAY
Department of Industrial Engineering, Marmara University
Göztepe Campus, 34722 Kadıköy, Istanbul, Türkiye
[email protected]
Healthcare supply chain management differs from other applications in terms of its key elements. Misalignment, high costs for healthcare providers, and heavy dependence on third parties, distributors, and manufacturers are the main sources of trouble in the healthcare supply chain. At the same time, some supply chain components in the health sector occupy a different position from the materials found in other supply chains. In particular, the specific consumables used in surgical operations are significant in terms of both usage and cost. In some cases, doctors may not have a firm opinion on the exact quantity of consumables they will use before an operation starts. On the other hand, it is not always possible to keep all these materials in stock because of their high cost. Moreover, due to inefficiencies in the social security system in Turkey, the social security institutions do not always agree to pay for the materials used. Worse still, this information generally reaches hospital management with a significant delay.

Keywords: Healthcare; supply chain management (SCM); service-oriented architecture (SOA); vendor-managed inventory (VMI).
1. Introduction

Ferihan Laçin Hospital is a medium-scale, 53-bed hospital. Four different groups of materials are used during a working day:

1. Ordinary Supplies: These are materials used in hotel-style operations, such as paper towels, bedclothes, soap, cleansing agents, or disinfectants. This kind of material is not included in the scope of this research.
2. Drugs: This group consists of the typical drugs used in a hospital, including anesthetic drugs.

3. Medical Materials: This group contains medical disposable items, surgical dressings, and medical papers used in medical devices such as electrocardiographs.

4. Special Surgical Materials and Equipment: This group contains special surgical materials such as stents (thin tubes inserted into a tubular structure, e.g., a blood vessel, to hold it open or remove a blockage) and prosthetic and orthopedic products. This group differs from the others. For the other groups, the hospital's employees, such as doctors, practitioners, nurses, and technicians, can decide the quantity of materials they are going to consume while working. The decision on the use of materials in this group may only be made by specialists such as surgeons; however, even they often have no exact idea how many they will consume before the operation. They need to make this decision during the operation. Because of this constraint, the supply chain management of these materials is completely different from the others.

Currently, the hospital has no rules defined either for inventory management or for complete SCM. Only a pharmacist and one staff member work in the hospital's pharmacy, and all purchasing for all materials is done by them. In fact, being understaffed is very common in small- and medium-sized hospitals. The pharmacist's mission is to control and audit all the drugs used throughout the hospital, especially the anesthetic drugs. The pharmacist is also responsible for assuring the availability of the medical materials and for updating supplier information, net procurement costs, batch sizes, and so forth. The pharmacist's performance measure is providing the correct materials at the required time; in healthcare, time is a major constraint, because even a delay of a second may endanger a patient's life. The staff member working with the pharmacist is not qualified personnel with training in pharmaceutics. This person is responsible for managing orders and payments, controlling bills, entering bill information into the hospital's information system, checking the boxes, and collecting proposals from suppliers. Each proposal must contain the medical supplies' unit prices and the conditions of payment. All proposals are examined by the pharmacist and the purchasing director, who decide together on the supplier from which the materials will be purchased.

A simple order process carried out by the staff consists of the following steps:

1. Collect proposals from the different suppliers.
2. Present the candidate suppliers' proposals to the pharmacist and purchasing director so that they can determine the right one.
3. Contact the supplier chosen by the pharmacist and the purchasing director via phone, fax, or e-mail.
4. Place the order.
This system has several weaknesses. First, there are no prescribed parameters to define the quantity of such orders; the pharmacist decides the quantity of the materials intuitively. The process is also time consuming, because the pharmacist must check the materials one by one every day, and the staff must spend hours communicating with suppliers to collect proposals and place orders. The system also has a negative effect on the suppliers' side: they are forced to provide the necessary items in a short period, sometimes in a day, and must always be ready to answer the hospital's demands quickly, which causes intense competition in the market. The last weakness is caused by the payment schedule of the Turkish government. The major part of healthcare costs in Turkey is still covered by the government; the market share of private health insurance companies in this sector is not significant, and today only 1 out of 162 people has private health insurance. However, the Turkish government disburses payments to private hospitals only after three months. Under these circumstances, the hospitals in turn propose to suppliers that payment be made with a delay of three months. Since most medical materials are very expensive, both suppliers and hospitals face difficulties in managing their financial situation. All these problems are worse in the case of special surgical materials and equipment. On the other hand, competition among small- and medium-scale hospitals is growing intensely; therefore, all hospitals have to improve their quality of healthcare services while reducing their operational costs.

The research started with an analysis of the business processes and system requirements of this specific application in a hospital.a

a Ferihan Laçin Hospital, Istanbul, Turkey. www.ferihanlacin.com

Then, a new approach was developed to control the purchase and consumption of special medical and surgical devices, especially during operations. A telemedicine application is implemented between the hospital information system and the government system to provide real-time online observation of surgical operations. All processes have been designed according to service-oriented architecture (SOA), because SOA provides a much more agile environment for process orchestration, for integration across applications, and for collaboration between users. Each process has been defined as a Web Service (WS). With this architecture, another problem arises: the structure of the information exchanged. Allowing cooperation among distributed and heterogeneous applications is a major need for the current system.

In this research, we try to model an efficient pharmaceutical SCM that eliminates the problems cited above. The new system is developed to optimize inventory control, reduce material handling costs, and manage the balance of payments among the government and the suppliers. SCM is a strategy for optimizing the overall supply chain by sharing information among material suppliers, manufacturers, distributors, and retailers (Dona et al., 2001). Our supply chain consists of suppliers, hospitals, and the government. The
key element of SCM is information sharing (Dona et al., 2001). Information sharing improves collaboration across the supply chain to manage material flow efficiently and reduces inventory costs (EHCR Committee, 2001). Therefore, we decided to adopt a vendor-managed inventory (VMI) model to optimize inventory control and then reengineer the processes according to SOA. In addition, we propose a different management style for the usage of special surgical materials and equipment and their SCM.

In Sec. 2, the suggested process remodeling is presented item by item while defining VMI. Section 3 illustrates the technologies used. Finally, the benefits of the developed system are discussed.

2. Process Remodeling

2.1. Constraints

It is very difficult to design an efficient pharmaceutical system that improves the Quality of Service (QoS) given to patients when there are no rules determined by the hospital management. Of course, the hospital's management takes the TQM rules seriously, but the processes have never been examined or documented. Furthermore, the nature of the supply chain is very complex. The first objective of this supply chain is not only to lower procurement costs and improve cash flow, but also to assure the appropriate drug, medical material, or special surgical material at the right time, in the right place. Another important issue is the preservation of the drugs: each drug has an expiry date, and some of them, for example anesthetic drugs, need to be preserved more safely. Innovation and change in the drug sector are frequent, and it is very common to substitute an unavailable drug with an equivalent one.

An information system is therefore needed, but information sharing (IS) adoption in healthcare affects and is affected by human and organizational actors (Vasiliki et al., 2007). Thus, it is not only the information systems that need to be put in place; an effective process solution for how to transfer demand is also needed (Riikka et al., 2002). As there is no efficient and effective inventory control in the hospital, we decided to model our new SCM system according to SOA principles with VMI techniques.

Before implementing our system, the information shared between supplier and hospital, between hospital and government, and between supplier and government was insufficient. This resulted in a high rate of emergency order calls, high stock levels, a bad balance of payments, and, of course, patient dissatisfaction. To solve this problem, we first started to implement a VMI system in the hospital's warehouses. We show that by effectively harnessing the information now available, one can design and operate the supply chain much more efficiently and effectively than ever before.
2.2. What is VMI?

The VMI approach can improve supply chain performance by decreasing inventory-related costs and increasing customer service. Unlike a traditional supply chain, in which each member manages its own inventories and makes individual stocking decisions, VMI is a collaborative initiative in which a downstream customer (a hospital in our case) shifts the ownership of inventories to its immediate upstream supplier and allows the supplier to access its demand information in return. In particular, a VMI process involves the following two steps: (1) a downstream customer provides demand information to its immediate upstream supplier and leaves the stocking decisions to that supplier; and (2) the upstream supplier retains ownership of the inventories until they are shipped to the customer and bears the risk of demand uncertainty. It is not difficult to see that the VMI structure promotes collaboration between suppliers and customers through information sharing and business process reengineering.

VMI is an alternative to traditional order-based replenishment practices. VMI changes the approach to solving the problem of supply chain coordination. Instead of just putting more pressure on suppliers' performance by requiring ever faster and more accurate deliveries, VMI gives the supplier both the responsibility and the authority to manage the entire replenishment process. The customer company (a hospital in our case) provides the supplier access to inventory and demand information and sets the targets for availability. Thereafter, the supplier decides when and how much to deliver. The measure of the supplier's performance is no longer delivery time and precision, but availability and inventory turnover. This is a fundamental change that affects the operational mode of both the customer and the supplier company. Therefore, the advantages to both parties must be evident to make the shift to VMI happen (Lee and Whang, 2000).

We cannot deny the advantages of VMI in our case. Before implementing VMI, it was the pharmacist's mission to manage the inventory, which resulted in inefficiencies. The adoption of VMI starts with contracts between suppliers and the hospital. These contracts are realized not only on paper, but also via WSs (described in the next section). In the contracts, the role of controlling the inventory level of each drug or medical supply is given to an appropriate supplier, with the unit price and payment schedule defined. With this system, the suppliers' experts control the stock level instead of the pharmacist. The new system allows the pharmacists to do their own job, and also creates time for the supplier to plan deliveries. Obviously, the more time the supplier has for planning, the better it is able to serve the hospital and optimize operations.

Another problem faced in hospital inventory management is the lack of a proper classification schema. There are many different drugs and medical supplies; each is produced by a different manufacturer and may be used as a substitute for
a different one. It is very inefficient to manage all these products without a proper classification, because it is not possible to make a contract for each of them. We have already mentioned that stock control will be done by the suppliers' experts. Here, the main question is who will decide the order quantity. We did not leave this decision either to the supplier's expert or to the pharmacist: order quantities are calculated by the information system, based on demand forecasting and safety stock levels (a brief code sketch of this logic is given at the end of Sec. 2.3). The hospital has its own information system to manage the stock level of each product, and our system uses the information it produces to obtain order quantities.

2.3. Information and Document Sharing

IS is a collaborative program in which the downstream firm (referred to as the hospital herein) agrees to provide demand and inventory status in real time to the upstream firm (referred to as a supplier herein) (Lee and Whang, 2000). VMI provides closer collaboration between the supplier and the hospital in our case. That is why the hospital must be able to reengineer its processes through real-time information sharing, enabled by electronic data interchange (EDI). With this system, we propose integrated information sharing between the hospital and its suppliers, and between the hospital and the government. By sharing information about product usage between them, it is much easier to keep the inventory at a proper level. Besides, the system must be able to keep logs of products, insurance codes, and information about new drugs. We designed the system to be accessible in real time and to be integrated via WSs with any service provider, including the government.

With the new architecture, all the processes are remodeled according to SOA principles. While remodeling the processes, we took into consideration: the WS policy defined by the Turkish government; the standards and protocols produced by Health Level Seven, one of several American National Standards Institute (ANSI)-accredited Standards Developing Organizations (SDOs) operating in the healthcare arena, for particular healthcare domains such as pharmacy, medical devices, imaging, or insurance (claims processing) transactions; the codes defined by the Anatomical Therapeutic Chemical Classification System with Defined Daily Doses (ATC/DDD) index published by the WHO; and the National Information Bank (UBB) published by the Ministry of Health of Turkey.

On the other hand, the hospital's traditional method of exchanging and processing orders and order documents via phone or fax results in time inefficiency and a high rate of errors. The process depends totally on staff performance, which is not acceptable in the healthcare case. The system allows the suppliers to receive the order requirements, control the inventory level of the hospital's central warehouse, and exchange documents in XML format via WSs. With this system model, the order processing of supply chain participants can be enhanced significantly.
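To make the replenishment logic described in Sec. 2.2 concrete, the following minimal sketch shows how an order quantity might be derived from a demand forecast and a safety stock level. The class name and the simple reorder-point policy are illustrative assumptions, not the actual algorithm of the hospital's legacy system.

/**
 * Minimal sketch of a VMI-style replenishment check using a simple
 * reorder-point policy. All names and the policy itself are illustrative.
 */
public class ReplenishmentCalculator {

    /** Reorder point = expected demand during lead time + safety stock. */
    public static int reorderPoint(double avgDailyDemand, int leadTimeDays, int safetyStock) {
        return (int) Math.ceil(avgDailyDemand * leadTimeDays) + safetyStock;
    }

    /** Order enough to restore the order-up-to level, or nothing at all. */
    public static int orderQuantity(int quantityOnHand, int reorderPoint, int orderUpToLevel) {
        if (quantityOnHand >= reorderPoint) {
            return 0; // stock is sufficient; no order item is created
        }
        return orderUpToLevel - quantityOnHand;
    }

    public static void main(String[] args) {
        // Example: an item consumed at 12.5 units/day with a 3-day lead time.
        int rop = reorderPoint(12.5, 3, 20);
        int qty = orderQuantity(30, rop, 100);
        System.out.println("Reorder point: " + rop + ", order quantity: " + qty);
    }
}

In a real deployment, the average daily demand would come from the hospital information system's consumption history rather than a constant, but the decision structure stays the same.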
2.4. What is Service-Oriented Architecture (SOA)?

With the growth of real-time computing and communication technologies like the Internet, batch interfaces posed a challenge. When the latest information about a given business entity was not updated in all dependent systems, the result was lost business opportunities, decreased customer satisfaction, and increasing problems. SOA may be seen as the new face of enterprise application integration (EAI). We can also define SOA as a business-driven information technology (IT) architectural approach that helps businesses innovate by ensuring that IT systems can adapt quickly, easily, and economically to support rapidly changing business needs.

SOA is not a technology; it is an architectural approach built around existing technologies. SOA advocates a set of practices, disciplines, designs, and guidelines that can be applied using one or more technologies, and, being an architectural concept, it is flexible enough to lend itself to multiple definitions. SOA offers a unique perspective into business that was previously unavailable: a real-time view of what is happening in terms of transactions, usage, and so forth. In anticipation of the discovery of new business opportunities or threats, the SOA architectural style aims to provide enterprise business solutions that can extend or change on demand. SOA solutions are composed of reusable services with well-defined, published, and standards-compliant interfaces. SOA provides a mechanism for integrating existing legacy applications regardless of their platform or language.

The key element of SOA is the service. A service can be described as "a component capable of performing a task" (David and Lawrence, 2004). Although a service can be seen as a task or an activity, it is more complicated than these concepts, because every service has a contract, an interface, and an implementation routine. Josuttis (2007) states that a service has the following attributes:

• Self-contained: Self-contained means independent and autonomous. Although there can be exceptions, a service should be self-contained, and for services to be self-contained, their inter-dependencies should be kept to a minimum.
• Coarse-grained: This indicates the implementation detail level of services for consumers. Implementation details are hidden from a service consumer, because the consumer does not care about such details.
• Visible/Discoverable: A service should be visible and easily reachable. This is also important for reusability, which means that a service can be used multiple times in multiple systems.
• Stateless: Services should ideally, but not always, be stateless. This means that one service request does not affect another, because service calls do not hold invocation parameters and execution attributes in a stateless service.
• Idempotent: Idempotent means the ability to redo or roll back. In some cases, while a service is executing, a bad response may be returned to the service consumer. In such a case, service consumers can roll back or redo the service execution.
• Composable: A composable service can contain several sub-services, which can be separated from the main service. A composable service can call another composable service.
• QoS and Service Level Agreement (SLA)-Capable: A service should provide certain non-functional requirements such as runtime performance, reliability, availability, and security. These requirements represent QoS and SLA.
• Pre- and Post-conditions: Pre- and post-conditions specify the constraints and benefits of the service execution. The pre-condition represents the state before the service execution; the post-condition represents the state after it.
• Vendor Diverse: SOA is neither a technology nor a product. It is also platform (or vendor) independent, which means that it can be implemented with different products. When calling a service, one does not need to be familiar with the technology used for the service.
• Interoperable: Services should be highly interoperable: they can be called from any other system, regardless of that system's environment. Interoperability provides the ability of different systems and organizations to work together.

The second important issue is to define explicitly the two key roles in an SOA: the service provider and the service consumer. The service provider publishes a service description and provides the implementation for the service, whereas the service consumer can either use the uniform resource identifier of the service description directly, or find the service description in a service registry and then bind to and invoke the service. Figure 1 illustrates the relationship between a service provider and a service consumer. As mentioned above, a service is a software resource with an externalized service description, which is available for searching, binding, and invocation by a service consumer. The service provider realizes the service description implementation and also delivers the QoS requirements to the service consumer. Services should ideally be governed by declarative policies and thus support a dynamically re-configurable architectural style.

Services can be used across internal business units or across the value chains among business partners in a fractal realization pattern. Fractal realization refers to the ability of an architectural style to apply its patterns, and the roles associated with the participants in its interaction model, in a composite manner: it can be applied to one tier in an architecture and to multiple tiers across the enterprise architecture. That is why defining the services according to SOA concepts must be the most crucial step when modeling a system.
Figure 1. SP&SC relationship (© IBM).
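To make the notions of service contract, provider, and consumer more tangible, the fragment below sketches a coarse-grained, stateless service contract in plain Java. The interface and its single operation are hypothetical, invented to mirror the hospital scenario; in a deployed SOA, the contract would be published as a WSDL service description rather than a Java interface.

import java.util.List;

/**
 * Illustrative service contract: coarse-grained (one call covers a whole
 * business task), stateless (no conversational state between calls), and
 * self-contained (no dependency on the consumer's internals).
 */
public interface InventoryQueryService {

    /** Returns current stock levels for the given item codes in one round trip. */
    List<StockLevel> getStockLevels(List<String> itemCodes);

    /** Simple, serializable data contract exchanged between provider and consumer. */
    final class StockLevel {
        public final String itemCode;
        public final int quantityOnHand;

        public StockLevel(String itemCode, int quantityOnHand) {
            this.itemCode = itemCode;
            this.quantityOnHand = quantityOnHand;
        }
    }
}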
Conceptually, there are three major levels of abstraction within SOA:

• Operations: Transactions that represent single logical units of work (LUWs). Execution of an operation will typically cause one or more persistent data records to be read, written, or modified. SOA operations are directly comparable to object-oriented (OO) methods: they have a specific, structured interface and return structured responses, and, just as for methods, the execution of a specific operation might involve the invocation of additional operations.
• Services: Logical groupings of operations.
• Business Processes: A long-running set of actions or activities performed with specific business goals in mind. Business processes typically encompass multiple service invocations.

According to Ali Arsanjani, PhD, Chief Architect, SOA and WSs Center of Excellence, IBM, the process of service-oriented modeling and architecture consists of three general steps: identification, specification, and realization of services, components, and flows (typically, choreography of services).

• Service identification: This process consists of a combination of top-down, bottom-up, and middle-out techniques of domain decomposition, existing asset analysis, and goal-service modeling. In the top-down view, a blueprint of business use cases provides the specification for business services. This top-down process is often referred to as domain decomposition, which consists of the decomposition of the business domain into its functional areas and subsystems, including its flow or process decomposition into processes, subprocesses, and high-level business use cases. These use cases are often very good candidates for business services exposed at the edge of the enterprise, or for those used within the boundaries of the enterprise across lines of business.
In the bottom-up portion of the process, or existing system analysis, existing systems are analyzed and selected as viable candidates for providing lower-cost solutions for the implementation of underlying service functionality that supports the business process. In this process, you analyze and leverage APIs, transactions, and modules from legacy and packaged applications. In some cases, componentization of the legacy systems is needed to re-modularize the existing assets for supporting service functionality.

The middle-out view consists of goal-service modeling to validate and unearth other services not captured by either the top-down or the bottom-up service identification approaches. It ties services to goals and subgoals, key performance indicators, and metrics.

• Service classification or categorization: This activity starts when services have been identified. It is important to classify services into a service hierarchy, reflecting the composite or fractal nature of services: services can and should be composed of finer-grained components and services. Classification helps determine composition and layering, and coordinates the building of interdependent services based on the hierarchy. It also helps alleviate the service proliferation syndrome, in which an increasing number of small-grained services get defined, designed, and deployed with very little governance, resulting in major performance, scalability, and management issues. More importantly, service proliferation fails to provide services that are useful to the business and that allow economies of scale to be achieved.
• Subsystem analysis: This activity takes the subsystems found during domain decomposition and specifies the interdependencies and flow between them. It also puts the use cases identified during domain decomposition as exposed services on the subsystem interface. The analysis of the subsystem consists of creating object models to represent the internal workings and designs of the containing subsystems that will expose the services and realize them. The design construct of "subsystem" will then be realized as an implementation construct of a large-grained component realizing the services in the following activity.
• Component specification: In the next major activity, the details of the components that implement the services are specified:
  • Data
  • Rules
  • Services
  • Configurable profile
  • Variations
Messaging and events specifications and management definition occur at this step.

• Service allocation: Service allocation consists of assigning services to the subsystems that have been identified so far. These subsystems have enterprise components that realize their published functionality. Often you make the simplifying
assumption that the subsystem has a one-to-one correspondence with the enterprise components. Structuring of components occurs when you use patterns to construct enterprise components with a combination of:
  • Mediators
  • Façade
  • Rule objects
  • Configurable profiles
  • Factories
Service allocation also consists of assigning the services, and the components that realize them, to the layers in the SOA. Allocation of components and services to layers is a key task that requires the documentation and resolution of key architectural decisions relating not only to the application architecture but also to the technical operational architecture designed and used to support the SOA realization at runtime.

• Service realization: This step recognizes that the software that realizes a given service must be selected or custom-built. Other available options include integration, transformation, subscription, and outsourcing of parts of the functionality using WSs. In this step, it is decided which legacy system module will be used to realize a given service and which services will be built from the ground up. Realization decisions for services beyond business functionality include security, management, and monitoring of services.

In reality, projects tend to capitalize on any amount of parallel effort to meet closing windows of opportunity. Top-down domain decomposition (process modeling and decomposition, variation-oriented analysis, policy and business rules analysis, and domain-specific behavior modeling using grammars and diagrams) is conducted in parallel with a bottom-up analysis of existing legacy assets that are candidates for componentization (modularization) and service exposure. To capture the business intent behind the project and to align services with this intent, goal-service modeling is conducted.

In SOA terms, a business process consists of a series of operations that are executed in an ordered sequence according to a set of business rules. The sequencing, selection, and execution of operations are termed service or process choreography. Typically, choreographed services are invoked in response to business events. Therefore, we have to model our business processes according to service concepts.

Service-oriented analysis and design differ from other kinds of analysis and modeling. Service-oriented modeling requires additional activities and artifacts that are not found in traditional OO analysis and design. Experience from early SOA implementation projects suggests that existing development processes and notations such as Object-Oriented Analysis and Design (OOAD), Enterprise Architecture (EA), and business process management (BPM) only cover part of the requirements needed to
support the SOA paradigm. While the SOA approach reinforces well-established, general software architecture principles such as information hiding, modularization, and separation of concerns, it also adds additional themes such as service choreography, service repositories, and the service bus middleware pattern, which require explicit attention during modeling (Olaf et al., 2004).

There is one more important point to mention here. When one starts an SOA project, the first thing that comes to mind is to define WSs. Yet the SOA research road map defines several roles. The service requester (or client) and the provider must both agree on the service description (the Web Service Definition Language, WSDL, definition) and on the semantics that will govern the interaction between them, for WSs to interact properly in composite applications. A complete solution must address semantics not only at the terminology level, but also at the levels at which WSs are used and applied in the context of business scenarios: the business process and protocol levels. Thus, a client and provider must agree on the implied processing, context, and sequencing of messages exchanged between interacting services that are part of a business process. In addition to the classical roles of service client and provider, the road map also defines the roles of service aggregator and operator.

Service modeling and service-oriented engineering — service-oriented analysis, design, and development techniques and methodologies — are crucial elements for creating meaningful services and business process specifications. These are an important requirement for SOA applications that leverage WSs and apply equally well to all three service plans. SOA should abstract away logic at the application or business level, such as order processing, from non-business-related aspects at the system level, such as the implementation of transactions, security, and reliability policies. This abstraction should enable the composition of distributed business processes and transactions. The software industry now widely implements a thin Simple Object Access Protocol (SOAP)/WSDL/Universal Description, Discovery and Integration (UDDI) veneer atop existing applications or components that implement the WSs, but this is insufficient for commercial-strength enterprise applications. Unless the component's nature makes it suitable for use as a WS (and most are not), properly delivering a component's functionality through a WS takes serious redesign effort (Papazoglou et al., 2007).

On the other hand, our job would not be complete by only defining these services according to SOA. While migrating to SOA, some other points should be taken into consideration:

• Adoption and Maturity Models: Every level of adoption has its own unique needs; therefore, the enterprise's maturity level in the adoption of SOA and WSs must be determined at the beginning.
• Assessments: During the migration, controls and assessments must be done after each step.
• Strategy and Planning Activities: The steps, tools, methods, technologies, standards, and training that must be taken into account must be declared at the beginning; therefore, a roadmap must be presented.
• Governance: SOA has the ability to expose a legacy application's API as a service. Every API must be examined to decide whether it is eligible. Every service should be created with the intent to bring value to the business in some way.

3. Case Study

Our main goal in this study is to combine two new working areas to transform the supply chain into a single, integrated model that improves patient care and customer service while decreasing procurement costs. The first is to reengineer the business processes with SOA; the second is to carry out the SOA modeling itself. As stated in the paper of Zimmerman et al. (2004), SOA modeling is a very new area, and there are no strictly defined rules on this subject. Therefore, we began by modeling the VMI approach described above according to SOA.

All the departments request the necessary items, and the items are delivered from the hospital's warehouse to the requesting departments. As required by the definition of VMI, the supplier needs to manage the hospital's overall inventory control and order processing systems and then make the order delivery schedule according to the contract signed by the supplier and the hospital. In a traditional VMI system, the supplier takes both the responsibility and the authority to manage the entire replenishment process; the customer company provides the supplier access to the inventory and demand information and sets the targets for availability (Riikka et al., 2002). Here, instead of allowing the supplier to intervene directly in the legacy system used by the hospital, we believe it is more appropriate to produce the information needed by the supplier through a service architecture and orchestration on top of that system.

Although one may think that such business process modeling could be realized with other modeling approaches like OOD or EA, Service-Oriented Architecture Design (SOAD) is more efficient in defining the human-based tasks that we eventually need in this modeling. SOAD must be predominantly process-driven rather than use-case driven: the method is no longer use-case oriented but driven by business events and processes, with use case modeling coming in as a second step on a lower level. In the SOA paradigm, business process choreography, maintained externally to the services, determines the sequence and timing of execution of the service invocations. SOAD provides an excellent solution to these issues. As it groups services on the basis of related behavior, rather than encapsulation (behavior plus data), the set of services will be subtly different from a business object model.

The order is created when the stock amount falls below the stock-keeping unit (SKU) level calculated by the hospital's legacy system. For each pharmaceutical, a separate
order item is created, containing details of order quantities and the rules defined in the contract. As the supplier manages the hospital's stock, it is ready to provide the necessary amount of each pharmaceutical. The main problem is the delivery lead time: a suitable shipping method needs to be scheduled for each pharmaceutical. Each dispatch may contain one or several pharmaceuticals, and it must be determined which pharmaceutical is urgent and which may be dispatched with the others. When the items arrive at the hospital, the pharmacist and the staff member must verify the boxes and approve the task waiting in the system to declare that the order is correct. If it is not correct, they must specify the details, and a new job is started to handle the mistakes. When they approve that the order is correct, the supplier's legacy system produces the invoice and sends it via WSs to the hospital to receive the payment.

The second part of the supply chain is the receipt of payment for the consumed products, from either the insurance companies or the government. The invoices for the products that have been used for the benefit of patients are sent to the Ministry of Health, and payment is made against these invoices. The payment part includes the payments made to the suppliers by the hospitals, and to the hospitals by either the government or the insurance companies, upon the control and approval of the invoices. These transactions are again structured on WSs and the orchestration among them.

The main point here, as mentioned before, is to provide the supply of the expensive products that have been selected for use during an operation while their usage quantity and payment conditions are still unknown. Once the date and venue of the operation and the type and estimated quantity of the products to be used are decided, the system can send the necessary information to the information systems of both the suppliers and the Ministry of Health or the related insurance company. The telemedicine support that we explained earlier steps in here.

To enable the doctors to communicate easily and to access the system from anywhere, independently of their daily computers, a "telemedicine" module has been designed using Adobe Flash technology, which is very common nowadays. With the help of this module, the end users may communicate with each other either interactively or by one-sided video conference. To make all the correspondence re-watchable, the ".flv" file format was selected. These files are stored under folders named by the system variables that identify their owners, and all the data related to these files are stored in a separate database; this method was selected to facilitate management and to diminish the load on the database. The Red5 Open Source Flash Server, which uses the Real Time Messaging Protocol (RTMP) to provide simultaneous exchange of information, was selected as the server; the Flash Media Server can also be used as an alternative. In this way, the operation may be watched both in real time and later, at any desired time. This also provides
the opportunity of making a decision on the necessity of the products used during the operation not only from the epicrisis, but also from watching the actual operation. This is certainly an important step toward deciding impartially, without any external influence. Processes were modeled according to SOA, and we obtained the services listed below:

1. Supplier service
   a. Look up the supplier by contract
   b. Create the new supplier
   c. Get the supplier information
2. Inventory service (legacy system)
   a. Determine the quantity on hand of an item
   b. Compare with the SKU level
   c. Determine the order quantity
   d. Determine the expected arrival date
   e. Inventory management
      i. Physical review
      ii. Closing
3. Order service
   a. Create the order
   b. Schedule the order date and time
   c. Get the offerings
   d. Deliver the order
   e. Receive the delivery
4. Scheduling service
   a. Take the delivery schedule
   b. Schedule the delivery date and time
5. Payment service
   a. Hospital–Supplier
   b. Government/IC–Hospital
6. Telemedicine service
   a. Approval of medical supply usage and its quantity
   b. Rejection of medical supply usage and its quantity
7. Utilization service
   a. Create the new utilization by departments
   b. Track the status of a delivery request
   c. Decrease the stock quantity
   d. Increase the usage amount
   e. Mark up the patient's file
Services are modeled in Fig. 2:
Figure 2. Services.
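To make the service catalog concrete, the sketch below renders the inventory service as a JAX-WS service endpoint interface in Java. This is our own illustrative reading of the catalog, not the chapter's actual code; all type and method names are hypothetical.

```java
// Hypothetical Java rendering of service 2 (Inventory service) above.
// The names are ours; the real system wraps the hospital's legacy system.
import java.util.Date;
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService(name = "InventoryService")
public interface InventoryService {

    // 2a. Determine the quantity on hand of an item.
    @WebMethod
    int quantityOnHand(String pharmaceuticalId);

    // 2b. Compare the current stock with the SKU reorder level computed
    //     by the legacy system; true means a replenishment order is due.
    @WebMethod
    boolean belowSkuLevel(String pharmaceuticalId);

    // 2c. Determine the order quantity for the replenishment order.
    @WebMethod
    int orderQuantity(String pharmaceuticalId);

    // 2d. Expected arrival date of the next scheduled delivery.
    @WebMethod
    Date expectedArrivalDate(String pharmaceuticalId);
}
```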
4. System Implementation

While modeling and implementing the system, we used IBM DB2 Express-C, IBM WebSphere Application Server 6.1, IBM WebSphere Business Modeler 6.1, IBM Rational Software Architect 6.1, IBM WebSphere Integration Developer 6.1, IBM WebSphere Process Server 6.1, and IBM WebSphere Monitor 6.1. We used the WebSphere 6.1 Feature Pack for WSs, installed on our application server, WebSphere Application Server 6.1. We selected this software because it allows us to communicate with other vendors in a more reliable, asynchronous, secure, and interoperable way. It also supports the Java API for XML Web Services (JAX-WS) 2.0 programming model and SOAP 1.2, which removes most of the ambiguities that existed in the previous versions of SOAP. As is well known, JAX-WS 2.0 is a Java programming model for creating WSs. Its most important feature is that it provides an asynchronous client model, which makes it easier to develop and deploy WSs. Another important feature of JAX-WS is its support for WS-I Basic Profile 1.1: WSs developed using JAX-WS can be consumed by any client written in any programming language that supports this basic profile.
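As a minimal sketch of the JAX-WS 2.0 programming model described above, with a hypothetical service class and address rather than the project's actual code, a class is annotated as a WS and published; the container (or, below, the built-in lightweight server) then exposes its WSDL:

```java
// Minimal JAX-WS 2.0 endpoint; class, operation, and address are illustrative.
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class OrderService {

    // Create a replenishment order and return an order id.
    public String createOrder(String pharmaceuticalId, int quantity) {
        // Persistence and delivery scheduling would happen here.
        return "ORD-" + pharmaceuticalId + "-" + quantity;
    }

    public static void main(String[] args) {
        // The WSDL contract becomes available at http://localhost:8080/orders?wsdl
        Endpoint.publish("http://localhost:8080/orders", new OrderService());
    }
}
```

On the consumer side, the asynchronous client model mentioned above is available through invokeAsync on a Dispatch or on generated port stubs, so a caller need not block while the partner system processes the request.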
Java Architecture for XML Binding (JAXB) enables data mapping between XML Schema and Java. The XML payload is contained in the SOAP message, and JAXB defines this binding for us, so we never need to parse SOAP and XML messages by hand (a sketch follows Fig. 3). In addition, SOAP with Attachments API for Java (SAAJ) handles XML attachments in SOAP messages. Figure 3 illustrates which product is used in which step. Before coding the WSs, we first modeled the BPM with WebSphere Business Modeler 6.1; the process flow is shown in Appendix A. Then we defined the services one by one, as illustrated in Fig. 2. For service orchestration, SOA needs middleware, which is generally an Enterprise Service Bus (ESB). In our project, we preferred the Web Services Business Process Execution Language (WS-BPEL) as the service orchestration tool. WS-BPEL is an orchestration language: an orchestration describes an executable process that communicates dynamically with other systems, with the control of the process handled by an orchestration engine such as WebSphere Process Server. This language combines two notions: 1. BPM and 2. Web Services (WSs).
Figure 3. Software used.
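The following is a hedged sketch of the JAXB binding discussed before Fig. 3; the class and element names are ours, not the project's schema. The object is mapped to XML without any hand-written SOAP or XML parsing:

```java
// Illustrative JAXB binding: OrderItem and its fields are hypothetical names.
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "orderItem")
public class OrderItem {
    public String pharmaceuticalId;
    public int quantity;

    public static void main(String[] args) throws Exception {
        OrderItem item = new OrderItem();
        item.pharmaceuticalId = "PHARM-42";
        item.quantity = 100;

        // JAXB turns the object graph into the XML payload of a SOAP message;
        // SAAJ would come into play if binary attachments had to travel with it.
        JAXBContext ctx = JAXBContext.newInstance(OrderItem.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(item, System.out); // prints <orderItem>...</orderItem>
    }
}
```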
The most crucial step for enterprises in today's business world is to manage and improve their business processes while keeping workflow components loosely coupled and interoperable. This is where Web Services come into play. Web Services are self-contained, modular business building blocks that achieve interoperability between applications; they use Web standards such as WSDL, UDDI, and SOAP. In real-life scenarios, we need to join different WSs and use them as a single entity. This is why we need a language that encapsulates all the WSs needed and exposes the business process as a single WS: WS-BPEL. In the WS-BPEL approach, the business process is defined as follows: "A business process specifies the potential execution order of operations from a collection of WSs, the data shared between these WSs, which partners are involved and how they are involved in the business process, joint exception handling for collections of WSs, and other issues involving how multiple services and organizations participate." (Weerawarana and Curbera, 2002). WS-BPEL extends the WSs interaction model and enables it to support business transactions. WS-BPEL uses WSDL to specify the interface between the business process and the outside world, by describing the actions in a business process and the WSs provided by it. The business process itself and its partners (the services with which it interacts) are modeled as WSDL services; WS-BPEL thus composes existing services. The business process is described as a collection of WSDL portTypes, just like any other WS, as illustrated in Fig. 4. WS-BPEL is the top layer; it builds on WSDL, which in turn uses SOAP, a protocol for exchanging structured information, while UDDI, a registry in which WSs are published, also sits on SOAP. We preferred to model the orchestration of WSs with WS-BPEL because it can handle complex cases: for example, it supports loops and scopes in business process logic, which an ESB does not, and it supports the long-running business processes we need, in which the state of the process must be maintained, which an ESB also does not.
Figure 4. WS defined by WS-BPEL.
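Because the orchestrated process is itself exposed as a WS (Fig. 4), a partner can invoke it like any other service. The sketch below shows a non-blocking call to such a long-running process using the JAX-WS Dispatch API; the endpoint address, QNames, and payload are hypothetical, not taken from the project.

```java
// Hedged sketch: asynchronous call to a long-running orchestrated process.
import java.io.StringReader;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.AsyncHandler;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class ProcessClientSketch {
    public static void main(String[] args) throws Exception {
        QName svc  = new QName("http://example.org/replenish", "ReplenishmentProcess");
        QName port = new QName("http://example.org/replenish", "ReplenishmentPort");

        Service service = Service.create(svc);
        service.addPort(port, SOAPBinding.SOAP12HTTP_BINDING,
                        "http://localhost:8080/replenishment");

        Dispatch<Source> dispatch =
                service.createDispatch(port, Source.class, Service.Mode.PAYLOAD);

        String payload =
                "<createOrder xmlns='http://example.org/replenish'>"
              + "<pharmaceuticalId>PHARM-42</pharmaceuticalId></createOrder>";

        // Fire the request and keep working; the handler runs whenever the
        // long-running process eventually replies.
        dispatch.invokeAsync(new StreamSource(new StringReader(payload)),
                (AsyncHandler<Source>) res -> System.out.println("process replied"));

        Thread.sleep(5000); // keep the JVM alive long enough for the demo callback
    }
}
```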
With WS-BPEL, we are able to orchestrate a business process, using WebSphere Process Server (WPS) as the business process choreographer. One of the major features of WPS is its support for human tasks, which enable human activities to be invoked as services; human activities play an important role in the healthcare industry. Our requirements are process-centric, so WS-BPEL is the better choice for this project. In our scenario, each pharmaceutical is provided by a specific medical supplier in a specific quantity, and all this information is kept in database tables. The SUPPLIER table has a field named "Endpoint Address" that holds the medical supplier's end-point address. When the stock of a pharmaceutical is insufficient, a trigger fires and a WS request is sent to the WS provider, the medical supplier; here we act as a consumer. There are also WSs that we offer to our partner medical suppliers: they can inspect our stock to learn the quantities of pharmaceuticals we hold and keep their own stock ready accordingly; here we act as a provider. In the big picture, we are both a consumer and a provider; this is where SOA begins.

5. Conclusion

On analysis, the system we set out to establish has many benefits. First, thanks to the VMI application, the workloads of the pharmacists and the other employees are decreased. In Turkey, hospitals with fewer than 100 beds are not obliged to employ a pharmacist. Taking this into account, this system, by unifying the different telemedicine applications, may also help one pharmacist support more than one hospital; this will be the subject of another study. Moreover, thanks to this application, the pharmacist is no longer expected to be knowledgeable about stock management or cost reduction; on the contrary, these operations are carried out by the real experts. There are many benefits in stock management as well. Drugs are difficult materials to store and preserve: each drug has an expiry date, there is an enormous number of different drugs, and many drugs may be used interchangeably. Besides, as mentioned before, in the healthcare sector a stock-out can have far more serious consequences, since human life is concerned. Therefore, effective stock management structured in this way, together with efficient material handling, will certainly improve the quality of the service offered to the patients in the hospital. Second, information integration and a successful supply chain will eventually result in strong integration among the partners of the system, i.e., the government, hospitals, and wholesalers. Thanks to the services produced, all the required information may be used by the other institutions within the limits of the permission given by the service provider institution. The only constraint here is that each provider and consumer must work with the same semantics to understand the WSs. Furthermore, the processes liable to human error, such as lost papers among the files, products forgotten to be ordered, and numerous phone calls, will be removed entirely. When the specific importance of the sector studied is taken
into account, it is extremely important to minimize the deadlocks arising from human mistakes. The system established also brings serious gains both in placing orders and in stock management. We still have no feedback on the results of the government/insurance company–hospital integration recommended especially for consumables, as this part of the system had not yet been implemented when this chapter was written; however, it is clearly a promising direction. In this research, a pharmaceutical SCM system, covering special-purpose products as well, has been modeled and developed to optimize the supply chain. Although the whole supply chain is composed of raw material suppliers, pharmaceutical companies, wholesalers, hospitals, and patients, we focused especially on implementing a new, non-traditional VMI model in the hospital warehouse by sharing electronic data via WSs between hospitals and wholesalers. A great deal of effort is still required to improve the efficiency of the total supply chain with regard to manufacturers, the government, and insurance companies. Hospitals must also be willing to adopt this system, have total trust in their wholesalers, and share their inventory information with them. To extend the benefits presented in this chapter, standards for exchanging information electronically must be established and adopted. This is where the semantics of WSs begins and occupies a large place in the exchange of data.

References

Arbietmann, D, E Lirov, R Lirov and Y Lirov (2001). E-commerce for healthcare supply procurement. Journal of Healthcare Information Management, 15(1), 61–72.
Arsanjani, A (2004). Service-oriented modeling and architecture: How to identify, specify, and realize services for your SOA. IBM Whitepaper, November 2004. http://www.ibm.com/developerworks/library/ws-soa-design1
Cingil, I and A Dogac (2001). An architecture for supply chain integration and automation on the internet. Distributed and Parallel Databases, 10, 59–102.
EHCR Committee (2001). Improving Supply Chain Management for Better Healthcare. http://www.eccc.org/ehcr/ehcr/
Erl, T (2005). Service-Oriented Architecture: A Field Guide to Integrating XML and Web Services. Prentice Hall, NJ, USA.
Josuttis, NM (2007). SOA in Practice: The Art of Distributed System Design. e-book, O'Reilly Media Inc., USA.
Jung, S, T-W Chang, E Sim and J Park (2005). Vendor managed inventory and its effect in the supply chain. AsiaSim 2004, LNAI 3398, 545–552.
Kaipia, R, J Holmström and K Tanskanen (2002). VMI: What are you losing if you let your customer place orders? Journal of Production Planning & Control, 13(1), 17–25.
Lee, HL and S Whang (2000). Information sharing in a supply chain. International Journal of Manufacturing Technology and Management, 1(1), 79–93.
Leymann, F, D Roller and M-T Schmidt (2002). Web services and business process management. IBM Systems Journal, 41(2), 198–211.
Mantzana, V, M Themistocleous, Z Irani and V Morabito (2007). Identifying healthcare actors involved in the adoption of information systems. European Journal of Information Systems, 16, 91–102.
Omar, WM and A Taleb-Bendiab (2006). Service-oriented architecture and computing. IT Professional, 35–41.
Papazoglou, MP and B Kratz (2007). Web services technology in support of business transactions. Service Oriented Computing and Applications, 1(1), 51–63.
Papazoglou, MP, P Traverso, S Dustdar and F Leymann (2007). Service-oriented computing: State of the art and research challenges. Computer, IEEE Computer Society, June, 38–45.
Polatoglu, VN (2006). Nazar foods company: Business process redesign under supply chain management context. Journal of Cases on Information Technology, 2–14.
Siau, K (2003). Health care informatics. IEEE Transactions on Information Technology in Biomedicine, 7(1), 1–7.
Simchi-Levi, D, P Kaminsky and E Simchi-Levi (2008). Designing and Managing the Supply Chain: Concepts, Strategies, and Case Studies, 3rd Edn. McGraw-Hill Irwin.
Sprott, D and L Wilkes (2004). Understanding service-oriented architecture. CBDI Forum, The Architectural Journal, 1, 2.
Weerawarana, S and F Curbera (2002). Business process with BPEL4WS: Understanding BPEL4WS, Part 1, concepts in business processes. IBM Whitepaper, August 2002. http://www.ibm.com/developerworks/webservices/library/ws-bpelcol1
Yao, Y and M Dresner (2008). The inventory value of information sharing, continuous replenishment, and vendor-managed inventory. Transportation Research Part E, 44, 361–378.
Zimmerman, O, P Krogdahl and C Ghee (2004). Elements of service-oriented analysis and design: An interdisciplinary modeling approach for SOA projects. IBM Whitepaper, June 2004. http://www.ibm.com/developerworks/webservices/library/ws-soad1
Biographical Notes

Sultan N. Turhan received her ASc degree in computer programming in 1992 from Boğaziçi University, her BA degree in business administration in 1998 from Marmara University, and her MSc degree in computational science and engineering in 2003 from Istanbul Technical University. She is currently a PhD student in Engineering Management at Marmara University, and her research subject is "industrial applications of data warehouses." Between 1992 and 1998, she worked as a database administrator, IT project coordinator, and IT manager in different institutions. Since 1998, she has been a senior lecturer in the Computer Engineering Department of Galatasaray University. Since 2002, she has also been working for Intelitek — Element A.S as an academic consultant for distance learning and e-learning platforms. Her professional interests are synchronized distance learning, e-learning, databases, data warehouses, data mining, and knowledge management.

Özalp Vayvay, PhD, works in the Industrial Engineering Department at Marmara University, where he is currently the Chairman of the Engineering Management Department. His current research interests include new product design, technology management, business process reengineering, total quality management, operations management, and supply chain management. Dr. Vayvay has been involved in R&D projects and education programs over the past 10 years.
Appendix A. Workflow diagram
Chapter 2
The Role of the CIO in the Development of Interoperable Information Systems in Healthcare Organizations

ANTÓNIO GRILO∗,$, LUÍS VELEZ LAPÃO†, RICARDO JARDIM-GONÇALVES‡ and VIRGÍLIO CRUZ-MACHADO∗,¶

∗UNIDEMI, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, Portugal
†INOV — INESC Inovação, Lisboa and CINTESIS, Faculdade de Medicina da Universidade do Porto, Porto, Portugal
‡UNINOVA, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, Portugal
$[email protected]; †[email protected]; ‡[email protected]; ¶[email protected]
A major challenge for business information systems (BIS) design and management within healthcare units is the need to accommodate the complexity and changeability of their clinical protocols, technology, and business and administrative processes. Interoperability is the response to these demands, but there are many ways to achieve an "interoperable" information system. In this chapter we address the requirements of a healthcare interoperability framework to enhance the enterprise architecture interoperability of healthcare organizations, while maintaining the organization's technical and operational environments and installed technology. The healthcare interoperability framework is grounded on the combination of model-driven architecture (MDA) and service-oriented architecture (SOA) technologies. The main argument of this chapter is to advocate the role of the chief information officer (CIO) in dealing with the challenges posed by the need to achieve BIS grounded on interoperability at the different layers, and in developing and executing an interoperability strategy, which must be aligned with the health organization's business, administrative, and clinical processes. A case study of the Hospital de São Sebastião is described, demonstrating the critical role of the CIO in the development of an interoperable platform.

Keywords: Interoperability; healthcare; chief information officer; complexity; model driven architecture (MDA); service-oriented architecture (SOA); healthcare interoperability framework.
1. Overview

The healthcare sector is characterized by complexity and rapid developments in terms of clinical protocols, technology, and business and administrative processes. This poses a major challenge for BIS design and management within healthcare units, as the ICT infrastructure needs to accommodate the changes while simultaneously responding to the pressure to have integrated information flows. Interoperability is the response to these demands, but there are many ways to achieve an "interoperable" information system. In Sec. 2 of this chapter we describe the existing general approaches for developing information systems that are able to cope with interoperability requirements, and review the main concepts of the model-driven architecture (MDA) and service-oriented architecture (SOA) approaches. In Sec. 3, we point out the importance of information systems (IS) in the healthcare sector, the IS functions in healthcare units, and the case for interoperability, and finally present a generic healthcare interoperability framework that combines MDA and SOA. The main argument of this chapter is laid out in Sec. 4, where we advocate the critical role of the CIO in developing and implementing adequate but complex information systems strategies, supported by a flexible and robust healthcare interoperability framework that leads to integrated/interoperable platforms. The argument advanced is grounded on the increasing business, cross-functional, and leadership skills required to convince decision makers to produce the investment (resources, time, "patience") needed for the deployment of interoperable ICT infrastructures. Section 5 describes the case study of Hospital São Sebastião (HSS), an innovative, vanguard Portuguese hospital that has implemented an interoperable platform for internal purposes, as its information systems strategy was grounded on having "best-of-breed" applications for each functional business, administrative, and clinical area. The case illustrates how the CIO of the HSS developed and implemented the interoperable platform grounded on the healthcare interoperability framework (HIF), and the importance of his different skills in achieving success in the endeavor. Finally, Sec. 6 concludes with a look at the increasing relevance that the CIO will have in healthcare units as the dynamics of healthcare continue to accelerate and evolve. Moreover, it addresses, as a future challenge, the need to understand and model the role of the CIO using complex systems theory.

2. Business Information Management Systems and Interoperability

Today, many proposals are available to represent data models and services for the main business and manufacturing activities, and the same holds true for the health sector. Some are released as International Standards (e.g., ISO, UN), others are developed at the regional or national level (e.g., CEN, DIN), and others are developed by independent project teams and groups (e.g., OMG, W3C, IAI, ebXML).
Most of the available healthcare standard-based models have been developed in close contact with the health service industry, including the requirements of public and private organizations, following an established methodology. They use optimized software architectures, providing configurable mechanisms focused on extensibility and easy reuse. However, in the foreseen and desired BIS scenario of collaboration and flexibility, the heterogeneity of the selected and needed objects, and their inadequate support for interoperability, delays and even prevents full integration, even when the objects are standard-based. Studies have shown that interoperability within and across public and private health organizations is far from being a reality, particularly in public sector health organizations (Kuhn et al., 2007). For the adoption of standardized models, processes, and services to be considered appropriate, applications must be equipped with suitable mechanisms and standardized interfaces easily adaptable for fast and reliable plug-and-play. Hence, the use of effective and de facto standards to represent data, knowledge, and services has proved fundamental in supporting interoperability between systems. Today this poses a major challenge to those responsible for designing and implementing BIS for health organizations.

2.1. Model-Driven Architecture

The Object Management Group (OMG) has been proposing the model-driven architecture (MDA) as a reference for achieving wide interoperability of enterprise models and software applications. MDA provides specifications for an open architecture appropriate for the integration of systems at different levels of abstraction and through the entire information systems life cycle (Mellor, 2004; Miller and Mukerji, 2001). The architecture is thus designed to promote interoperability of the information models independently of the framework in use (i.e., operating system, modeling and programming language, data servers, and repositories). The MDA comprises three main layers (Mellor, 2004; MDA, 2006). The CIM (computation-independent model) is the top layer and represents the most abstract model of the system, describing its domain. A CIM is a stakeholder-oriented representation of a system from the computation-independent viewpoint; it focuses on the business and manufacturing environment in which a system will be used, abstracting from the technical details of the structure of the implementation system. The middle layer is the PIM (platform-independent model), which defines the conceptual model based on visual diagrams, use-case diagrams, and metadata. To do so it uses the standards UML (unified modeling language), OCL (object constraint language), XMI (XML metadata interchange), MOF (meta-object facility), and CWM (common warehouse metamodel). Thus, the PIM defines an application protocol in its full scope of functionality, without platform dependencies and constraints. For an unambiguous and complete definition, the formal description of the
PIM should use the correct business vocabulary and choose the proper use-cases and interface specifications. The PSM (platform-specific model) is the bottom layer of the MDA. It differs from the PIM in that it targets a specific implementation platform. The implementation method of the MDA, also known as model-driven development (MDD), is therefore achieved through a transformation that converts the PIM to the PSM. This procedure can be carried out through automatic code generation for most of the system's backbone platforms, considering middleware-specific constraints, e.g., CORBA, .NET, J2EE, Web Services. The research community is also developing and validating other proposals, including those known as executable UML, with which the abstract models described in UML are implemented and tested at a conceptual level, i.e., the PIM, before being transformed for implementation on the targeted platform (Mellor, 2004).

2.2. The Service-Oriented Architecture

The World Wide Web Consortium (W3C) refers to the service-oriented architecture (SOA) as "a set of components which can be invoked, and whose interface descriptions can be published and discovered" (W3C, 2006). According to Microsoft, the goal for SOA is a world-wide mesh of collaborating services that are published and available for invocation on a service bus (SOA, 2006). SOA does not consider the service architecture from a technological perspective alone, but also proposes a normalized service-oriented environment (SOE) offering services' description, registration, publication, and search functionalities (Figure 1). Placing its emphasis on interoperability, SOA combines the capacity to invoke remote objects and functions, i.e., the services, with standardized mechanisms for active and universal service discovery and execution. The service-oriented architecture offers mechanisms of flexibility and interoperability that allow different technologies to be integrated with great effectiveness, independent of the system platform in use. This architecture promotes reusability, and it has reduced the time needed to make available and gain access to a new system's functionalities, allowing enterprises to dynamically publish, discover, and aggregate a range of web services through the Internet.

Figure 1. Service-oriented environment based on SOA: the service provider publishes/registers a service description with the service broker; the service consumer searches for and finds the description, then binds to and interacts with the service.
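The publish/find/bind triangle of Fig. 1 can be illustrated with a deliberately minimal, in-memory sketch; a real SOE uses UDDI, WSDL, and SOAP, so the Map-based broker and service names below are purely didactic:

```java
// Toy service broker: publish/register, search/find, then bind and interact.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ServiceBrokerSketch {
    // Broker: service "descriptions" registered under a name.
    static final Map<String, Function<String, String>> registry = new HashMap<>();

    public static void main(String[] args) {
        // Service provider publishes/registers its service.
        registry.put("inventory.quantityOnHand", id -> "quantity(" + id + ") = 120");

        // Service consumer searches for and finds the description in the broker...
        Function<String, String> service = registry.get("inventory.quantityOnHand");

        // ...then binds to the provider and interacts with it directly.
        System.out.println(service.apply("PHARM-42")); // quantity(PHARM-42) = 120
    }
}
```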
Thus, SOA encourages organizations to focus on their business and services, free of the constraints of the applications and platforms. This is an essential feature for organizations to achieve information technology independence, business flexibility, agile partnership, and seamless integration into collaborative working environments and digital ecosystems. Well-known earlier service technologies include Microsoft's DCOM, IBM's DSOM protocol, and the OMG's Object Request Brokers (ORBs) based on the CORBA specification. Today, the use of W3C's web services is expanding rapidly as the need for application-to-application communication and interoperability grows. Web services can implement a business process integrating services developed internally and externally to the company, providing a standard means of communication among different software applications running on a variety of heterogeneous platforms through the Internet. Web services are implemented in XML (eXtensible Markup Language): the network services are described using WSDL (Web Services Description Language), SOAP (Simple Object Access Protocol) is the communication protocol adopted, and the services are registered in the UDDI registry (Universal Description, Discovery and Integration). Although it provides a significant contribution, SOA alone is not yet the answer to achieving seamless interoperability between applications. For example, despite the efforts undertaken to ensure compatibility between all the SOAP implementations, there is still no unique standard. The Web Services Interoperability Organization, WS-I, is a good example of an organization supporting Web services interoperability across platforms, operating systems, and programming languages; WS-I has been developing mechanisms for the convergence and support of generic protocols in the interoperable exchange of messages between web services (WS-I, 2006).
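To ground the stack just described, the hedged sketch below consumes a WS from its WSDL with JAX-WS: the WSDL describes the service, SOAP carries the call, and, in a full deployment, UDDI is where the WSDL would have been found. The URL, QNames, and the OrderPort interface are hypothetical, not from any system in this chapter.

```java
// Illustrative WSDL-driven client; requires a live service at the given URL.
import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class WsdlClientSketch {

    // Service endpoint interface, as wsimport would generate it from the WSDL.
    @WebService(targetNamespace = "http://example.org/orders")
    public interface OrderPort {
        String createOrder(String pharmaceuticalId, int quantity);
    }

    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("http://example.org/orders?wsdl");      // description (WSDL)
        QName name = new QName("http://example.org/orders", "OrderService");
        Service service = Service.create(wsdl, name);              // find
        OrderPort port = service.getPort(OrderPort.class);         // bind
        System.out.println(port.createOrder("PHARM-42", 100));     // interact (SOAP)
    }
}
```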
3. Interoperability in the Healthcare Context

Healthcare services have been pushed toward change mostly due to the awareness of the increasing costs and large inefficiencies in the system. It is now accepted that healthcare is one of the most complex businesses, with a large diversity of types of interactions (Plsek and Wilson, 2001; Lapão, 2007a). Using IT to support service delivery may overcome part of the complexity. New IT solutions for sharing and interacting indeed represent an opportunity to further enhance citizens' participation in the healthcare process, which could improve the services' outcomes. Smith (1997) and IOM (2001) have proposed that only IT can bridge this chasm. IT represents a hope for helping process change to occur and for creating new sorts of services focused on satisfying citizens' real needs. It is therefore necessary to develop communication strategies toward citizens properly aligned with the healthcare network services delivery strategy.
Interoperability of healthcare systems can play a critical role in this process of communication. Several e-Health initiatives and projects have been developed in recent years, some more successful than others, but in general the Information and Communication Technologies (ICT) used were not integrated into a broader e-government strategic plan. There is an awareness that these projects also represent an opportunity to learn and improve the way we are using ICT in healthcare. Today's healthcare technologies allow the easy and fast detection of tumors, the probing of a tiny catheter into the heart to clean arteries, and the destruction of kidney calcifications without touching the skin. Nevertheless, simple things, such as distributing the right medicines at the right place or making sure the doctor's appointment takes place at the right time, can still represent enormous challenges (Lapão, 2007b; Mango and Shapiro, 2001). Progress has been slow. Healy (2000) considers that healthcare ICT systems are simply following a logistic curve, evolving like those of other industries, only lagging behind. Some examples of how ICT can be combined to offer high-value services to end-users, either domain experts or patients, are as follows:

• Electronic Health Records
• Tele-monitoring of real-time data
• Alert Systems
• Pattern recognition in historical data
• Signal and image processing
• Inferences on patients' data using the knowledge base
• Knowledge base editing
• Support for knowledge discovery analysis.
The report "To Err is Human" (IOM, 1999) launched the debate about the importance of using ICT in healthcare to avoid many human errors, while the use of interoperability rules can provide additional pressure toward the proper use of technology in that regard. One should not forget, however, that the introduction of ICT in a healthcare environment requires carefully addressing the information data and management models and the integration of the organization's information infrastructure (Lenz and Kuhn, 2002). Two years after "To Err is Human," the report "Crossing the Quality Chasm: A New Health System for the 21st Century" (IOM, 2001) identified weaknesses in the design and safety of healthcare ICT, mostly related to the lack of pressure to solve those issues, owing to a lack of perception and of the corresponding demand from the citizen. Introducing an interoperability framework will bring these issues to the surface and create pressure (and a structure) for dealing with the mistakes and errors. There is evidence that citizens are more and more aware of the impact of these "mistakes and errors." This growing awareness will potentially affect the healthcare business, and it will become so evident that healthcare organizations will need to invest in order to avoid such errors (Lapão et al., 2007).
The development of integrated and interoperable information systems in healthcare is an essential requirement in the modernization of health (and welfare) systems. Today, there is some evidence that most hospitals and health centers, which for historical reasons rely on a set of IS "islands," show clear signs of inefficiency, lack of interoperability between existing systems, and weak IS integration with processes. As in any other economic sector, within healthcare two main types of interoperability can be identified: the technical and the semantic. Both require wide organizational agreement on standards. The first should take into consideration mostly the industry interoperability standards and, according to our interoperability approach, is dealt with by the PIM and PSM; the second should focus on proper healthcare business data models and processes, i.e., the CIM. Both are huge tasks to be accomplished, and both need people in the organization to deal with them. There is an increasing number of activities seeking to address and measure interoperability. Organizations such as HITSP (Healthcare Information Technology Standards Panel) in the USA and CEN in Europe are defining standards that will be the support structure for interoperability. Specialized groups such as IHE are pushing the debate and developing interoperability profiles to tighten areas of ambiguity en route to stronger interoperability. The HL7 Electronic Health Record (EHR) group has produced many reports and other materials to guide technology managers through the myriad of infrastructures in the transformation toward interoperability. There is a multitude of perspectives that must be considered regarding interoperability in the context of healthcare.

3.1. Semantics

Semantics is a truly fundamental issue. Healthcare is a set of multidisciplinary fields that deals with the health and diseases of the human body. Because of the multiplicity of fields and the different perspectives on the human organism, professionals need to share a common semantic framework in order to be able to work together. Only then are we able to meaningfully exchange and share business-pertinent healthcare information. The challenge is to maintain sufficient information richness and sufficient context for the information to be meaningful and useful to the consumer. This means that the information systems must be able to cope with refinement and evolution.

3.2. Computational Mechanism

Even if we have the richest healthcare business semantics and are exchanging information on paper, we have not yet achieved the potential of EHRs and interoperability. While a purist view requires only information exchange, the format matters. Anyone who has traveled and forgotten a power adapter appreciates the
difference. How things connect is important, and the better the infrastructure we put into place to allow systems to interoperate, the more flexible our organizations will be in adapting to changing needs and technologies.

3.3. Healthcare Business and Context

Maintaining contextual relevance is important to interoperability. Receiving information items such as "systolic and diastolic values" does not convey enough information for that data to be useful. The same is true in the business context. "Patient self-entered" information may be less reliable than information entered by a healthcare professional, or perhaps not. Consider, for example, a family health history, which is fundamental to the physician's work and performance. Understanding metadata and contextual information is relevant as we seek to build interoperability bridges across organizations and enterprises. Conformance is required to guarantee that systems properly address the business process issues. The Healthcare Services Specification Project (HSSP) already includes the notion of conformance profiles, which comprise many of the abovementioned points. By building up a conformance verification that addresses the semantics and the functions (e.g., the computational mechanism), we take a more comprehensive view of interoperability (HSSP, 2007). If we consider the idea of an implementation context, many of these issues come into focus. Including this concept in our perspective of conformance propels this notion forward. We can elect to have an implementation context bringing together the business perspective, policies, and relevant environmental context in play within an organization. We can do the same for a network (e.g., a RHIO), national, or international context (Kuhn et al., 2007). Finally, not everything needs to be standardized and interoperable. As long as we can precisely describe what is and what is not interoperable, we have the freedom to extend specifications to include what we need and still be useful.

3.4. The Healthcare Interoperability Framework (HIF)

Most of the standards contain a framework including a language for data model description, a set of application reference models, libraries of resources, and mechanisms for data access and representation in a neutral format. However, their architecture is typically complex. Especially because of their extent, understanding and mastering a standard completely is a long and arduous task (Bohms, 2001; Dataform, 1997). This has been observed to be one of the main obstacles to the adoption of standard models by software developers. Even when they are aware of a standard that fits the scope of what they are looking for, quite often they prefer not to adopt it and, instead, create a new framework of their own (aecXML, 2006; CEN/ISSS, 2006; Berre, 2002).
Generally, the standard data access interfaces are described at a very low level. Moreover, they are made available with all the complexity of the standard's architecture to be managed and controlled by the user. This circumstance requires a significant effort from implementers and is a source of systematic implementation errors, for instance when functionalities for data access are very similar but have slight differences in attributes, names, or data types (ENV 13550, 2006; Pugh, 1997; Vlosky, 1998). To avoid an explosion in the number of translators required to cover all the existing standard data models, an extension of this methodology proposes the use of standard meta-model descriptions, i.e., the meta-model, using a standard meta-language, and letting the generators work with this meta-model information (Jardim-Goncalves and Steiger-Garção, 2002; Umar, 1999). With this methodology, changing one of the adopted standards for data exchange does not imply an update of the interface with the application using it: only the low-level library linked with the generated code needs to be substituted. If the platform stores a repository with several implementations of standard data access interfaces, the implementer can choose the one desired for the specific case, e.g., through a decision-support multiplexing mechanism. In this case, the change to the new interface will be exercised automatically and access to the new standard will be immediate. A proposal to address this situation considers the integration of SOA and MDA to provide a platform-independent model (PIM) describing the business requirements and representing the functionality of their services. These independent service models can then be used as the source for the generation of platform-specific models (PSM), depending on the web services execution platform. Within this scenario, the specifications of the execution platform will be an input for the development of the transformation between the MDA's PIM and the targeted web services platform. With tools providing automatic transformation between the independent description of the web services and the specific targeted platform, the solution to this problem could be made automatic and global. The appearance of HL7 completely changed the way interoperability was seen in healthcare. HL7 v2 was and remains wildly successful at allowing organizations to interchange information and interoperate, although some problems have emerged with the ambiguity in its "Z" segments, leading to some criticism. We now have an opportunity to benefit from the lessons of the past (successes and mistakes), taking the utility and flexibility offered by HL7 so as to give confidence that things will interoperate within an interoperability profile. However, HL7 v2 provides only part of the solution to the whole healthcare interoperability challenge. As depicted in Fig. 2, HL7 is a base for the PIM of healthcare. In order to have a fully interoperable environment, we will need to develop a CIM, i.e., the Healthcare Sector Model; configure a PIM, i.e., the healthcare clinical processes and procedures and business models, delivered by Services and grounded on HL7; and finally, using many of the existing standards and technologies, set up the PSM.
Figure 2. Healthcare interoperability framework: the CIM is the Healthcare Business Sector Model; the PIM comprises services such as Electronic Health Records, tele-monitoring of real data, and patient management, grounded on HL7 and SDAI; and the PSM layer is realized with technologies such as Web Services, CORBA, and XSL.
The deployment of the Healthcare Interoperability Framework requires the development of an Integration Platform (IP), which is characterized by the set of methods and mechanisms capable of supporting and assisting in the tasks of application integration. When the data models and toolkits working on this IP are standard-based, they are called Standard-based Integration Platforms (Boissier, 1995; Nagi, 1997). The architecture of an IP can be described through several layers, and an onion-layer model has been proposed (Jardim-Goncalves et al., 2006). Each layer is devoted to a specific task and is intended to bring the interface with the IP from a low to a high level of abstraction and functionality. The main goal of this architecture is to facilitate the integration task, providing different levels of access to the platform and consequently to the data, covering several of the identified requirements necessary for the integration of applications (Jardim-Goncalves and Steiger-Garção, 2002). Another challenge is to build up management and ICT teams that address the interoperability issues in accordance with the healthcare business perspective. The organizational side of interoperability has been mostly forgotten (Ash et al., 2003; Lorenzi et al., 1997). Technology tends to be embellished, while organizational issues are often left obscure. Organizations tend to hide many important aspects, except those that show the organization in a good light. This kind of approach does not help when an organization is looking to improve and to address difficult issues such as helping everyone work together properly.
We must also consider that considerable sums of public money have been invested in the development of electronic healthcare systems. The lack of overall coordination of these initiatives presents a major risk to achieving the goal of integrated healthcare records, which in turn will restrain the modernization program of healthcare services.
4. CIO Leadership Driving Interoperability in a Healthcare Environment

For many years healthcare was considered a different kind of industry, removed from cost control and dominated by the physician; management and engineering were not taken seriously either (Lapão, 2007a; Mango and Shapiro, 2001). At the beginning of the past century, the Mayo Clinic showed the way by introducing management and engineering best practices. At that time they had the leadership to open the way; surprisingly, only in recent years have we seen these practices diffuse. For many years healthcare managers simply had to obey and follow the physicians' commands and did not feel the pressure to optimize healthcare processes. Healthcare was supposed to serve the patients' needs, whatever they were, even if it seemed too expensive. At the same time, management information systems simply did not exist. Today there is evidence that information systems (IS) can be an important driver of healthcare efficiency and effectiveness. But in order to take advantage of IS, leadership is necessary to promote the alignment of business with IS. One common barrier is the inadequacy of management tools and models to address healthcare's intrinsic complexity (Lapão, 2007a). When the theory of complexity is applied to strategies for implementing BIS, some interesting answers emerge, as complex organizations and their managers need to cope with complexity accordingly (Kauffman, 1998; Plsek and Wilson, 2001). Complexity in an organization can be understood as the ability of a group of interacting agents to self-organize while obeying only a set of simple rules. For instance, as far as healthcare government policies are concerned, an adequate regulatory body is required, following the international standards, the "simple rules" of action. Furthermore, at the operational level, it means that highly qualified professionals are needed to act as "agents." In this environment the role of the CIO is critical to ensure good focus on the organization's specificities and on the implementation of the IS (Lapão, 2007b). In order to reach interoperability, more sophisticated IS teams are needed to deal with the challenges and to break out of the "vicious circle" (Fig. 3). Most CIOs today are very much focused on procuring, building, and maintaining the IT systems on which the healthcare business operates. Unfortunately, most CIOs do not have a seat at the decision-making table. This situation is incomprehensible, since technology assets are strategic in healthcare today. In order to
Figure 3. The vicious circle disturbing HIS development: lack of skills in healthcare information systems, lack of competition, lack of regulation, and strategic instability (frequent changes in hospital boards).
deliver value, CIOs must clearly understand the dynamics of the healthcare environment and be in a position to provide valuable input to the CEO and the board as part of the management and planning process. CIOs who manage to leverage the uncertain market and technology opportunities will be able to do more to improve physicians' and nurses' productivity. This translates into value, which is most evident in three areas.

4.1. Decision Support

Organizational strategic decisions are always very complex and time dependent. They involve many unknowns and many players acting at the same time. The CEO's ability to set a course of action (vision) largely depends on the availability and interpretation of information, analysis, intuition, emotion, political awareness, and many other factors. The CIO, who should have knowledge of the healthcare system, including the competitive landscape, must be able to optimize the effective delivery of the systems that provide much of the input the CEO and other senior management team members ultimately use in their daily functions.

4.2. Organizational (Value Chain) Visibility

A proactive CIO is in a unique position in that her/his role is perhaps the only one within the organization that has significant visibility across most, if not all, functions within the organization. Finance and accounting, clinical services, maintenance and logistics, patient management, training and R&D, administration, and other functions all have input (planning) and output (execution) relationships to strategy. The CIO should leverage this level of visibility into these functions; in particular, the regular interface with suppliers and patients (e.g., through the healthcare organization's website) can help provide better-fitting insights into opportunities to build competitive advantage for an organization.
4.3. Corporate Governance

In these complex times, organizational governance must be flexible and open to change. The CIO is the key person to play an ever-growing role in ensuring corporate compliance and managing risk. Failure to comply with governmental regulations such as HIPAA could put a healthcare organization at significant financial risk; failure to comply with Sarbanes-Oxley mandates, for instance, could land a CEO in jail. It is a strategic imperative that an organization understand the regulatory landscape and be able to ensure compliance; otherwise it risks facing issues that could decidedly put it at a competitive disadvantage. For healthcare organizations that see the CIO's role as merely that of a technologist (an implementer or maintainer of IT systems), the strategic importance of the role is certainly diminishing. However, when hospitals provide their CIOs with the opportunity to get involved and stay involved with the healthcare processes, and then leverage that knowledge with their technology know-how, the strategic importance of the CIO will increase. As the pace of change in healthcare quickens, organizations must not only be agile and adaptive, they must have quality too. The CIO as strategist can help organizations meet these demands. A Gartner Group study of more than 1,000 CIOs revealed that two-thirds of CIOs felt their jobs might be at risk because they did not deliver the expected value on IT investments. This shows that IT is already recognized as strategic for organizations (Gartner, 2005). There is some recognition that buying "best-of-breed" solutions is not an easy task in healthcare. Many healthcare IS departments have been spending tremendous amounts of time, money, and energy trying to integrate disparate legacy (and new) solutions. As IS departments struggle to meet new business demands in such an ad hoc fashion, they become trapped by technical limitations (many even without business relevance) and are less and less able to respond effectively. Yet, surprisingly, healthcare organizations continue to invest in these "stand-alone" solutions. Cost containment and revenue growth are top priorities today, as is the ability to be more responsive and to adapt as necessary. Stand-alone systems do not help to meet these priorities. Clearly, CIOs must find a way to achieve better visibility in the organization, from operational execution (for instance, helping physicians and nurses improve their productivity while making fewer mistakes) to overall performance (new services deployment and patient satisfaction). A proactive CIO is expected, especially in healthcare. This CIO should promote the development of effective knowledge processes and systems that are able to collect data from any source and deliver timely and accurate information to the right healthcare professional. The challenge for many CIOs, then, is to lead their hospital's efforts to get real benefits and value from the use of IS investments. To meet current requirements with more agility, facing the complexity of the healthcare industry, the CIO needs to learn to collaborate with his fellow managers, avoiding the isolated approaches of the past and accepting that the risk of innovation in healthcare
is the only way to provide long-term quality improvements. This also means that proper implementation assessment is required to seize the information that allows the IS team to evolve and to innovate in a routine manner. CIOs should move to more systemic approaches that can bring all business processes onto an integrated platform (one that uses the same data for analysis and reporting) in order to have a 360° view of the organization's business. Investment in integrated platforms is critical so that data can be used to respond to industry obligations and regulations faster and more efficiently, and to ensure consistent, high-quality business performance management. Regulation in Europe is somewhat behind that of the USA. The USA launched the Health Insurance Portability and Accountability Act (HIPAA) almost seven years ago. With the implementation of HIPAA, all healthcare organizations in the USA that handle health information must conform to national standards for the protection of individually identifiable health information. Despite this positive perspective on the CIO's role, the actual reality is not as good, but there are some windows of hope. Lapão (2007b) found that the best performing hospital information systems (HIS) departments were linked with department heads having characteristics that matched those attributed to a CIO (Broadbent and Kitzis, 2005):

• The best performing HIS directors have a university degree and post-graduate training (not necessarily in HIS).
• They are clearly open to others' suggestions and have an excellent relationship with other healthcare professionals (they also show a rather dense social network within the organization (Lapão, 2007a)).
• They show leadership skills, which help them organize their department to better answer the challenges.
• They have meaningful negotiation skills, which they use regularly in their relationships with the vendors, showing openness to bolder projects with new technologies.
• They plan their work and implement it on at least a "draft" of a HIS strategic roadmap.
• They demonstrate clear awareness of the barriers, difficulties, and complexity of the tasks.
• They also look for opportunities to improve their hospital with partnerships (universities, public administration, other hospitals, vendors, etc.), i.e., they build up a good external social network.

This means that CIOs should be highly qualified persons in order to be able to deal with the challenges of both technology and business issues (Figure 4). CIOs are special people (Ash et al., 2003) who take on the endeavor of pushing the organization further through an innovative use of technology. They know that pushing for interoperability will allow the organization to be more productive and efficient.
Figure 4. The CIO's role in promoting interoperability within the healthcare system: sponsoring the healthcare professionals' network and clinical services, supporting project and change management, developing team building, promoting technology and management skills internally, enhancing communications with customers, managing technology suppliers, sponsoring negotiations, and fostering cooperation with experts, thereby linking citizens, the IT and management systems, and the ICT suppliers.
5. The Role of the CIO: The Case of Hospital São Sebastião

There are already a few examples of CIOs that give us not only hope that things are going to change but also facts about the effective role of CIOs in healthcare. Below we present the case of the Hospital São Sebastião (HSS) at Santa Maria da Feira (in the north of Portugal), how its CIO has been evolving in recent years, and an estimation of the value created by his performance.

5.1. The HSS Information System

The HSS is located about 30 km south of Oporto (Portugal) and provides services to a total population of about 383,000 people. HSS is a 317-bed acute care and trauma facility built in 1999. From the very beginning, hospital planners wanted an organization that would be a significant cut above other public health hospitals in Portugal. Since there was an opportunity to build a new hospital from scratch, there was also an opportunity to make it as good as possible in terms of architecture and technology. As a result, HSS has been a well-equipped hospital from the very beginning. Rather than just buying an expensive (and at the time, insufficiently sophisticated) commercial HIS, the CIO, Mr. Rui Gomes, and his dedicated staff of 11 full-time employees embarked on an endeavor to build an interoperable platform grounded on the IT infrastructure, business, and clinical applications that would best serve the hospital and the people who worked there. The CIO promoted the conditions (a small set of rules) that allow project development to proceed iteratively. From the results point of view, this appears to be serving the hospital
workers and patients extremely well. Mr. Gomes is very dedicated, communicative, and eager to help his staff become better professionals and to accept new ideas coming either from the other managers or from the physicians, which makes him a "special person" (Ash et al., 2003). Since any CIO needs some help from the medical side, Mr. Gomes fortunately had important, indeed indispensable, support. Among the team of physicians, Dr. Carlos Carvalho has been a visionary for almost a decade. His contribution enabled Mr. Gomes to bring together the clinical perspective and the business perspective at the same time. HSS is a hospital where one can see the physicians using information technology seamlessly. They feel like true owners of the systems, since they have been part of the design process from the very early stages of the HIS implementation. This is all the more remarkable when one considers that they built, by themselves, the interoperable platform interlinking solutions, using a "best-of-breed" approach. More impressive is that they did so using commodity software (made available by the Portuguese Healthcare Systems Agency) that costs just pennies on the dollar compared to equivalent solutions used in US and European hospitals. One might find the physicians roaming the halls with Fujitsu Tablet PCs wirelessly connected to the hospital's network. One feels that the hospital is a living innovation center. Among other functionalities, the physicians have complete access to all patient data, including imaging, lab results, etc., and they are able to perform all their charting, from admission to discharge, electronically. Nurses and other caregivers can also access the HIS. Additionally, in the emergency room, an electronic triage system (using a Manchester Protocol-based algorithm) not only helps to prioritize treatment but also times and tracks exactly how that treatment is delivered, sending gentle reminders to staff whenever patients are left waiting longer than necessary. One might discover that the HSS system does not have all the frills that might be found in large vendor solutions used in many American hospitals (e.g., Eclipsys, McKesson, GE/IDX, Cerner, etc.). The CPOE is still a work in progress, although physicians are already using an electronic prescribing solution extensively. Today this electronic prescribing solution has many practical features and is used all over the country; it was broadly deployed as a strategy to reduce and control medicine costs. This is precisely the point. In a complex environment, the system needs to be designed to do exactly what the staff really needs most and to be flexible enough to cope with future evolution. It has an interface and tools that make it intuitive, fast, and highly functional. Perhaps that is why the almost "home-built" HIS solution in use at HSS is so popular with physicians and other caregivers at the hospital.

5.2. The Deployment of the HSS Interoperable Platform: The Role of the CIO

The HSS' HIS architecture was conceived with a healthcare interoperability framework in mind right from the beginning. It is a web-based system that is built on the
interconnection of various clinical and management systems in a "best-of-breed" style. Currently it does not fully use HL7 to make solutions interoperable, but a new version is in preparation that copes with the interoperability standards, particularly considering the deployment of SOA architectures, and thus deploys a complete healthcare interoperability framework combining MDA and SOA. The CIO was the engine of the HIS development. His strong links with the CEO and the Board assisted the decision-making process and the alignment between Board strategy and HIS investment allocation. The capability to look to the future, envisaging an opportunity to develop an HIS with the software and technology at hand while coping with the lack of financial resources, is proof that a true CIO can provide value to his organization. Mr. Gomes started by envisioning a hospital healthcare interoperability platform, allowing the flow of data across the disparate applications, as a key milestone for the deployment of the information systems. However, his perspective went beyond the purely technological, i.e., simply connecting the applications through APIs. Indeed, he developed the business context for the healthcare interoperability platform jointly with Dr. Carvalho, a physician enthusiastic about new ways of working enabled by technology. In reality, they began by creating a CIM — despite not calling it such — in which all the business, clinical, and administrative processes were designed "as-is" and "as-should-be". At this stage they were completely removed from any technology and relied heavily on the existing literature and on on-site visits to see how other leading-edge hospitals were working. Inspired by Mr. Gomes' ideas, Dr. Carvalho had a critical role in the CIM development. He was the driver for all the required modeling activities, motivating and encouraging his colleagues to participate in the effort. Although the project was led by the CIO, Dr. Carvalho fully accepted his role in it: facilitator and leader of the non-technological activities. The pace was set by Mr. Gomes, but Dr. Carvalho could easily accommodate the changes and swings the project experienced. Without question, Dr. Carvalho was instrumental in motivating his fellow physicians and the remaining administrative and management staff in a team project, developed in conjunction with Mr. Gomes, to model the processes along with a vision of how the processes should exist in an interoperable future. As the CIM was being completed, the CIO's role changed. The CIO became more technical and a leader of his 11 technological team members. Mr. Gomes understood the importance of developing an interoperability platform that was not focused on existing applications. His perspective was clear: the healthcare sector is highly dynamic and innovation-prone, and the ICT infrastructure must be able to accommodate changes in technology, applications, processes, etc. In this sense, his strategy was to develop the HSS interoperability platform decoupled as much as possible from the existing technology. In reality, an ICT architecture was designed that allowed independence from the existing applications, i.e., a PIM. At the time, his major concern was not the interoperability with applications from
outside the HSS, but the possibility of easily making the information systems fully interoperable as new applications were deployed and replaced or complemented existing ones, and thus being able to accommodate the dynamics of any healthcare ICT infrastructure. In this phase, Mr. Gomes' role in convincing the Board to grant the necessary resources, time, and some "patience" regarding the deployment of the whole information system was important. His ability to explain to the decision-makers the innovativeness and importance of the approach was critical in sustaining support through some of the inevitable delays that occurred. Once again, Dr. Carvalho was a major supporter of the healthcare interoperability framework being developed for the platform. Despite not being an ICT expert, Dr. Carvalho understood Mr. Gomes' arguments, and he too knew how often the deployment of new applications could bring problems because of the difficulty of integration with the existing ICT infrastructure. As the PIM was completed, Mr. Gomes' team developed the necessary code to implement the PSM, i.e., the interoperability platform. Interoperability is achieved via a set of database mechanisms (using Microsoft BizTalk) that follow a sophisticated data model closely linked to the clinical processes. The HIS datacenter at HSS is based on a Microsoft architecture that includes Active Directory, SQL Server 2005, SharePoint Services, SQL Reporting Services, Balanced Score Card Manager, ISA Server, BizTalk, Exchange, .NET Framework, and Visual Studio 2005. All these licenses are made available free of charge through a government partnership with Microsoft. Mr. Gomes managed to build the HIS out of these packages, adding the important participation of his colleagues at the hospital. This makes him "special" and provides a reason to believe that HIS development depends more on CIO leadership than on technology. Moreover, there is no need to be frightened by the uncertainty of the environment or the complexity of healthcare systems. One needs to look ahead and bring the best people (physicians, nurses, managers, technicians, suppliers, students, professors, etc.) into a large team that will provide the best knowledge and the energy to design, implement, and correct the HIS according to the clinical needs of healthcare professionals.
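To make the decoupling concrete, the minimal sketch below illustrates, in Python, the publish/transform/route pattern such a platform embodies. This is not the actual BizTalk implementation: the application names, message formats, and field names are hypothetical, with the canonical event standing in for the platform-independent model (PIM) and the adapter playing the platform-specific (PSM) role.

```python
# Illustrative sketch only: a toy canonical-model message broker in the
# spirit of the PIM/PSM decoupling described above. The real HSS platform
# is built on Microsoft BizTalk; names and formats here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Admission:
    """Canonical (platform-independent) admission event."""
    patient_id: str
    ward: str

class Broker:
    """Routes canonical events to subscribers, so applications never talk
    to each other directly and can be replaced independently."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Admission], None]]] = {}

    def subscribe(self, event: str, handler: Callable[[Admission], None]) -> None:
        self._subscribers.setdefault(event, []).append(handler)

    def publish(self, event: str, payload: Admission) -> None:
        for handler in self._subscribers.get(event, []):
            handler(payload)

def adt_adapter(broker: Broker, native_record: dict) -> None:
    """Platform-specific adapter: maps one application's native record
    into the canonical model before publishing (the PSM layer's job)."""
    broker.publish("admission", Admission(
        patient_id=native_record["id_no"],    # hypothetical field names
        ward=native_record["dest_ward"],
    ))

if __name__ == "__main__":
    broker = Broker()
    # e.g., the pharmacy system only ever sees canonical events
    broker.subscribe("admission", lambda a: print(f"pharmacy notified: {a}"))
    adt_adapter(broker, {"id_no": "12345", "dest_ward": "cardiology"})
```

Replacing the admission application then only requires writing a new adapter; subscribers such as the pharmacy system are untouched, which is precisely the dynamic the text says the platform was designed to accommodate.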
6. Conclusions and Challenges Ahead

The healthcare sector around the world, and particularly in Portugal, is undergoing great transformation. With the emergence of intensive diagnosis, clinical, and business applications, information systems development will require more careful planning. Moreover, for most healthcare units, it is necessary to integrate existing legacy systems with the new software being deployed. This task is achievable through the careful planning of a healthcare interoperability framework (HIF) that distinguishes between the business, clinical, and administrative processes (the CIM layer), the interoperability mechanism independent of the technology (the PIM layer) and, finally, the coding of the APIs themselves (the PSM layer). This is a highly sophisticated technical approach that requires highly qualified and expert ICT professionals. Yet the role of the "personnel from informatics" is changing considerably, shifting away from a purely technical profile to a top-level management function. Indeed, possibly the greatest threat to the successful implementation of healthcare interoperable systems is a lack of understanding of the importance of the issue, leading in turn to an overall lack of coordination and the absence of a consistent framework for the implementation of integrated personal-care records. Given the amount of public and private financial funds currently being invested and the tight timescales for delivery of objectives, the absence of overall coordination of these programs presents a major risk, not only to the strategy of developing integrated personal-care records but also to the motivation of management. CIOs can therefore play an important role, giving leadership and support to help follow the regulators' rules and to manage the implementation processes in keeping with the organizational culture. The CIO's role is becoming increasingly important as information systems development moves away from being an essentially technical/technological problem toward a stronger grounding in the business context. The MDA and SOA approach to interoperability requires a more holistic methodology for IS development. Thus, rather than having fragmented specialized applications installed for very localized clinical or administrative functions, interoperability and service architectures require horizontal knowledge of the organization. This can only be achieved by a professional who is able to bridge the complex relationships that evolve in any healthcare unit. The HSS case gives reason to believe that HIS success depends largely on the CIO's leadership and interoperability vision. Success does not necessarily rely on buying expensive and sophisticated commercial applications, but rather on having a coherent ICT vision and on being able to involve and commit all stakeholders to the deployment and use of the healthcare information systems. Although the role of the CIO is becoming clearer and more widely accepted across healthcare units, and interoperability frameworks are being recognized as the grand technical challenge of the coming years, other challenges are yet to be fully understood. The way organizations deal with complexity is one such challenge. Theoretical and empirical work has proved the importance of modeling organizations using complexity theory. Yet, as far as information systems are concerned, and specifically with regard to the conception of interoperable systems, it is important to understand how complexity theory can improve the design of interoperable information systems, and how the CIO can further improve the efficacy of ICT by addressing some of the features of complexity. This means that the traditional top-down approach to designing and managing ICT may be questioned, and the need for self-developed and self-configured interoperable information systems that respond effectively to the dynamics of complex systems like a healthcare unit may move to the forefront.
References

aecXML (2006). Retrieved March 22, 2006, from http://www.iai-na.org/aecxml/mission.php.
Ash, JS, PZ Stavri, R Dykstra and L Fournier (2003). Implementing computerized physician order entry: The importance of special people. International Journal of Medical Informatics, 69, 235–250.
Berre, A (2002). Overview of International Standards on Enterprise Architecture. SINTEF.
Bohms, M (2001). Building Construction Extensible Markup Language (bcXML) Description: eConstruct bcXML. A Contribution to the CEN/ISSS eBES Workshop, Annex A, ISSS/WS-eBES/01/001.
Boissier, R (1995). Architecture solutions for integrating CAD, CAM and machining in small companies. In IEEE/ECLA/IFIP International Conference on Architectures and Design Methods for Balanced Automation Systems, 407–416. London: Chapman & Hall.
Broadbent, M and ES Kitzis (2005). The New CIO Leader: Setting the Agenda and Delivering Results. Harvard Business School Press.
CEN/ISSS (2006). European Committee for Standardisation — Information Society Standardization System. Retrieved March 22, 2006, from http://www.cenorm.be/isss.
DATAFORM EDIData (1997). UN/EDIFACT Release 93A.
ENV 13 550 (1995). Enterprise Model Execution and Integration Services (EMEIS). CEN, Brussels.
Gartner (2005). Gartner Survey of 1,300 CIOs Shows IT Budgets to Increase by 2.5 Percent in 2005. Gartner Inc., January 14.
Healthcare Services Specification Project (2007). HSSP Healthcare Standards Report 2007. Retrieved September 23, 2007, from http://hssp.wikispaces.com/.
Healy, JC (2000). EU-Canada e-Health Initiative. EU-Canada Meeting, Montreal, Quebec, Canada.
IOM Report (1999). To Err is Human. Institute of Medicine.
IOM Report (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. Institute of Medicine.
Jardim-Gonçalves, R and A Steiger-Garção (2002). Implicit multi-level modeling to support integration and interoperability in flexible business environments. Communications of the ACM, Special Issue on Enterprise Components, Services and Business Rules, 53–57.
Jardim-Gonçalves, R, A Grilo and A Steiger-Garção (2006). Developing interoperability in mass customisation information systems. In Mass Customisation Information Systems in Business, Blecker, T and Friedrich (eds.). Idea Group Publishing, Information Science Publishing, IRM Press.
Kauffman, S (1998). At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. OUP.
Kuhn, KA, DA Giuse, LV Lapão and SHR Wurst (2007). Expanding the scope of health information systems: From hospitals to regional networks, to national infrastructures, and beyond. Methods of Information in Medicine, 47.
Lapão, LV (2007a). Smart Healthcare: The CIO and the Hospital Management Model in the Context of Complexity Theory. Doctoral Dissertation.
Lapão, LV (2007b). Survey on the status of Portuguese healthcare information systems. Methods of Information in Medicine, HIS Special Issue.
Lapão, LV, RS Santos and M Góis (2007). Healthcare internet marketing: Developing a communication strategy for a broad healthcare network. In Proceedings of ICEGOV 2007, Lisbon.
Lenz, R and KA Kuhn (2002). Integration of Heterogeneous and Autonomous Systems in Hospitals. Data Management & Storage Technology.
Lorenzi, NM et al. (1997). Antecedents of the people and organizational aspects of medical informatics: Review of the literature. Journal of the American Medical Informatics Association, 4, 79–93.
Mango, P and L Shapiro (2001). Hospital gets serious about operations. The McKinsey Quarterly, Number 2.
MDA (2006). Model Driven Architecture, MDA Guide Version 1.0.1, June 2003. Retrieved March 23, 2006, from http://www.omg.org/mda.
Mellor, S (2004). Introduction to Model Driven Architecture. Addison-Wesley. ISBN 0-201-78891-8.
Miller, J and J Mukerji (2001). Model Driven Architecture White Paper. Retrieved March 23, 2006, from http://www.omg.org/cgi-bin/doc?ormsc/2001-07-01.
Nagi, L (1997). Design and implementation of a virtual information system for agile manufacturing. IIE Transactions on Design and Manufacturing, special issue on Agile Manufacturing, 29(10), 839–857.
Plsek, P and T Wilson (2001). Complexity sciences: Complexity, leadership, and management in healthcare organisations. BMJ, 323, 746–749.
Pugh, S (1997). Total Design: Integrated Methods for Successful Product Engineering. Addison-Wesley, Wokingham.
Smith (1997). Internet Marketing: Building Advantage in a Networked Economy. McGraw-Hill.
SOA (2006). The Service Oriented Architecture. Retrieved March 23, 2006, from http://msdn.microsoft.com/architecture/soa/default.aspx.
Umar, A (1999). A framework for analyzing virtual enterprise infrastructure. In Proceedings of the 9th International Workshop on Research Issues in Data Engineering — IT for Virtual Enterprises (RIDE-VE'99), 4–11. IEEE Computer Society.
Vlosky, RP (1998). Partnerships versus typical relationships between wood products distributors and their manufacturer suppliers. Forest Products Journal, 48(3), 27–35.
W3C (2009). World Wide Web Consortium. Retrieved June 2009, from http://www.w3c.org.
WS-I (2009). Web Services Interoperability Organisation, WS-I. Retrieved June 2009, from http://www.ws-i.org.
Biographical Notes

António Grilo holds a PhD degree in e-commerce from the University of Salford, UK. He is Assistant Professor of Operations Management and Information Systems at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, teaching in doctoral, master's and undergraduate degrees. He is also a member of the board of directors of the research center UNIDEMI. He has over 30 papers published in international conferences and scientific journals, and he is an expert for the European Commission
DG-INFSO. Besides academia, he has been working for the last 10 years as a management information systems consultant, particularly in e-business, e-commerce and project management information systems. Currently he is a Partner at Neobiz Consulting, a Portuguese management and information systems company.

Luís Velez Lapão has a degree in Engineering Physics from the Lisbon Institute of Technology, an MSc in Physics from the Technical University of Lisbon (TUL), an MBA in Industrial Management and a PhD in Healthcare Systems Engineering from the TUL. He has a post-graduate degree in Public Management from the John F. Kennedy School of Government, Harvard University. He is the Head of Health IT Governance Systems at INOV-INESC Inovação and is now the National Coordinator for Primary Care Management training. He is Professor and Researcher at the Center for Healthcare Technology and Information Systems at the Oporto Medical School, and Vice-President of AGO, the Garcia-de-Orta Association for Development and Cooperation. He has also been the Portuguese representative at the International Medical Informatics Association since 2005 and is an FP7 expert. He is a visiting Professor of Healthcare Management at Dubai University, United Arab Emirates.

Ricardo Jardim-Gonçalves holds a PhD degree in Industrial Information Systems from the New University of Lisbon. He is Assistant Professor at the New University of Lisbon, Faculty of Sciences and Technology, and Senior Researcher at the UNINOVA institute. He graduated in Computer Science, with an MSc in Operational Research and Systems Engineering. His research activities include standard-based intelligent integration frameworks for interoperability, covering architectures, methodologies and toolkits to support improved development, harmonisation and implementation of standards for data exchange in industry, from design to e-business. He has been a technical international project leader for more than 10 years, with more than 50 papers published in conferences, journals and books. He is the project leader in ISO TC184/SC4.

V. Cruz Machado is an Associate Professor with Habilitation in Industrial Engineering and a researcher in the Department of Mechanical and Industrial Engineering in the Faculty of Science and Technology of the New University of Lisbon, Portugal. He is the director of UNIDEMI (Mechanical and Industrial Engineering Research Centre), director of the Industrial Engineering and Management degree programme, and president of the Portuguese Chapter of the Institute of Industrial Engineers. He holds an MSc and a PhD in Computer Integrated Manufacturing from Cranfield University, UK.
Chapter 3
Information Systems for Handling Patients' Complaints in Health Organizations
ZVI STERN∗, ELIE MERSEL† and NAHUM GEDALIA‡
Hadassah Hebrew University Medical Center, P.O.B. 12000, Jerusalem 91120, Israel
∗[email protected], †[email protected], ‡[email protected]
An essential and inherent part of any managerial process is the monitoring of, and feedback on, all the organization's activities. Every organization needs to know whether it is acting in an effective way and whether its activities are accepted by their recipients in the way intended. The handling of complaints is designed to prevent the recurrence of similar incidents in the future and to improve the performance of the organization. Furthermore, the handling of complaints from the public is important in tempering the bitter feelings and sense of helplessness of the citizen vis-à-vis bureaucratic systems. One of the various tools available to managers for obtaining this much-needed feedback on the organization's activities is complaints from the public. The mechanism for handling complaints from the public and responding to them is generally headed by an ombudsman. Managing information received from complaints and transforming it into knowledge in an effective way requires the database to be complete, up-to-date, versatile and — most importantly — available, accessible, and practical. For this purpose, a computerized system that is user-friendly and interfaces with the demographic and other computerized databases already existing in the hospital is essential. This type of information system should also assist in the ongoing administrative management of complaint handling by the ombudsman. In this chapter, we examine the importance of the ombudsman in public and business organizations in general and in health organizations in particular. The findings presented in this chapter are based on a survey of the literature, on a study we conducted among the ombudsmen and directors of all 26 general hospitals in Israel, and on the authors' cumulative experience in management, in complaint handling and in auditing health systems, as described in case studies. These findings illustrate how it is possible to exploit a computerized database of public complaints to improve various organizational activities, including upgrading the quality of service provided to patients in hospitals.

Keywords: Ombudsman; patient representative; patients' rights; patients' complaints; hospital; healthcare system; quality assurance.
1. Introduction

An inherent part of any managerial process is the monitoring of, and feedback on, the organization's activities. One of the various tools available to managers for obtaining feedback on the organization's activities is complaints from the public. The mechanism for handling complaints from the public is generally headed by an ombudsman. "Ombudsman" is derived from a Swedish concept meaning "representative of the king." The ombudsman's role is to serve as a voice for consumers vis-à-vis the organization, to provide feedback on the organization's activities and to act as one of the catalysts for organizational change and improvement. For the organization, the ombudsman comprises part of the mechanism of internal oversight, serving as a channel for receiving feedback on the activities of various teams, individuals, and functions in the organization, and the way in which these activities are perceived by the consumers. The more powerful the bureaucratic mechanism, the more important the ombudsman's role becomes for the individual. Thus, an effective ombudsman is especially important in the health system, because patients are particularly dependent on the system. Furthermore, with an aim toward enhancing the safety and efficiency of treatment, and in light of skyrocketing insurance premiums for medical malpractice, health systems have come to appreciate the importance of relying on complaints as one of the instruments for improving quality assurance and patient safety — and it is even recommended to encourage patients to complain. In addition, complaints from the public serve as one of the sources of information for the organization's system of risk management. In this chapter, we will examine the importance of the ombudsman in public and business organizations in general and in the health system in particular. We will note the importance of handling complaints as an instrument for promoting quality assurance and enhancing medical treatment, and we will see how it is possible to use information systems to help improve the effectiveness of the ombudsman's work, with specific reference to health systems.

2. Research Methodology

The findings presented in this article are based on a survey of the literature and on a study conducted among the ombudsmen and directors of all 26 general hospitals in Israel. For this purpose, personal interviews were conducted using a very detailed and lengthy questionnaire. The data from all these interviews were accumulated and analyzed by statistical methods using SPSS. These findings were then analyzed through the authors' cumulative experience in management, in complaint handling, and in auditing health systems. The findings are also based on a series of case studies whose findings illustrate how it is possible to exploit a computerized database of public complaints
to improve various processes, including an upgrade in the quality of service provided to patients in hospitals.

3. Theoretical Framework

3.1. The Institution of the Ombudsman

The institution for handling complaints — the ombudsman — developed in states whose social and political systems place a high value on the individual's rights. Two decades after the French Revolution, which brought into sharper focus the concepts pertaining to the individual's place in society, the first ombudsman institution was established in Sweden in 1809. Since then, and particularly during the second half of the 20th century, this institution has become an integral and built-in part of the state's institutional fabric. The need for the ombudsman was highlighted as public administration became involved in nearly every facet of the individual's life and in fulfilling the individual's most basic needs: health, education, and social security. The need grew as this administrative involvement led to the concentration of enormous power in the hands of the government bureaucracy vis-à-vis the individual. During recent decades, we have witnessed an increase in the number of states that have established the institution of an ombudsman. Much of this increase comes from the post-communist states, which are facing complex difficulties that stem from the gap between their political culture and tradition, on one hand, and the constitutional frameworks that define the status and authorities of their new institutions, on the other hand (Dimitris and Nikiforos, 2004). The ombudsman institutions in different countries vary in the way they are appointed, in their subordination and in the scope of their authority. For example, in the Scandinavian model, the ombudsman is authorized to examine topics on his own initiative, and not only those which reach him through complaints (Hans, 2002); in the Israeli model, the role of the ombudsman is integrated with the role of the state comptroller; according to the British model, a direct complaint to the ombudsman is only possible via a member of Parliament; while in the French model the ombudsman, le Médiateur de la République, is appointed by and is subordinate to the head of the executive authority. As in the case of state ombudsmen, various public and business organizations have defined different forms of subordination, authority, and reporting arrangements for the ombudsmen operating within their organizations. It is possible to define three main types of ombudsmen (Ben Haim et al., 2003):

1. The Classical Ombudsman: Usually appointed by the parliament and reports to the body that appointed him. Handles, by law, complaints pertaining to an action or omission of the executive authority vis-à-vis the citizen.
2. The Specialty Ombudsman: Usually appointed by a government regulator and investigates complaints in a defined field, generally pertaining to several organizations.
3. The Organizational Ombudsman: Usually appointed by the management of a specific organization and operates within its framework. An organizational ombudsman has less independence than a classical ombudsman or a specialty ombudsman.

Nonetheless, the common denominator among all of these types of ombudsmen is their work in dealing with complaints from the public and helping to solve problems arising from the relationship between the individual and the bureaucratic mechanism of the organization. Thus, the main characteristic of the institution of the ombudsman is that it is complaint-driven. This means that smart management of complaint handling is the key to the ombudsman's effectiveness (Marten, 2002). The handling of complaints is designed to prevent the recurrence of similar incidents in the future and to improve the performance of the organization. Furthermore, the handling of complaints from the public is important in tempering the bitter feelings and sense of helplessness of the citizen vis-à-vis bureaucratic systems (Nebantzel, 1997). Part of the citizen's calmness and sense of well-being derives from his knowledge that the society in which he lives acts fairly toward its members. The fact that the organization is public-oriented — that there is a specific office (the ombudsman's office) headed by a senior official and designed to stand up for the individual and protect him from the organization itself — can provide the individual with a sense of security. This sense of security does not have to be exercised to be justified. That is, even if the individual, for his own reasons, refrains from complaining, he may feel a sense of relief from the very fact that he could have complained if he wanted to do so. The importance of this feeling for the individual increases in direct relation to the growing power of the organization he faces and the extent of its influence on his basic needs.

3.2. The Ombudsman in the Health System as a Representative of the Patients

Healthcare services differ from other services available in the marketplace because of the existing potential for ever-increasing demand. The demand is a function of a number of factors: growing public awareness and knowledge of health issues, new developments in medical knowledge and technology, changes in morbidity patterns, and the effects of prolonged life expectancy (that is, an aging population). Another contributing factor is the rise in the standard of living, which has brought about a rise in consumer awareness and action, accompanied by expectations of higher standards of service. The growing demand on the part of the public requires managers of healthcare delivery systems to economize to meet the demand. At the same time, growing consumer awareness requires more in-depth scrutiny of the quality of the services provided. In view of these developments, it is evident that quality improvement should incorporate consumer feedback as an integral part of quality assurance.
At the same time, particularly in the health system, the individual is very much dependent on the organization that provides the service. This dependence stems from gaps in knowledge between the caregivers and the patients, as well as from the physical and emotional situation of those in need of medical assistance. This dependence is particularly strong in the contact between the patient and the hospital, where some of the patients are unable to tend to their affairs by themselves and have difficulty finding their way through the web of administrative and medical procedures and rules. The hospitalization period is generally short, only a few days. However, these are difficult and complex days, and sometimes they are the most complicated days of the entire period of illness. This underlines the importance of the ombudsman's role as a representative of the hospitalized patients.

3.3. The Ombudsman in the Health System as a Source of Feedback on the Implementation of Health Policy

Health was perceived in the distant past as a personal matter for which the individual was exclusively responsible. The concept of the right to health and the obligation of the state to maintain and promote this right began to develop during the 19th century and became formally established in the 20th century in the framework of international conventions and legislation in many countries. This includes the International Covenant on Economic, Social and Cultural Rights (1966), which recognizes the right of every person to enjoy the highest possible standard of physical and mental health. To exercise this right, states are expected to create conditions that ensure medical services for all (Carmi, 2003). Various arguments served to justify recognition of the right to health, such as the right to life, which implies the state's duty to work for the health of its citizens, or the state's obligation to enable its citizens to live in dignity. In addition, we also note the Universal Declaration of Human Rights (1948), which recognizes the right of all persons to an adequate standard of living, including medical care; and the European Social Charter (1965), which stipulates that those who do not have adequate financial resources will also be entitled to medical assistance. The charter states that, to ensure the exercise of the general right to protect health, various measures should be taken, such as removing the causes of ill-health, providing advice on promoting health, and preventing contagious diseases. The African Charter on Human and Peoples' Rights (1986) states that every person has the right to enjoy the highest possible level of physical and mental health. This charter obligates the states to provide medical care to the ill and protect the population's health. The Cairo Declaration (1990) recognizes the right of every person to medical care, within the constraints of existing resources. The International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families (1990) states that they are entitled to receive medical care that is urgently needed to save their lives or prevent irreparable damage to their health, and that this care should be provided on the basis of equality with the nationals of that state (Carmi, 2003).
In parallel, various states have passed legislation concerning a citizen's right to receive medical treatment and the state's obligation to provide suitable health services for all citizens, including those who have difficulty financing them (van der Vyver, 1989). In Israel, the National Health Insurance Law, enacted in 1994, states that every resident is entitled to health services. This law mandates automatic coverage of the health services defined in a "health basket" for all residents of the state, regardless of their ability to pay. Another global development involving the expansion of the obligations of the state and health organizations toward patients has occurred in the area of patient rights (Gecik, 1993). In 1793, France issued a decree stating that every patient hospitalized in a medical institution is entitled to a bed. (Until then, two to eight patients had shared a single bed; Reich, 1978.) Some regard this decree as the first document in the area of patient rights. During the second half of the 20th century, the issue of protecting the patient's various rights — including the subject of patient consent, the right of the patient to privacy, the confidentiality of medical information, etc. — became incorporated in international conventions, including (Carmi, 2003): the Council of Europe's convention of 1950; the Lisbon Declaration of 1981 issued by the World Medical Association (amended in 1995); the UN General Assembly Resolution 48/140 of 1993 (Resolution 48/140 on Human Rights and Scientific and Technological Progress); and the Council of Europe's 1997 Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine. Subsequently, various states enacted laws designed to establish the status and rights of the patient. The first to do so was Finland in 1993 (Pahlman et al., 1996), thus preceding the rest of the countries of Europe (Carmi, 2003). In Israel, the Patient's Rights Law was enacted in 1996. Among its other provisions, the law stipulates that a person be appointed in every medical institution to be responsible for handling the complaints of patients — the ombudsman. The monitoring and regulation of the rights of citizens and patients, and of the state's obligations toward them, were assigned to bureaucratic mechanisms. However, the pace of development of expensive medical technologies, the ageing of populations, and the limited economic ability of states to cope with these changes have made it necessary to ration the provision of health services (Nord, 1995). The need to conduct this rationing further empowers the bureaucratic mechanisms and enables them to influence the quality of life of millions of patients. The conflict between the limited resources available to the state, on one hand, and the state's obligation to provide health services and the right of citizens to receive these services, on the other, poses very difficult ethical dilemmas in defining priorities when budgeting health services, including the subsidization of medication, in all welfare states. Western states are wrestling with the question of the correct model for rationing health services and have defined various rules for this purpose.
In light of these trends, the ombudsman's role has become even more important as a source of feedback on the implementation of the defined policy and its effectiveness.

3.4. The Ombudsman in the Health System as an Instrument for Enhancing the Quality of Medical Care

With an aim toward enhancing the safety and efficiency of treatment, and in light of skyrocketing insurance premiums for medical malpractice, health systems in the Western world have come to appreciate the importance of relying on complaints as one of the major instruments for improving quality assurance and safety, as well as a source of important information for the system of risk management (Hickson et al., 2002). In accordance with this perception, it is even recommended to encourage patients to complain (Sage, 2002). Leape et al. (1993) showed that a million avoidable medical errors resulted in 120,000 annual deaths. Most medical mishaps do not result in a complaint, but it is possible to learn from the complaints that are submitted about significant errors that could have been prevented. Dealing with complaints provides a "window" for identifying risks to the patient's well-being (Bismark et al., 2006). An ombudsman can be an effective tool for assistance and education in the health system. He can provide precise and real-time information to the consumer, to the health services provider, to health policymakers, and to relevant legislators — with the aim of improving the entire health system (William and Borenstein, 2000). For example, handling complaints received from patients pertaining to medical records led to an improvement in the completeness and quality of the medical record and, consequently, to fewer potential errors (Brigit, 2005). Another example is the activities of the ombudsman of the Ministry of Health in Israel, which led to the inclusion of new medications and medical technologies in the public "medications basket" (Israel Ministry of Health, 2006). Donabedian (1992) assigns consumers of health services three major roles: contributors to quality assurance, targets of quality assurance, and reformers of health care. As contributors to quality assurance, consumers define their view of what quality is, evaluate quality, and provide information which permits others to evaluate it. However, while patients can certainly contribute by expressing their views on subjects such as information, communication, courtesy, and environment, they usually cannot evaluate the clinical competence of the physician and his treatment — an essential component of quality which must be monitored and evaluated by the physician's peers. Consumers can become targets of quality assurance both as co-producers of care and as vehicles of control. In the role of reformers, consumers can act by direct participation, through administrative support and political action (Javets and Stern, 1996). An important tool for quality assessment, based on consumers' views and perceptions of the care and services they received, is consumers' complaints.
Not much is found in the literature concerning the use of complaints as a tool for quality promotion. In a national survey of five self-regulating health professions in Canada, it was found that two types of activities, a complaints program and a routine audit program, were used to identify poor performers (Fooks et al., 1990). The authors expressed concern as to the public's willingness, ability, and self-confidence to submit complaints when poor performance is encountered. Thus, just as a high level of patient satisfaction expressed in surveys cannot be taken as the only valid indicator that medical services are of high quality, a lack or paucity of complaints does not necessarily prove either high quality of care or complete satisfaction. Nevertheless, there is wide agreement that patients' feedback should be heeded. The process of quality improvement in health care can benefit from patients' participation in the process of its evaluation (Vuori, 1991), if only through patients' expressions of dissatisfaction (Steven and Daglas, 1986). However, the long-term viability of any public complaint-handling system rests on confidence in its fair operation. That is, "the large majority of cases investigated should provide people with assurance that they have been fairly and properly treated or that a disputed decision has been correctly made under the relevant rules" (National Audit Office, the UK, 2005). Patients provide important feedback to healthcare policymakers and providers by voicing their requests and complaints. This mechanism is direct by nature, pointing to problematic areas as perceived by the care receivers. As such, if indeed applied, it can serve as a monitor of the quality of care and service provided, and as a tool for risk management. A complaint-handling function such as an ombudsman in a healthcare organization is a liaison service geared to serve both the complainants and the institution. As educated and conscious consumerism increases, so does the number of complaints received. The growing volume of complaints has become a significant impetus that drives organizations to develop more effective self-corrective means as they sharpen their capability to react to complaints. In the organization, the handling of complaints is performed on a number of levels: the case level, the unit level, the subject level, and the institutional level. All inquiries conducted in relation to complaints can be claimed to have a quality-promotion value. Even on the case level, the discussion of the details of an event with the person who complained and with the relevant service provider has an amending effect and a potentially preventive value. On the unit level, whether medical or administrative, work is directed toward drawing conclusions and taking active corrective steps if recommended by the ombudsman and accepted by the head of service; the same applies to the subject level. On the general institutional level, management should be the quality-improvement and risk-management agent, as expressed by policy-related decisions and the initiation of change. The handling of complaints is twofold: a redress function vis-à-vis the complaining customer and a source of systemic changes for all consumers (Paterson, 2002). Both functions require a proper investigation of the incident or problem in question in order to establish the validity of the complaint. It should be emphasized
that the investigation at the point of occurrence is, by itself, a quality control operation. It is noteworthy that customers of health services are often apprehensive lest their complaints adversely affect the care they require, or may require in the future. As a result, most complaints are submitted post factum rather than at the time the problematic service is extended. Another consequence of such apprehension is the reluctance of many to complain altogether. Therefore, when a complaint is submitted in the health system, particularly in regard to the conduct or care provided by a physician or a nurse, it should be handled with extra attentiveness. The system must assume that if a patient has decided to file a complaint, it might indicate a severe problem. Moreover, the complaint may sometimes indicate that other patients have encountered a similar problem but declined the option to complain formally. As mentioned above, recurrent complaints about certain staff members, units, procedures or service processes may be an indication of a bigger problem and therefore require a more comprehensive, in-depth approach. It should be noted that even recurrent complaints about the same subject that are found to be invalid call for corrective actions: they may point to problems in communication patterns between staff and patients, or between the organization and its consumers in general. Continuous Quality Improvement (CQI) may use complaints for overall examinations of the effectiveness of the process itself and, at the same time, as a tool in individual cases for higher-level improvement activities. Periodic reports of the ombudsman's office, presenting analyses of the aggregate complaint data, provide a viable tool for an overall review of service problems as perceived by the consumers. The analyses provide the relative weighting of problematic areas in terms of the type and volume of the service provided in the hospital. Such comparative analysis over time can identify new problems and recurring problems and, of course, encourage improvement.

3.5. The Ombudsman in General Hospitals in Israel

The Patient's Rights Law of 1996 (Israel Patient's Rights Law, 1996) was enacted as a follow-up to the National Health Insurance Law (1994), which states that health services in Israel are to be based on the principles of justice, equality, and mutual assistance. As part of this fundamental belief, the Patient's Rights Law requires every director of a medical institution to appoint an employee to be responsible for patient rights. The law defines three roles for the ombudsman — the patient rights representative:

1. To provide advice and assistance to the patient in regard to exercising his rights under the Patient's Rights Law.
2. To receive complaints from patients, investigate, and deal with the complaints.
3. To instruct and guide members of the medical institution's staff and management in all matters related to fulfilling the directives of the Patient's Rights Law.
The patient rights representative thus serves as an ombudsman for the patients in medical institutions. Though the organizational ombudsman's role may sometimes be regarded as only a facilitator of individual problem solving, in fact the ombudsman is ideally situated within the organization to make recommendations for systemic change, based on patterns of complaints brought to his office. Indeed, the ombudsman is obligated to take steps to prevent the future recurrence of a problem, as well as to resolve the problem at hand. Furthermore, because of the ombudsman's broad understanding of the organizational culture and the needs of its management and other stakeholders, the ombudsman's office — in addition to being a vital component of the organization's conflict management system — may also participate in designing, evaluating, and improving the entire dispute resolution system for the organization (Wagner, 2000). The authors recently published a study (Stern et al., 2008) that examined, among other things, whether ombudsmen had indeed been appointed in all of the general hospitals in Israel as required by law. The study also examined their background and daily activities. For this purpose, personal interviews were conducted with directors and ombudsmen at all 26 general hospitals in Israel. Our findings indicated that all the hospitals had appointed an ombudsman. According to the assessments of the interviewees, an average of 695 complaints per hospital are submitted each year. The interviewees estimated that the complaints that reach the ombudsman constitute 81%–100% of all complaints submitted to the hospital. The most common complaint pertains to treatment, including attitude, quality of service, and quality of treatment. There are only a few cases of complaints regarding the availability of documents, lack of information, and limited protection of patient rights. Most of the ombudsmen do not handle anonymous complaints. Some 62% of the ombudsmen noted that there is a defined procedure at their hospital for handling patient complaints. Usually, the maximum period stipulated for treating a complaint is 14–21 working days. Most of the ombudsmen said that they have full and free access to information and to the hospital's computerized databases. However, 12.5% of the ombudsmen said that the employees of the hospital are not obligated to respond to their inquiries and questions. Most ombudsmen keep some records of the complaints received and the way they were handled. It was found that some hospitals have a high level of computerization and follow-up, conducted with dedicated complaint-management software. On the other hand, at most hospitals the level of computerization is relatively low and is usually based on basic Excel tables. Managing the information received from complaints has two levels of importance:

1. It allows for an ongoing dialog with the complainants, who sometimes return with the same problem and/or with similar or additional problems over time. Available information from the handling of an earlier complaint in the same area is likely to shorten the process of addressing the new complaint and improves the service provided to the patients.
2. It enables the creation of a "central information database" that is regularly updated in real time for use by the organization and its various levels of management, and is based on feedback received from the recipients of the service.

Managing all of this information and transforming it into knowledge in an effective way requires that the database be complete, up-to-date, versatile and — most importantly — available, accessible, and practical. For this purpose, a computerized system that is user-friendly and interfaces with the existing computerized demographic and other databases in the hospital is essential. This type of information system should also assist in the ongoing administrative management of complaint handling by the ombudsman. We will describe the operational requirements for an information system for managing patient complaints, based on the experience accumulated at Hadassah's hospitals and on the survey conducted in all the general hospitals in Israel. Hadassah is a worldwide voluntary organization established and based in the United States. The organization owns two general hospitals in Jerusalem — a total of about 1,100 hospitalization beds. The hospitals are also research and university teaching centers in collaboration with the Hebrew University of Jerusalem. Five academic schools operate in this framework: a medical school, a nursing school, a dental school, a school of public health, and a school of occupational therapy. The process of handling complaints at Hadassah is similar to the process practiced at most of the general hospitals in Israel (a total of about 14,600 general hospitalization beds). Figure 1 below describes the work process of complaint handling. Each of the stages includes substages whose connections are important for properly engineering the information system needed for managing the complaints and the extra information derived from them. The system was developed by Hadassah's Information Systems Division with a Visual Basic development tool, using Word and Excel to produce reports. The system operates on an NT (Terminal Server) computer network under the Windows operating system, interfacing with an Oracle database. There is a plan to integrate this tailor-made complaint-handling system into Hadassah's newly introduced ERP/SAP environment. As with other systems that contain confidential medical, personal, and private information, and because of the risk that information leaked from the system could be used to harm the hospital and its employees, particularly tight data security needs to be incorporated in this sensitive system. Hadassah has implemented such data security, including a personal identification number (PIN) generator and computerized mechanisms for PIN validation, PIN storage, PIN change or replacement, and PIN termination; input controls (check digits); field checks (for missing data, reasonableness, etc.); communication controls (including channel access controls); and database controls.
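As one concrete illustration of the input controls listed above, the sketch below shows a generic mod-10 (Luhn-style) check-digit validation in Python. The specific algorithms used in Hadassah's system are not described in the text, so this is only an example of the technique, not the actual implementation.

```python
# Illustrative only: a generic mod-10 (Luhn-style) check-digit validator,
# sketching the kind of input control described above. The algorithms
# actually used in Hadassah's system are not given in the text.
def check_digit_valid(number: str) -> bool:
    """Return True if the trailing digit is a valid mod-10 check digit."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    # Double every second digit from the right; fold two-digit results.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert check_digit_valid("79927398713")       # classic Luhn test number
assert not check_digit_valid("79927398710")   # corrupted last digit
```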
Figure 1. Schematic presentation of the work stages as a flow chart. [The chart links the patient and the caregiver, via the medical treatment, to the ombudsman's stages of receiving the complaint, handling the complaint, and deriving managerial information, which lead to remedial and preventive action.]
For the purpose of developing the system, a process analysis was conducted. This analysis indicated that the handling of a complaint can be broken down into three subprocesses: the stage of receiving the complaint, the stage of treating the complaint, and the stage of deriving managerial information from it. Below is a review of each of these substages and of the contribution of computerization to boosting the effectiveness of complaint handling.

3.5.1. The initial stage of handling a new complaint

A complaint may be received orally or in writing. An oral complaint is usually rejected, and the person is told that he must submit the complaint in writing if a formal answer is expected. A written complaint may be submitted by hand or via mail, fax, or e-mail. A notice of confirmation is given for every complaint received. Computerizing this stage, as portrayed in Fig. 2, improves the efficiency of the work processes and enhances the service provided, and can also contribute to the effectiveness of the complaint handling, both in response time and in the handling of the complaint itself. One of the goals of computerizing the handling of complaints is to manage the entire system as a "paperless office." Therefore, it is proposed to integrate scanning of the written complaint, or attachment of the e-mail, into a "computerized folder" in which all of the information about the complaint is managed.
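As a sketch of such a "computerized folder" (a hypothetical data model for illustration, not the structure of the Hadassah system), each complaint record could be keyed by the patient's identity number, with scanned letters and e-mails attached:

```python
# Hypothetical model of the "computerized folder": one record per complaint,
# indexed by the patient's identity number, with scans and e-mails attached.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Complaint:
    patient_id: str        # index field, shared with the ATD and billing systems
    received: date
    channel: str           # "by hand", "mail", "fax", or "e-mail"
    subject: str           # mandatory, chosen from a predefined table
    object: str            # mandatory, chosen from a predefined table
    impression: str = ""   # free text: hurt, angry, calm, ...
    attachments: list[str] = field(default_factory=list)  # scanned files, e-mails

# One folder per patient: every complaint is retrievable by identity number.
folders: dict[str, list[Complaint]] = {}

def file_complaint(c: Complaint) -> None:
    folders.setdefault(c.patient_id, []).append(c)
```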
Figure 2. Screen for receiving a complaint.
The index field for managing the complaints system is the patient's identity number (comparable to the Social Security number in the United States). This is the same index field used in the hospital's demographic and billing systems, and it can serve as the interface between the computerized system for complaint handling and the hospital's ATD (Admission, Transfer, Discharge) systems. The interface enables access to the patient's demographic information, avoids redundant data entry, and improves the reliability of the database. The system immediately checks whether the patient has already complained in the past and alerts the user accordingly. Early detection of past complaints is also important for accessing a previous response given to this patient about the same or a different complaint, and for preventing contradictory responses pertaining to the same case or person. It is also important to identify "serial complainants" who make a practice of complaining frequently.

The subject and the object of the complaint must be mandatory fields in the information system. It is recommended to manage these mandatory fields within tables that are defined in advance, and it is best to minimize the entry of free text at this point so that the information can later be sorted on a common denominator. An example of a complaint categorization screen is shown in Fig. 3. On the other hand, there is room to enter free text about the impression made by the complainant (whether the complainant is hurt, angry, or calm) to help identify the motivation for submitting the complaint: whether the patient is seeking to punish someone or to contribute to improving the treatment or the system in the future, whether he fears the complaint will adversely affect him, and so on.
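Continuing the sketch above (again with invented names; the subject list and the "serial complainant" threshold are our assumptions, as the chapter gives no figures), the past-complaint alert and the table-driven categorization could look roughly like this:

```python
# Past-complaint alert and table-driven categorization (illustrative sketch;
# uses the `folders` index from the previous sketch).
SUBJECTS = {"attitude", "quality of service", "quality of treatment",
            "payment", "hotel services", "availability of documents"}
SERIAL_THRESHOLD = 3   # assumed cut-off for flagging frequent complainants

def check_history(patient_id: str) -> dict:
    """On data entry, surface earlier complaints and flag frequent complainants."""
    history = folders.get(patient_id, [])
    return {
        "has_prior_complaints": bool(history),
        "prior_subjects": sorted({c.subject for c in history}),
        "serial_complainant": len(history) >= SERIAL_THRESHOLD,
    }

def validate_subject(subject: str) -> None:
    """Mandatory fields come from predefined tables, not free text."""
    if subject not in SUBJECTS:
        raise ValueError(f"unknown complaint subject: {subject!r}")
```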
Figure 3. Categorizing the subject of the complaint.
Another important point of information to capture during the initial stage of receiving the complaint is whether the patient intends to file a lawsuit or has already done so.

3.5.2. The stage of dealing with the complaint

The management of complaints constitutes the ombudsman's back-office work and includes the following stages: deciding whom to approach; referring the matter to the person responsible and sometimes to his superiors; receiving a formal response to the complaint; deciding whether the complaint is justified or not; where possible, making suggestions for remedial action within the organization; providing a response to the complainant and notifying the person against whom the complaint was filed; and, when necessary, transferring the complaint to the legal counsel and/or to the risk management unit. At this stage too, we suggest that the system enable scanned files, or files sent via e-mail as a response to a specific complaint, to be attached. The contribution of computerization to this process lies mainly in managing automatic reminders for monitoring the receipt of responses, documenting the answers sent to the complainant, and documenting the solutions assigned to the problem and the follow-up of their implementation, as in the sample screen appearing in Fig. 4. An additional contribution of the computerization of complaint handling is that the system collects the answers received from the respondents and combines them into a document that serves as the basis for the printed response given to the complainant, as portrayed in Fig. 5. This saves redundant data entry.
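The reminder mechanism could be sketched roughly as follows. This is a hypothetical model: the 14-day default mirrors the lower bound of the 14–21 working days stipulated at most hospitals, simplified here to calendar days.

```python
# Sketch of automatic reminders: each referral to a responsible person gets a
# due date, and unanswered, overdue referrals are listed for follow-up.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Referral:
    complaint_ref: str   # e.g., patient identity number plus a sequence number
    addressee: str       # the person responsible, or their superior
    sent: date
    due: date
    answered: bool = False

def open_referral(complaint_ref: str, addressee: str, days: int = 14) -> Referral:
    today = date.today()
    return Referral(complaint_ref, addressee, today, today + timedelta(days=days))

def overdue(referrals: list[Referral]) -> list[Referral]:
    """The reminder list the ombudsman reviews: unanswered past their due date."""
    return [r for r in referrals if not r.answered and date.today() > r.due]
```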
Figure 4. Managing the reminders.
Figure 5. Collecting the information from the complaint handling stages as a basis for the letter of response to the complainant.
3.5.3. The stage of deriving managerial information

Computerization is very helpful in facilitating the dialog between the complainants and the organization, a dialog that can prevent the recurrence of such problems and turn a problem's solution into remedial action at the organizational level. This capability derives from the categorizing executed during the preliminary stage of handling complaints and from the computer's processing of this information after the handling of the complaint is complete. The more flexible the system is in producing new and changing reports in accordance with new and changing managerial needs, the more effective it will be as a working tool for management and as a system that provides data on which decisions can be based. The system is used to periodically produce different reports for various levels of management within the organization, from the director-general down to the field executives. The reports are produced according to various categories — for example, by subject of complaint; by level of justification; by physician or other staff member (by name); by department or unit (frequency); by complaint status (open/closed/in follow-up); and as various comparative reports by year. Figure 6 shows an example of the report generator screen. These reports assist the ombudsman and the management in identifying trends and areas that have been the subject of frequent complaints. Based on this information, they can formulate intervention programs aimed at reducing or preventing the recurrence of such incidents.
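As a rough illustration of the report generator's cross-tabulations (attribute names such as department and status are assumed for the example, not taken from the system), a generic frequency report might be written as:

```python
# Sketch of the managerial reports: frequency counts over any complaint
# attribute, e.g. subject, department, justification level, or status.
from collections import Counter

def frequency_report(complaints, key):
    """Count complaints by an attribute; (value, count) pairs, most frequent first."""
    return Counter(key(c) for c in complaints).most_common()

# Example uses, assuming each complaint record carries these attributes:
#   frequency_report(all_complaints, lambda c: c.subject)
#   frequency_report(all_complaints, lambda c: (c.department, c.status))
#   frequency_report(all_complaints, lambda c: c.received.year)  # comparison by year
```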
Figure 6. Screen for operating the report generator.
Thus, for example, the most frequent complaints focused on the following topics: the attitude of the physician/nurse/other staff toward the complainant and/or his companion; payment (problems pertaining to payments and debts); hotel services; availability of medical documentation; waiting time in line on the day of the appointment; waiting time for an appointment; and the quality of treatment (medical/nursing/other). Sometimes, actions are — or should be — taken as a result of even a single complaint on a specific topic, because the complaint might indicate a failure in an area that entails substantial clinical, legal, or economic exposure for the hospital.

4. Case Studies

4.1. Complaint Handling that Led to Identifying a Need for a Change in General Policy

An accumulation of many complaints about faulty service in the emergency rooms, out-patient clinics, and day hospitalization prompted the management of Hadassah to declare 2007 the year for improving the quality of all ambulatory services. Formal committees chaired by senior members of management were appointed to define the root causes behind the complaints. The committees formulated operative proposals to resolve them and submitted them to the director-general. In 2008, various teams were assigned to work on implementing these recommendations.

4.2. Decisions or Actions Taken as a Result of Complaint Handling in Hadassah's Hospitals that Led to Changes in Work Methods and/or in Specific Areas

Below are examples of corrective actions that evolved from the complaint handling process at Hadassah's hospitals. These examples represent the use of the information system for managing and dealing with patient complaints: locating the areas that require corrective intervention, formulating an intervention program, and implementing it by Hadassah's management.

a) For a number of years, one of the orthopedics departments drew more complaints than any other medical ward in the hospital concerning hospitalization and surgical processes. In the immediate term, each problem was solved on an ad hoc basis. In the short term, the ombudsman actively participated in staff meetings aimed at drawing wider conclusions. In the long term, a CQI program was introduced that devised and launched a new, dedicated pre-hospitalization clinic. All orthopedic patients requiring elective surgery were referred to this clinic, where they receive complete information about the entire process, including written information. Pre-hospitalization tests were coordinated for them in advance, the date of surgery was set, and all arrangements were made for different possibilities of post-surgical rehabilitative care.
The purpose of this clinic was to enhance patient and insurer satisfaction through improved preparation of the patient and his family for the procedure ahead, to improve the efficiency of the processing of pre- and post-hospitalization care, and to shorten the hospitalization period in the department (Javets and Stern, 1996).

b) People complained that in one of the busiest clinics, patients were entering without an appointment or ahead of others who had earlier appointments. As a result, the management of Hadassah, together with the Information Systems Division, worked out a solution that included the installation of computerized screens in the waiting area displaying the list of patients waiting for each physician. To protect patients' privacy, it was decided not to display the patient's name, but only an identifying number. The complaints stopped and patient satisfaction increased. At the same time, and without this being defined as a goal of the project, the satisfaction of the caregivers also rose, because they were less frequently interrupted by patients who were angry about the waiting time and wanted to know when their turn to see the physician would come.

c) A special parking area was designated for patients receiving chemotherapy and radiation treatments, and for the disabled in wheelchairs. Numerous complaints about the availability of these parking spaces led to the discovery of improper conduct: employees arriving for early shifts were parking in the spaces designated for the seriously ill patients. This came on top of a growing shortage of parking spaces for the disabled. Preventive action was taken against these employees; at the same time, additional parking spaces for the disabled were opened, and a shuttle service with a special vehicle was introduced for people who have difficulty walking from the parking areas to the hospital entrance.

d) Recurrent complaints about the improper transfer of blood samples to distant and external laboratories led the hospital's management to change the procedures for transferring blood samples. The change involved instituting new written working procedures in all of the laboratories and allocating suitable means for the transportation and handling of the samples.

e) Patient claims regarding lost dentures, small accidents, damage to personal possessions, and the like used to be forwarded by the hospital administration to the hospital's insurance company agent. The process was very complicated and lengthy, causing much aggravation to the complainants. At the end of this process, in most cases, the damage was paid by the hospital itself when the claim was found to be justified. Today, minor claims are no longer referred to the insurance company agent and are instead handled by the hospital. In the longer term, a committee for handling minor claims was established by the ombudsman. The committee convenes on a flexible schedule to ensure a reply to the complainant within a short period. The committee has developed working procedures and regulations to ensure equity in its decisions about whether to compensate the complainant or to deny his claim as unjustified. Only larger claims, which comprise a small minority of the claims, are still forwarded to the insurance agent; these are followed up by the ombudsman to ensure prompt processing of the claim.
f) Taking care of complaints has also led to identifying areas where instruction is required for employees or patients. Numerous complaints about discourteous behavior by admission clerks led Hadassah's management to take action in the area of customer service training. For several months, workshops on providing service were conducted for employees, and the employees will be required to attend periodic training on this subject.

g) Patients complained about not receiving proper explanations about the correct preparations for virtual colonoscopy. Due to the lack of proper preparation, some tests had to be rejected. The solution was to compose one page of very clear and detailed instructions, prepared by the physician and clinical dietician responsible for this subject.

The above description of changes introduced in the hospital as part of the CQI process illustrates the comprehensive cooperation required to ensure the success of the devised solution. While the ombudsman fills the role of problem identifier (based on the complaints he receives and analyzes with the help of dedicated information systems), the implementation of the CQI process requires the responsiveness of management, the willingness of the staff in the relevant unit to participate in the effort, and the dedication of all to turning the innovation into a continuous program of action. Moreover, the new procedures must be re-evaluated and their effectiveness measured periodically, with the results of this evaluation used for further adaptation to assure the best outcomes.

5. Summary and Conclusions

One of the tools available to managers for receiving feedback on the organization's activities is a mechanism for handling complaints from the public — the ombudsman. Complaint handling comprises a primary and important instrument for receiving feedback on the activities of the various mechanisms, departments, teams, and individual employees in the organization, as well as feedback on the way these activities are received and perceived by the consumers they are intended to serve. In health systems, the ombudsman has a special and very important role in moderating the feelings of bitterness and helplessness of the individual patient, who is in a position of inferiority because he is ill and requires help from highly professional mechanisms within large bureaucracies. In addition, the ombudsman fills an important role as a source of feedback information on the effectiveness of the implementation of health policies, and as a source of information for initiating proactive efforts to improve the quality of medical care.

The computerization of public complaints and their handling enables the creation of an organizational "knowledge base" that can be used to generate numerous reports, according to various parameters.
This "knowledge base" serves managements as an effective tool for enhancing quality, promoting proper administrative processes, boosting efficiency, and empowering the consumer. Transforming information into knowledge and managing information in an effective way require that the database be complete, up-to-date, versatile and — most importantly — available and user-friendly. It is indeed possible to handle complaints without computerization, but the drawing of systemic conclusions would then be merely intuitive. A computerized database, on the other hand, enables the rapid execution of calculations and correlations, and the systematic and effective identification of specific areas in which remedial actions are required to enhance the service and assure quality.

The handling of complaints entails three subprocesses: receiving the complaint, investigating and rectifying it, and deriving the correct lessons from it. We have shown how the use of this type of information system at Hadassah assists in the everyday management of complaint handling in the ombudsman's office. At the same time, we demonstrated how computerized handling of each of the substages has focused attention on problematic areas that required action by management — action aimed at increasing the satisfaction of patients and preventing failures in future medical and other treatments. Our research and conclusions are applicable to public general hospitals as part of the health system in Israel. We suggest examining whether the same conclusions and managerial implications are applicable to other public organizations outside the medical system.

References

African (Banjul) Charter on Human and Peoples' Rights (October 21, 1986).
Ben Haim, A, S Schwartz, S Glick and Y Kaufman (2003). The development of the ombudsman institution in the health services system in Israel. Bitahon Sociali [Social Security], 64, 23, 67–82.
Bismark, MM et al. (2006). Relationship between complaints and quality of care in New Zealand: A descriptive analysis of complainants and non-complainants following adverse events. Quality and Safety in Health Care (QSHC), 15, 17–22.
Brigit, D (2005). Exploring common deficiencies that occur in recordkeeping. British Journal of Nursing, 14(10), 568 (ProQuest Nursing & Allied Health Source).
Carmi, A (2003). Health Law. Sarigm-leon: Nero, 795–800.
Council of Europe, ETS No. 164: Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, Oviedo, 4.IV.1997. http://conventions.cve.int/treaty/EN/Treaties/Html/164.htm.
Dimitris, C and D Nikiforos (2004). Traditional human rights protection mechanisms and the rising role of mediation in southeastern Europe. Iyunim — The Periodical of the Office of the State Comptroller and Ombudsman, 60, 21–33.
Donabedian, A (1992). Quality assurance in health care: Consumers' role. Quality in Health Care, 1, 247–251.
European Social Charter (February 26, 1965).
Fooks, C, M Reclis and C Kushner (1990). Concepts of quality of care: National survey of five self-regulating health professions in Canada. Quality Assurance in Health Care, 2(1), 89–109.
Gecik, K (1993). The need to protect patients' rights. Medical Law, 12(1/2), 109.
Hans, G-H (2002). The ombudsman: Reactive or proactive? Iyunim — The Periodical of the Office of the State Comptroller and Ombudsman, 59, 63–66.
Hickson, GB et al. (2002). Patient complaints and malpractice risk. JAMA, 287(22), 2951–2957.
International Covenant on Economic, Social and Cultural Rights (1966).
Israel Ministry of Health (2006). The decade summary report of the patients' representative. Report No. 7, 3–6.
Israel Patient Rights Law (December 5, 1996).
Javets, R and Z Stern (1996). Patients' complaints as a management tool for continuous quality improvement. Journal of Management in Medicine, 10(3), 39–48.
Leape, LL et al. (1993). Preventing medical injury. Quality Review Bulletin, 19, 144–149.
Marten, O (2002). Protecting the integrity and independence of the ombudsman institution: The global perspective. Iyunim — The Periodical of the Office of the State Comptroller and Ombudsman, 59, 67–84.
National Audit Office (NAO), Government of the United Kingdom (2005). Citizen Redress: What Citizens Can Do if Things Go Wrong with Public Services. London: The Stationery Office.
National Health Insurance Law (1994). Book of Laws 1469 (June 26, 1994).
Nebantzel, YA (1997). Perspectives on the impact of the ombudsman. Studies in State Auditing, 56, 28.
Nord, E (1995). The use of cost-value analysis to judge patients' right to treatment. Medical Law, 14(7/8), 553.
Pahlman, I, T Hermanson, A Hannuniemi, J Koivisto, P Hannikainen and P Liveskivi (1996). Three years in force: Has the Finnish act on the status and rights of patients materialized? Medical Law, 15(4), 591.
Paterson, R (2002). The patient's complaints system in New Zealand. Health Affairs, 21(3), 70–79.
Reich, W (ed.) (1978). Encyclopedia of Bioethics, Vol. 3, p. 1201. New York.
Resolution 48/140 on Human Rights and Scientific and Technological Progress (December 20, 1993).
Sage, WM (2002). Putting the patient in patient safety: Linking patient complaints and malpractice risk. JAMA, 287(22), 3003–3005.
Stern, Z, E Mersel and N Gedalia (2008). Are you being served? The inter-organizational status and job perception of those responsible for patient rights in general hospitals in Israel. Harefuah (in press).
Steven, ID and RM Daglas (1986). A self-contained method of evaluating patient dissatisfaction in general practice. Family Practice, 3, 14–19.
The Cairo Declaration on Human Rights in Islam (August 5, 1990).
Universal Declaration of Human Rights. UN Resolution 217(A) (December 10, 1948).
van der Vyver, J (1989). The right to medical care. Medical Law, 7(6), 579.
Vuori, H (1991). Patient satisfaction — does it matter? Quality Assurance in Health Care, 3(3), 183–189.
Wagner, ML (2000). The organizational ombudsman as change agent. Negotiation Journal, 16(1), 99–114 (Springer Netherlands).
William, S and PE Borenstein (2000). Baltimore's consumer ombudsman and assistance program: An emerging public health service in medical managed care. Maternal and Child Health Journal, 4(4), 261–269.
Biographical Notes

Zvi Stern, M.D., is the Director of the Hadassah Mount Scopus Hebrew University Hospital and an Associate Professor of healthcare administration at the Hebrew University Hadassah Medical School, both in Jerusalem, Israel, where he received his M.D. His research interests include quality improvement in healthcare (concepts, methodology, and assessment) and errors and patient safety, with a focus on the human factor.

Elie Mersel, M.A., M.H.A., is the Chief Audit Executive of MEKOROT, the Israel National Water Company, and a former Internal Auditor of the Hadassah Medical Organization. He is a Council Member of the Institute of Internal Auditors (IIA) in Israel and a lecturer at Tel-Aviv University. His research interests include internal auditing, the control environment, risk management, and quality assurance.

Nahum Gedalia holds an M.A. from the Hebrew University, Jerusalem, and an M.P.A. from Harvard University, Boston. He worked as Deputy and Chief Administrator of both Hadassah University Hospitals, at Ein-Kerem and Mount Scopus alternately, for 30 years. For the last three years he has served as the Patient Representative (Ombudsman) for both hospitals and all other Hadassah schools and institutes.
Chapter 4
How to Develop Quality Management System in a Hospital

VILLE TUOMI

Department of Production, University of Vaasa, P.O. Box 700 (Yliopistonranta 10), 65101 Vaasa, Finland
[email protected]
The objective of this study was to consider how to develop a quality system in a hospital. This is done by answering two questions: what situational factors should be taken into consideration when establishing a quality system, and what should be taken care of during the development process. The study focuses mainly on public hospitals. It is a qualitative, constructive study in which we try to develop a model for the development of a quality management system for a public hospital. This is done from the contingency theory approach and by using content analysis to analyze the study material. As a result of the study, a model for developing a quality system in a hospital was constructed. The results can be generalized in particular to other hospitals. As managerial implications, the model constructed in this study could be applied to other hospitals and professional service organizations, but there is no universal way to develop a QMS, so the system must always be customized to the organization. By improving the fit between the QMS and the contingencies, that is, issues related to customers, an organization will probably improve its outputs and outcomes.

Keywords: Quality management system; hospital.
1. Introduction

Quality management is traditionally seen as a universalistic management system, meaning that some kind of "one best way" to implement quality management is assumed to exist. When we think about hospitals, this may cause problems, because quality management was developed in industrial organizations, whereas hospitals are professional service organizations. In Finland and in many other countries, hospitals are also public non-profit organizations, which may cause problems when implementing a quality management system. Therefore, there should be some kind of quality management model for hospitals that takes into consideration the situation in which the hospitals are operating. This leads us to think about the quality management system from the viewpoint of the contingency approach.
2. Quality Management Systems

A quality management system (QMS) can be defined in many ways:

• A QMS is a formalized system that documents the structure, responsibilities, and procedures required to achieve effective quality management (Nelsen and Daniels, 2007).
• A quality management system is made to direct and control an organization with regard to quality. A system consists of interrelated or interacting elements. A management system is a system made to establish policy and objectives and to achieve those objectives. The management system of an organization can include different management systems, such as a quality management system or a financial management system (ISO 9000:2000, pp. 25–27).
• "Quality system is the agreed on company-wide and plant-wide operating work structure, documented in effective, integrated technical and managerial procedures, for guiding the coordinated actions of the people, the machines, and the information of the company and plant in the best and most practical ways to assure customer quality satisfaction and economical costs of quality" (Feigenbaum, 1991, p. 14).
• A quality system is an assembly of components, such as the organizational structure, responsibilities, procedures, processes, and resources for implementing total quality management. The components interact and are affected by being in the system. The interactions between the components are as important as the components themselves. To understand the system, you have to look at the totality, not just one component (Oakland, 1999, p. 98).

Organizational structure may be considered "the established pattern of relationships among the components or parts of the organization." The structure is relatively stable or changes only slowly. The formal structure of an organization is defined as (a) the pattern of formal relationships and duties (the organization chart plus job descriptions) and (b) formal rules, operating policies, work procedures, control procedures, compensation arrangements, and similar devices adopted by management to guide employee behavior in certain ways within the structure of formal relationships. There is also the informal organization, which refers to those aspects of the system that are not formally planned but arise spontaneously out of the activities and interactions of participants (Kast and Rosenzweig, 1970, pp. 170–173).

As we mentioned before, a quality system consists of different components and the interactions between them. Structure may also be considered part of the quality system (Nelsen and Daniels, 2007; Oakland, 1999, p. 98). In the ISO QMSs, organizational structure is defined as the arrangement of responsibilities, authorities, and relationships between people. A formal expression of the organizational structure is often provided in a quality manual. An organizational structure can include relevant interfaces to external organizations (ISO 9000:2000, p. 27).
The advantages of quality systems are obvious in manufacturing, but they are also applicable in service industries and the public sector. When implementing a QMS, you have to use a language that is suitable for the organization where the QMS is applied (Oakland, 1999, p. 113). So in a hospital, you should use the language of the healthcare industry, integrate the QMS with the other management systems, and see the system as a totality of interacting components that is coordinated and organization-wide.

Researchers have sometimes evaluated the maturity of quality systems by assessing the use of quality tools. Another model of maturity evaluation is based on performance maturity levels on a 1–5 scale. At the lowest level, there is no formal approach; at the second level, there is a reactive approach; the third level is a stable formal system approach; at the fourth level, continual improvement is emphasized; and at the highest level there exists best-in-class performance, meaning a strongly integrated improvement process and demonstrated best-in-class benchmarked results (Sower et al., 2007, p. 124). So, there should be some sort of QMS in the average hospital. The most common QMSs are implemented with the help of the EFQM model (EFQM 11.9.2008) and the ISO 9001 quality management standard or, in the case of Finland, with the help of the SHQS model. A simplification of the logic of the EFQM and ISO is seen in Fig. 1.

3. Implementation of the Quality System in Former Studies

What are we doing if we are implementing a quality system? Some researchers claim that there is a difference between total quality management (TQM) and ISO 9000, such that TQM (see footnote a) is a more effective and practical way to improve the operations of an organization (Yang, 2003, p. 94); at the same time, however, the quality management principles of ISO 9000 and TQM have much in common: customer focus, leadership, involvement of people, process approach, system approach to management, continual improvement, factual approach to decision making, and mutually beneficial relationships (SFS-EN ISO 9000; Magd and Curry, 2003, pp. 252–253). TQM could be seen as a broader approach than ISO 9000, but an organization gets the best results by implementing both approaches at the same time, because they complement each other (Magd and Curry, 2003, pp. 252–253). Two different kinds of processes, the implementation of a TQM system and the implementation of ISO 9000, are listed in Table 1. It is easy to see that the steps in the processes differ from each other; the first step is especially different, in that ISO starts from the customers and Yang's TQM model from the management.

(a) According to a study concerning university hospitals in Iran, TQM requires a quality-oriented organizational culture supported by senior management commitment and involvement, organizational learning and entrepreneurship, team working and collaboration, risk-taking, open communication, continuous improvement, customer focus (both internal and external), partnership with suppliers, and monitoring and evaluation of quality (Rad, 2006).
Figure 1. The EFQM excellence model and ISO 9001. [The figure juxtaposes the EFQM model, in which enablers (leadership, people, policy and strategy, partnerships and resources, and processes) lead to results (people results, customer results, society results, and key performance results) linked by innovation and learning, with the ISO 9001:2000 process model, in which customer requirements form the input to service realization, surrounded by management responsibility, resource management, and measurement, analysis and improvement, under continual improvement of the QMS, yielding a product and satisfied customers. A QMS consists of certain interrelated elements; the aim of the system is to direct and control quality.]
Quality systems in the healthcare industry in Finland were first established during the late 1990s. In those days, quality systems were quite rare even internationally. In Finnish health care, as in most European countries, there is no real competition. Therefore, a certificate as a document is not very valuable. The main benefit results from the external assessment, which ensures a systematic approach and correct implementation of the quality system. The quality system can never be complete; it must dynamically search for better ways to carry out the duties of the organization. In the evolution of the quality system, there may be different phases.
Table 1. Comparison of the Implementation Models: TQM System and ISO 9000.

Implementation model of TQM in healthcare (Yang, 2003, pp. 96–97):
1. Building commitment for management
2. Setting the management principles and quality policies
3. Installing the corrective concepts of quality to employees
4. Conducting TQM educational training
5. Understanding and fulfilling customers' requirements
6. Proceeding with continuous improvement
7. Standardizing and managing the processes
8. Promoting daily management and empowerment
9. Adjusting the style of leadership
10. Constructing the teamwork
11. Performing customer satisfaction surveys and quality audits
12. Changing the organizational culture

ISO 9000 approach to developing and implementing a quality management system (SFS-EN ISO 9000, p. 13):
(1) Determining the needs and expectations of customers and other interested parties
(2) Establishing the quality policy and quality objectives of the organization
(3) Determining the processes and responsibilities necessary to attain the quality objectives
(4) Determining and providing the resources necessary to attain the quality objectives
(5) Establishing methods to measure the effectiveness and efficiency of each process
(6) Applying these measures to determine the effectiveness and efficiency of each process
(7) Determining means of preventing nonconformities and eliminating their causes
(8) Establishing and applying a process for continual improvement of the QMS
When the system is well adopted, the importance of the formal documentation is not as crucial as in the beginning, and it is possible to give more space to innovative planning and implementation of quality issues (Rissanen, 2000).

According to the experiences of the Kuopio University Hospital (KUH), a quality system may be regarded as laborious and restrictive if the guidelines of the standard are taken too seriously and punctiliously. The standard specifies key issues and factors that are probably important for the efficiency and success of work in an organization, but the organization itself should find solutions for implementing these issues in a feasible and useful way. As a whole, the KUH experience shows that it is feasible to establish and maintain a comprehensive quality system in a big hospital; without a structured guideline (for instance, ISO 9001 or a Quality Award), it may be difficult (Rissanen, 2000).

There are successful implementations of the ISO 9000 quality standards in hospitals in the Netherlands (Van den Heuvel et al., 2005, p. 367).
In a longitudinal case study conducted in a Swedish hospital, the staff succeeded in implementing a quality system on a surface level, in the sense that incident reports were written on a daily basis. However, the implementation nevertheless failed in that no learning organization with reflective thinking was put in place. The study brought attention to ambiguity in the organization. As a consequence of ambiguity, the staff had to conduct their work in a way that was not compatible with their understanding of their role and of the best way to accomplish their work goals. This is a work situation that will probably cause an increase in sick leave. The study showed the urgent need for more successful management of work situations characterized by ambiguity. A quality system based on the process of sense making might serve as a panoptic system that can unite the disparate meanings and reach a collective meaning status in order to make effective decisions and adapt successfully to change, and, as a result, remove the ambiguity (Lindberg and Rosenqvist, 2005).

If an organization is using quality awards or ISO 9000 standards for managing its quality, it is possible that "the tail starts to wag the dog," in the sense that the quality manual and the self-assessment report for the award become "image" documents. This means that continuous improvement is forgotten and, for example, the self-assessment is not improvement-oriented but award-driven (Conti, 2007, pp. 121–125). On the other hand, there are also successful implementations of the ISO 9000 quality standards (Van den Heuvel et al., 2005, p. 367) and of quality awards in Europe (the EFQM model) and in the United States (the Malcolm Baldrige framework) (Sánchez et al., 2006, p. 64).

Implementation of QMSs in hospital departments, instead of an organization-wide implementation strategy, has also been successful (Francois et al., 2003, p. 47; Kunkel and Westerling, 2006, p. 131). In Spain and in other countries, the most important issues affecting the success of EFQM implementation were training and experience with the use of the EFQM model. Other important factors were the government's promotion of the model and the development of guidelines for the practical application of the model (Sánchez et al., 2006, p. 64). According to a study concerning the implementation status of CQI in Korean hospitals, the use of scientific CQI techniques and quality information systems are the most critical elements aiding implementation, although structural support and an organizational culture compatible with the CQI philosophy also play an important role (Lee et al., 2002, p. 9).

According to a study concerning organizational change in a large public hospital transforming from a traditional professional hierarchy to an organization based on clinical teams, involvement in the change process and support for both old and new identities were emphasized. This is a cultural change in the sense that professional departments were displaced in favor of clinical teams as the organization's core operational units. In this kind of situation, the change is likely to be resisted by employees, particularly those in low-status groups. The members of low-status groups should be involved in the change process in some way, and there should be concurrent enhancement of both the old and the new identities of the employees (Callan et al., 2007, pp. 448, 457, 464–467).
To conclude, we can now present the important issues to take into consideration during the implementation of a QMS in a hospital:

• An organization gets the best results by implementing both the TQM and ISO 9000 approaches at the same time, because they complement each other.
• It is reasonable to utilize existing models for quality management (EFQM, ISO 9001, etc.), but they should not be followed too punctiliously, and they should be applied in different ways in different hospitals.
• Get training and experience with the model you are utilizing.
• Develop guidelines for the practical application of the model.
• Government should promote the model.
• Use proper techniques and quality information systems to support the implementation.
• Involve employees, especially the members of the organization's low-status groups, in the change process, and enhance both the old and the new identities of the employees.
• During the implementation of the quality system, an organization should pursue a learning organization with reflective thinking in place, to decrease the amount of ambiguity in the organization. A quality system based on the process of sense making might help in reaching a collective meaning status in order to make effective decisions and adapt successfully to change, and, as a result, remove the ambiguity.

Think carefully about what the steps in the implementation of the QMS in a particular hospital are and in what order they should be taken: starting from the customer requirements or from management commitment, and starting from the hospital departments instead of an organization-wide implementation strategy, have both been successful.

4. Contingency Approach in QMS

It is claimed that two concepts will influence the field of quality management in the next several years: organizational context and contingency theory. Organizational context refers to variables that influence the adoption of quality approaches, tools, and philosophies. Contingency theory emphasizes the fact that business contexts are unique and that differences in management approaches should exist to respond to varying business needs. The business context consists of the following factors: people, processes, finance and information systems, culture, infrastructure, organizational learning and knowledge, and closeness to customers (data gathering, interaction, and analysis relative to customers). An organization tries to achieve the best possible outputs and outcomes in its business context, but there is normally some sort of gap between the existing and the desired state of affairs.
In making quality-related strategic choices, we should take into consideration the aforementioned organizational context (inside the organization), the business context (outside the organization), and the body of knowledge concerning quality management (Foster, 2006). As we mentioned before, the kind of quality tools an organization can use depends upon its quality maturity level. When we consider the quality techniques used in hospitals, the EFQM model may be more suitable for top management's use, while ISO 9001 is more applicable at the tactical or operational level (Van den Heuvel et al., 2005, p. 367). These models are worth considering in this article because they are common ways to implement quality management (systems).

This is a constructive study in which we try to develop a model for a quality management system of a public hospital. This is done from the contingency theory approach. The essence of the contingency theory paradigm is that organizational effectiveness results from fitting the characteristics of the organization to contingencies that reflect the situation of the organization. Contingencies include the environment, organizational size, and organizational strategy. Core commonalities among the different contingency theories are the following assumptions: (1) there is an association between contingency and organizational structure, (2) contingency change causes organizational structural change, and (3) fit affects performance (Donaldson, 2001, pp. 1–11; see also Sitkin et al., 1994 or Conti, 2006).

When we talk about quality management and the contingency approach, there are two key issues. First, quality is contingent upon the customers, not upon the organization or its products or services. Second, the quality target is continually shifting, and therefore organizations must pursue rightness and appropriateness in their products or services. The key to an organization's success rests on communication within the organization and between the organization and its environment (Beckford, 1998, p. 160). In many cases, a more situational approach would be suitable for quality management. When we consider a QMS from that viewpoint, the activities (main tasks) of the system are the following:

1. Strategic policy-making process:
— Based on information on (changes in) the environment, a (quality) policy has to be developed, elaborated in the purposes/intentions for the service that is required and the way these purposes/intentions can be realized.

2. Design and development of control, monitoring, and improvement actions:
— Constructing the way in which the controlling, monitoring, and improving take place.
— Constructing the way in which the tasks are divided among individuals and groups in the organization.
— The most important coordination mechanisms (control and monitoring) in a professional service organization are the standardization of knowledge and skills and mutual adjustment, and much of the control is self-control.
3. Control, monitoring, and improvement:
— The level of detail at which control, monitoring, and steering of improvement take place, and their frequency.
— Control, monitoring, and improvement are mainly done by the professionals themselves.
— An important issue is which activities should be done by the customers and how these activities can be controlled (van der Bij et al., 1998).

The strategies of the Finnish social and healthcare services aim at improving quality and effectiveness by the year 2015. The aim is, for example, to improve service quality and to increase the use of evaluation and feedback provided by customers and patients (Sosiaali- ja terveyspolitiikan strategiat 2015, 2006, pp. 4, 5, 18). Public hospitals in Finland face the following changes in their operating environment:

1. Strong pressures to change are connected to the Finnish health services and to their supply, demand, and usage, and the present system cannot be expected to meet these future challenges.
2. The development of Finland's health service is determined by the EU's specifications of the pan-European welfare policy, and globalization also brings challenges.
3. The coming changes in the needs of the aging population are anticipated but can still cause unexpected pressures on services; customers know their rights, will not automatically trust healthcare personnel, and are more demanding.
4. There has to be some kind of prioritization (every customer cannot get every service).
5. There will be more e-services.
6. There will be recruitment problems because of the transition of the large age groups to retirement.
7. New technology makes it possible to improve the quality, productivity, efficiency, and effectiveness of the services, and it increases the expectations of the customers.
8. The number of multiproblem patients is increasing, which creates a need for multiprofessional cooperation across organizational boundaries.
9. Every citizen's own role in, and responsibility for, his or her own health will increase.
10. The financing of the healthcare services comes from several different sources, and the financial system must be clarified (Ryynänen et al., 2004, pp. 2–9, 39, 91, 92).

To conclude: in this study, we look at the contingencies that influence the way a QMS should be implemented in a hospital. To do this, the following issues are important to analyze: (1) the QMS, which means the strategic policy-making process and measurement and improvement; (2) the contingencies, especially external customers (patients); and (3) the possible outputs and outcomes, which can be seen by evaluating how well the QMS is serving its purposes and what the outputs and outcomes of the system are. Based on the above, we can build a preliminary model for analyzing the contingencies of quality management (Fig. 2).
Figure 2. The fit between the QMS and the contingencies in the environment. [The figure shows the fit between (1) the characteristics of the QMS (strategy, policy, measurement, and improvement) and (2) the contingencies in the environment (especially external customers, but also personnel), producing (3) the outputs and outcomes and the functioning of the QMS.]
5. Methodology and Analysis

In this chapter, we use the contingency approach, but the study is qualitative. This is because contingency theory offers a good way to consider what kinds of situational factors should be taken into consideration when implementing a QMS, but at the same time the theory should be developed so that it takes into consideration the level of human actors and not only the organizational level (see Donaldson, 2001, pp. 56–76). This is attempted here by using one hospital and one hospital unit as an example of implementing a QMS, and by constructing a model for the QMS in the hospital. The study is qualitative also because of its subject: quality management is a vague subject, given that the term quality has many dimensions (see, for example, Garvin, 1988), and a hospital is a complex organization with multiple goals (see, for example, Kast and Rosenzweig, 1970, or later works). A qualitative approach allows the researcher to deal with complexity, context, persons and their multitude of factors, and fuzzy phenomena. For example, holistic case studies are applicable in these kinds of situations (Gummesson, 2006, p. 167). Qualitative methods are also very suitable for studies concerning organizational change, because they allow a detailed analysis of the change; by using qualitative methods we can assess how (in terms of the processes involved) and why (in terms of circumstances and stakeholders) the change has occurred (Cassell and Symon, 1994, p. 5).

In this study, the research question is how to develop a QMS, and the answer is found by analyzing the fit between the important stakeholders of the hospital (personnel and customers) and the characteristics of the QMS. The research process consists of the following steps, applied from the study of Lukka (2003):

1. Find a practically relevant problem that also has potential for theoretical contribution.
— Our topic is to find out how to build a quality system in a hospital, which is an acute problem in, for example, the Finnish healthcare system.
From the theoretical viewpoint, it is a question of how to take into consideration all the situational factors and special characteristics of a non-profit healthcare organization when applying quality management, which is said to be universalistic.

2. Examine the potential for long-term research cooperation with the target organization(s).
— I have a research agreement with the case organization, and we have also conducted one study together.

3. Obtain a deep understanding of the topic area, both practically and theoretically.
— In the aforementioned study, we tried to develop the organization in practice by building a process measurement system in the heart unit of the hospital.(b)

4. Innovate a solution idea and develop a problem-solving construction that also has potential for theoretical contribution.
— Because of the universalistic tradition of quality management, we tried to construct a situational quality management model, developed in cooperation between practitioners and the researcher.(c)

5. Implement the solution and test how it works.
— A weak market test is made by showing the results to the quality manager of the target organization.

6. Ponder the scope of applicability of the solution.
— The model is constructed in such a way that it could be used in building quality systems in the case organization as well as in other hospitals and their units.

7. Identify and analyze the theoretical contribution.
— The nature of quality management is considered on the continuum from universalistic theory to a situational approach to quality management.
— The major types of potential theoretical contribution are the novel construction itself and the application and development of existing theoretical knowledge about the quality management features emerging in the case.

The study material consists of semi-structured interviews with the informants, an audit report (SHQuality, 2007), the strategy of the district (Vaasa Hospital District, 2003), and the quality policy and strategy of the district (Vaasa Hospital District, 2007). All the material is analyzed with the help of content analysis (for a similar kind of study, see, for example, Keashly and Neuman, 2008, or Kunkel and Westerling, 2006).

(b) According to Edgar H. Schein, if you want to understand an organization, try to change it.
(c) We have cooperated in former studies, and in this case study I conducted an interview with a quality manager.
The study material is analyzed with the help of content analysis, which can be defined as "any methodological measurement applied to text (or other symbolic materials) for social science purposes" (Duriau et al., 2007). Content analyses are most successful when they focus on facts that are constituted in language, in the uses of the particular texts the content analysts are analyzing. Such linguistically constituted facts can be put into four classes: attributions, social relationships, public behaviors, and institutional realities. Attributions are concepts, attitudes, beliefs, intentions, emotions, mental states, and cognitive processes that ultimately manifest themselves in the verbal attributes of behavior; they are not observable as such. Institutional realities, like government, are constructions that rely heavily on language. Content analysis of what is said and written within an organization provides the key to understanding that organization's reality (Krippendorff, 2004, pp. 75–77).

Central to the value of content analysis as a research methodology is the recognition of the importance of language in human cognition. The key assumption is that the analysis of texts lets the researcher understand other people's cognitive schemas. At its most basic, word frequency has been considered an indicator of cognitive centrality or importance. Scholars have also assumed that a change in the use of words reflects at least a change in attention, if not in cognitive schema. In addition, content analysis assumes that groups of words reveal underlying themes and that, for instance, co-occurrences of keywords can be interpreted as reflecting associations between the underlying concepts (Duriau et al., 2007). See the exact description of the content analysis of this study in the Appendix.

6. Case Study in the Central Hospital of the Vaasa Hospital District

The Vaasa Hospital District consists of three hospitals, all operating under the administration of the Vaasa Central Hospital. The District is owned by 23 municipalities, and it is a bilingual organization (with both Swedish- and Finnish-speaking personnel, customers, and owners). The number of personnel in the year 2006 was 1,997, consisting of nursing staff (1,060), physicians (183), research staff (240), administrative staff (405), and maintenance staff (109). Services are offered to 166,000 inhabitants in the area of the member municipalities (SHQuality, 2007). The Hospital District is one of the 20 Finnish hospital districts.

7. Analysis and Results of the Study

The analysis was made mainly by using content analysis and drawing the conclusions into the tables below. The key themes(d) are presented in Table 2. Tables of this kind are used for thematic content analysis (see Miles and Huberman, 1994, p. 132).
content analysis using former studies concerning contingency theory and quality management to form categories appropriate for this study. Then, I read texts few times and after that (with help of word processing program) the texts were categorized and edited so that the key themes were found. See the Appendix for more details.
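As an aside, the two counting operations mentioned above, word frequency as an indicator of cognitive centrality and keyword co-occurrence as an indicator of association between concepts, are simple to automate. The following minimal Python sketch is our illustration only; it is not part of the original study (which coded the material manually with a word-processing program), and the transcript fragment and keyword list are invented for the example.

from collections import Counter
from itertools import combinations

def word_frequencies(text):
    """Count how often each word appears in a text (rough cognitive centrality)."""
    words = [w.strip('.,!?()"').lower() for w in text.split()]
    return Counter(w for w in words if w)

def cooccurrences(sentences, keywords):
    """Count how often pairs of keywords occur in the same sentence."""
    pairs = Counter()
    for sentence in sentences:
        present = {k for k in keywords if k in sentence.lower()}
        for a, b in combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical interview fragment used only for illustration.
transcript = ("Quality means customer satisfaction. "
              "We measure customer feedback for every process. "
              "Process measurement supports quality improvement.")
sentences = transcript.split(". ")
print(word_frequencies(transcript).most_common(5))
print(cooccurrences(sentences, ["quality", "customer", "process", "measurement"]))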
Table 2. QMS and customers.

Key characteristics of the quality system:

— Quality policy and strategy (Vaasa Hospital District, 2007): Good quality in the Vaasa Hospital District is defined as service processes which are, from the customer's or patient's point of view, high level, available, efficient, and economic, and during which the well-being of the personnel and the expectations of the stakeholders are taken into consideration. Quality work is based on the values stated in the strategy: respect for human dignity, responsibility, and equity. The quality strategy describes quality work based on these values, with emphasis on patients, laws and other regulation, management and strategies, risk management, cross-organizational relationships and cooperation, and the quality system.
— Measurement: The measurement system must be developed at every organizational level as a whole: all objectives must be in such a form that their realization can be measured, processes and personnel issues must be evaluated, and internal audits and management reviews must be in use (SHQuality, 2007). There are many ways to measure customer satisfaction, but process measurement needs to be improved (semi-structured interviews).
— Improvement: There is customer focus in the operation, benchmarking is utilized in development, there is a positive attitude toward education in the organization, and nursing work is developed together with higher education institutions. Issues concerning, for example, well-being at work, personnel, processes, and quality management could be developed (SHQuality, 2007). See also the customers entry below.

Contingencies:

— Customers: There is customer focus in the operation, but there are still some development needs; for example, service processes should be developed across organizational boundaries, and the availability of services should be improved (SHQuality, 2007). Operation is based on customer needs, and the aim is to offer customers services of as high quality as possible in a cost-effective manner. The big question is to shift focus from the service and diagnosis of a single patient to the development of service processes across organizational boundaries (semi-structured interviews). In Finland there is a lack of personnel (recruitment problems), but the number of multi-problem patients is increasing (Ryynänen et al., 2004).
According to the semi-structured interviews and the audit report (SHQuality, 2007), there is no TQM at the organizational level of the Vaasa Hospital District or the Central Hospital, but an organization-wide quality project is going on in the whole district. Some hospital units have their own quality manuals and QMSs because of the characteristics of the units, for example the heart unit, cleaning services, and the laboratories. The hospital is using the Social and Health Care Quality Service, SHQS, which is described in Table 3 below. The work starts with self-evaluations in the units of the hospital; second, quality manuals are constructed in the units; and third, a QMS for the whole hospital district is constructed in electronic form.
From the approach of contingency theory, the fit between the contingencies and the key characteristics of the QMS always has some positive outcomes and outputs. In this study, it is easy to see that the most important improvement in hospital operations is to improve the processes between all the organizations in the field of social and health care offering services to the same patient. When comparing the earlier Table 1 and Table 3, it can be seen that the first phase is different in every list of steps for implementing a QMS. Perhaps the methods (SHQS, ISO, or others) are not so important in themselves, but by choosing one of them an organization can save time and make the implementation easier.
The development of the quality system is made partially at three levels at the same time:
1. Some units already have their own quality manuals and QMSs, and these systems will be coordinated with the SHQS, which is probably not very problematic because of the similarity of the common quality techniques (ISO 9000, EFQM, SHQS, etc.). The majority of the hospital units are developing their quality systems with the help of the SHQS.
2. At the hospital district and hospital level, the coordination of quality management is handled at the level of the hospital district.
3. Quality management between the organizations in the field of social and healthcare services needs to be improved. This means that cooperation between special health care, primary health care, social services, firms, and non-profit organizations is being developed. This kind of cooperation is mentioned in the quality strategy of the hospital district and is implemented in practice, for example, by re-organizing the emergency duty.
8. Conclusions
A QMS consists of different interrelated elements that aim at directing and controlling quality. The objective of this study was to consider how to develop a quality system in a hospital. This is done by answering the questions: what are the situational factors that should be taken into consideration when establishing a quality system, and what should be taken care of during the development process.
Table 3. A Model for Developing a Quality System in a Hospital.
Implementation of the Social and Health Care Quality Service, SHQS (SHQuality, 2007):
1. Starting: both management and all employees get to know the content of the SHQS evaluation criteria and the self-evaluation method.
2. Self-evaluation: management and all employees compare their operations to the evaluation criteria defined beforehand.
3. Development: choosing the most important areas for development at all levels of the organization on the basis of the self-evaluation, and systematically developing the functionality of the service system.
4. Preparing for the audit: agreeing upon the material to be sent beforehand to the auditors, choosing the auditees, agreeing upon the timetables, and informing the whole organization of the practical issues concerning the audit.
5. External audit and reporting: the evaluation is based on the SHQS evaluation criteria.
6. Quality assurance: a separate quality council grants certification, according to the audit group's recommendation, for a certain time period if the hospital is operating according to the international quality criteria.
7. Maintaining the quality label: the organization continues development according to the principles of continual improvement. Maintenance is assured, for example, with the help of regular self-evaluation and internal and external auditing.
What is important in developing a quality system (according to the study material; see the Appendix):
— Developing and fostering quality management know-how while doing quality work, for example during process modeling.
— Concentrating more on the quality of the service (availability of the service, etc.) instead of the care and diagnosis of a single customer/patient, and putting more emphasis on the patients' welfare services as a whole, of which the Vaasa Hospital District is only a single service producer. This would improve the quality and effectiveness of the services, the barriers between professions would be lowered, and there would be less suboptimization in the hospital.
— Managing the totality of operations.
— Customizing the QMS to suit the organization, for example the language used in quality management.
— Motivation and commitment at every organizational level and between organizations.
— Doing quality work in a systematic way, on a daily basis, and thinking long term.
This study focuses on a public healthcare organization, but the results could be at least partly generalized to private organizations as well, because the differences between those organizations may not be as big as they at first glance appear to be (see, for example, Rainey and Bozeman, 2000). Practically all public healthcare organizations cooperate so much with private firms that it may sometimes even be difficult to define whether a healthcare service is public or private.
The study concentrates especially on the Finnish, and thereby European, tradition of quality management, using mainly European examples.
This is a constructive study in which we try to develop a model for a QMS of a public hospital. This is done from the contingency theory approach. The essence of the contingency theory paradigm is that organizational effectiveness results from fitting the characteristics of the organization to contingencies that reflect the situation of the organization. Contingencies include the environment, organizational size, and organizational strategy. Core commonalities among the different contingency theories are the following assumptions: (1) there is an association between contingency and organizational structure, (2) contingency change causes organizational structural change, and (3) fit affects performance (Donaldson, 2001, pp. 1–11; see also Sitkin et al., 1994, or Conti, 2006).
The study gives guidelines for quality management by constructing a model for the development of a quality system in a hospital. The results can be generalized to many countries because of the common roots behind the different quality management models used in practice. On the other hand, the results of this study can be generalized only partially, for example because of the sample of the study. As Lee and Baskerville (2003, p. 241) state, there is only one scientifically acceptable way to establish a theory's generalizability in a new setting: the theory must survive an empirical test in that setting. Therefore, further studies on quality management from the contingency approach are encouraged. The studies could be both qualitative and quantitative.
This study has clear managerial implications. First, the model constructed in this study (see Table 3) could be applied to other hospitals and professional service organizations. In particular, remembering the list of important issues while developing the QMS in a hospital could be useful for every quality management project. Second, there is no universal way to develop a QMS; the system must always be customized to an organization by using one method (SHQS, ISO, EFQM, or something else) and perhaps implementing TQM at the same time. By improving the fit between the QMS and the contingencies, that is, issues related to customers, an organization will probably improve its outputs and outcomes.
References
Beckford, J (1998). Quality: A Critical Introduction. London, New York: Routledge.
van der Bij, JD, T Vollmar and MCDP Weggeman (1998). Quality systems in health care: A situational approach. International Journal of Health Care Quality Assurance, 11(2), 65–70.
Callan, VJ, C Gallois, MG Mayhew, TA Grice, M Tluchowska and R Boyce (2007). Restructuring the multi-professional organization: Professional identity and adjustment to change in a public hospital. Journal of Health and Human Services Administration (Harrisburg), 29(4), 448–477.
Cassell, C and G Symon (1994). Qualitative research in work contexts. In Qualitative Methods in Organizational Research: A Practical Guide, C Cassell and G Symon (eds.), 1–13. London, Thousand Oaks, New Delhi: SAGE Publications.
Conti, T (2006). Quality thinking and systems thinking. The TQM Magazine, 18(3), 297–308.
Conti, T (2007). A history and review of the European Quality Award Model. The TQM Magazine, 19(2), 112–128.
Conway, M (2006). The subjective precision of computers: A methodological comparison with human coding in content analysis. Journalism and Mass Communication Quarterly, 83(1), 186–200.
Donaldson, L (2001). The Contingency Theory of Organizations. Thousand Oaks, London, New Delhi: Sage Publications.
Duriau, VI, RK Reger and MD Pfarrer (2007). A content analysis of the literature in organization studies: Research themes, data sources, and methodological refinements. Organizational Research Methods, 10(1), 5–34.
EFQM (2008). EFQM introducing excellence. Retrieved 11 September 2008 from http://www.efqm.com/uploads/introducing english.pdf
Feigenbaum, A (1983). Total Quality Control, Fortieth Anniversary Edition. New York: McGraw-Hill.
Foster, ST (2006). One size does not fit all. Quality Progress, 39(7), 54–61.
Francois, P, J-C Peyrin, M Touboul, J Labarere, T Reverdy and D Vinck (2003). Evaluating implementation of quality management systems in a teaching hospital's clinical departments. International Journal for Quality in Health Care, 15(1), 47–55.
Garvin, DA (1988). Managing Quality. New York: Free Press.
van den Heuvel, J, L Koning, AJJC Bogers, M Berg and MEM van Dijen (2005). An ISO 9001 quality management system in a hospital: Bureaucracy or just benefits? International Journal of Health Care Quality Assurance, 18(4/5), 361–369.
Gummesson, E (2006). Qualitative research in management: Addressing complexity, context and persona. Management Decision, 44(2), 167–176.
ISO 9000 (2000). Quality management systems: Fundamentals and vocabulary. Finnish Standards Association.
Kast, FE and JE Rosenzweig (1970). Organization and Management: A Systems Approach. New York: McGraw-Hill.
Keashly, L and JH Neuman (2008). Aggression at the service delivery interface: Do you see what I see? Journal of Management and Organization, 14(2), 180–192.
Krippendorff, K (2004). Content Analysis: An Introduction to Its Methodology. Thousand Oaks: Sage.
Kunkel, ST and R Westerling (2006). Different types and aspects of quality systems and their implications: A thematic comparison of seven quality systems at a university hospital. Health Policy, 76, 125–133.
Lee, AS and RL Baskerville (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243.
Lee, S, K-S Choi, H-Y Kang, W Cho and YM Chae (2002). Assessing the factors influencing continuous improvement implementation: Experience in Korean hospitals. International Journal for Quality in Health Care, 14(5), 383–391.
Lindberg, E and U Rosenqvist (2005). Implementing TQM in the health care service: A four-year follow-up of production, organizational climate and staff well-being. International Journal of Health Care Quality Assurance (Bradford), 18(4/5), 370–384.
Lukka, K (2003). The constructive research approach. In Case Study Research in Logistics, L Ojala and O-P Hilmola (eds.), pp. 83–101. Turku: Publications of the Turku School of Economics and Business Administration, Series B 1/2003.
Magd, H and A Curry (2003). ISO 9000 and TQM: Are they complementary or contradictory to each other? The TQM Magazine, 15(4), 244–256.
Miles, MB and AM Huberman (1994). Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks: Sage.
Nelsen, D and SE Daniels (2007). Quality glossary. Quality Progress (Milwaukee), 40(6), 39–59.
Oakland, JS (1999). Total Quality Management: Text with Cases. Oxford: Butterworth-Heinemann.
Rad, AMM (2006). The impact of organizational culture on the successful implementation of total quality management. The TQM Magazine (Bedford), 18(6), 606–625.
Rainey, HC and B Bozeman (2000). Comparing public and private organizations: Empirical research and the power of the a priori. Journal of Public Administration Research and Theory (Lawrence), 10(2), 447–473.
Rissanen, V (2000). Quality system based on the standard SFS-EN ISO 9002 in Kuopio University Hospital. International Journal of Health Care Quality Assurance, 13(6), 266–279.
Ryynänen, O-P, J Kinnunen, M Myllykangas, J Lammintakanen and O Kuusi (2004). Suomen terveydenhuollon tulevaisuudet: Skenaariot ja strategiat palvelujärjestelmän turvaamiseksi. Esiselvitys. Eduskunnan kanslian julkaisuja 8/2004. Tulevaisuusvaliokunta, Helsinki.
Sánchez, E, J Letona, R Gonzalez, M Garcia, J Darpón and JI Garay (2006). A descriptive study of the implementation of the EFQM excellence model and underlying tools in the Basque Health Service. International Journal for Quality in Health Care, 18(1), 58–65.
Sitkin, SB, KM Sutcliffe and RG Schroeder (1994). Distinguishing control from learning in total quality management: A contingency perspective. Academy of Management Review, 19(3), 537–564.
SFS-EN ISO 9000 (2001). Quality management systems: Fundamentals and vocabulary. Finnish Standards Association.
SHQuality (2007). Audit Report of the Vaasa Hospital District, 17 September 2007. Vaasa (in Finnish).
Sosiaali- ja terveyspolitiikan strategiat 2015 — kohti sosiaalisesti kestävää ja taloudellisesti elinvoimaista yhteiskuntaa (2006). Sosiaali- ja terveysministeriön julkaisuja 2006:14. Sosiaali- ja terveysministeriö, Helsinki.
Sower, WE, R Quarles and E Broussard (2007). Cost of quality usage and its relationship to quality system maturity. International Journal of Quality & Reliability Management, 24(2), 121–140.
Vaasa Hospital District (2003). Strategy of the Vaasa Hospital District 2003–2010. Vaasa: Vaasa Hospital District (in Finnish).
Vaasa Hospital District (2007). Quality Policy and Quality Strategies of the Vaasa Hospital District. Vaasa: Vaasa Hospital District (in Finnish).
Yang, C-C (2003). The establishment of a TQM system for the health care industry. The TQM Magazine, 15(2), 93–98.
Appendix: The Conduct of the Content Analysis
The general procedure follows Thietart (2001, pp. 358–360); each step is listed below together with its application in this study.
1. Collecting the data. — In this study, the data consist of semi-structured interviews and organizational documents.
2. Coding the data. As in any coding process, the text is broken down into units of analysis and then classified into categories defined according to the purpose of the research. — In this study, the text is broken down into the following categories: setting up of the quality system; strategies, quality policy, measurement, and improvement; external customers; internal customers (personnel).
2.1. Defining the units of analysis. There are basically two types of content analysis, defined according to the units of analysis used: (1) lexical analysis analyzes the frequency with which words appear, and (2) thematic analysis adopts sentences, portions or groups of sentences as its unit of analysis; the latter type is more common in organizational studies. — In this study, thematic analysis is done using sentences and groups of sentences as the units of analysis.
2.2. Defining the categories. Depending on the coding unit selected, categories are usually described: (a) either in the form of a concept that includes words with related meanings (for example, the category "power" could include words like strength, force, or power); (b) or in the form of broader themes (for example, competitive strategies), which include words, groups of words, or even whole sentences or paragraphs (depending on the unit of analysis defined by the researcher); the main difficulty lies in defining the breadth of the selected categories, which must be related both to the researcher's objectives (narrow categories make comparative analysis more difficult) and to the materials used; (c) in certain cases, the categories may be assimilated to a single word; (d) finally, the categories can be characteristics of types of discourse. — In this study, categories are described in the form of themes concerning (a) sentences which handle the setting up of a quality system in a hospital, (b) certain characteristics of quality management, such as quality policy, strategies, measurement, and improvement, and (c) the key contingencies in the environment, i.e., external customers.
3. Qualitative analysis and interpretation. — In this study, interpretation of the fit between the contingencies and the QMS.
The focus of the analysis in this study has been on both latent and manifest meanings. There are numerous studies in which both latent and manifest dimensions are content analyzed, such as studies focusing on mission statements. The most common content analysis techniques are frequency counts, advanced features, and the qualitative approach, and the research design can be inductive, deductive, or both (Conway, 2006). A quality policy is the same kind of short document as a mission statement. In this study, we concentrate on the qualitative approach and mainly use a deductive approach.
Data used in this research:
• Semi-structured interviews conducted in the spring of 2008 with:
  ◦ the Quality Manager of the Vaasa Hospital District;
  ◦ the Medical Director of the Vaasa Hospital District;
  ◦ the Head of the Heart Unit of the Vaasa Hospital District;
  ◦ the Manager of the Vaasa Hospital District;
  ◦ the Chairman of the government of the Vaasa Hospital District.
Themes in the semi-structured interview:
1. How would you define quality in your organization?
2. Why do we need quality?
3. You have implemented quality management in your organization especially with the help of SHQS/KKA. What kinds of issues belong to quality management generally, "broadly speaking"? How do you define quality management?
4. What is the most important thing in quality management?
5. What kinds of quality tools or techniques do you use?
6. What are the objectives for quality improvement in your organization? — The relationship between the quality objectives and other objectives in the organization.
7. Could you name one practical example of good quality work in your organization?
8. What is the role of management in achieving the quality defined above?
9. What is the role of personnel in achieving the quality defined above?
10. Is quality work/quality improvement more like everybody's normal work or separate development work?
11. What kind of evaluation is there in your organization as a part of quality management/quality control?
12. How do you evaluate customer satisfaction and utilize the information gathered?
13. How should we measure processes?
14. What is the meaning of the processes in producing high-quality services?
15. How should we improve the processes?
16. If you think about your organization from the quality management point of view, what are the strengths and weaknesses that come from your organization, and the opportunities and threats that come from the operational environment?
17. How well can we apply quality management developed in industrial organizations to public sector organizations?
a. Extremely badly — quite badly — quite well — extremely well
b. Why (explanation)?
18. Did I ask all the essential questions from the point of view of quality work, or is there something that I did not think to ask and that is essential from your point of view? Comments:
The interviews were recorded, and their durations varied. The transcripts of the tapes were afterward categorized, and the content analysis was conducted.
Biographical Note
Ville Tuomi is a researcher at the University of Vaasa in the Department of Production. He has also worked as a teacher, trainer, and consultant in the field of management, especially quality management.
Part II Business Process Information Systems
Chapter 5
Modeling and Managing Business Processes
MOHAMMAD EL-MEKAWY∗, KHURRAM SHAHZAD† and NABEEL AHMED‡
Information Systems Laboratory (SYSLAB), Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH)/Stockholm University (SU), Isafjordsgatan 39, Forum 100, 164 40 Kista, Sweden
∗[email protected]
†[email protected] (corresponding author)
‡[email protected]
The purpose of this chapter is to present tools and techniques for modeling and managing business processes. For this, business process modeling is defined and classified according to two levels of detail. These categories are chained together with the help of a transformation technique, which is explained with the help of an example. As the number of processes increases, they can no longer be managed manually. This motivates the need for a software system called a business process management system (BPMS). The properties of a BPMS are explained, and the components of a BPMS, which support the necessary requirements of managing processes, are also presented together with their advantages. Also, the major principles of business process management (BPM) are presented in this chapter.
Keywords: Business process management; process modeling; business process management system; managing process models; business models.
1. Introduction
The use of business processes has become an important way of representing the business and activities of an enterprise in recent years (Redman, 1998). "A business process is a set of coordinated tasks and activities, conducted by both people and equipment, that will lead to accomplishing a specific organizational goal (BPM, 2008)." The management of processes is a burning issue in research these days: a Google Scholar search on the topic returns roughly 2,330,000 results for the last decade. This chapter aims at presenting the business process management system (BPMS) as state-of-the-art technology for managing the processes of a business. In particular, the BPMS will be addressed from an operational perspective. Also, we describe business
process modeling as one of the important dimensions of looking at a business, and we discuss how a BPMS can be used to manage the business of an enterprise.
The rest of the chapter is organized as follows: Section 2 contains the goals of business process modeling, the classification of business process modeling, and the conversion of a business model to a process model. Section 3 contains the properties of a BPMS, its components, and the uses of a BPMS. Section 4 contains the principles that can be used to integrate information systems (IS) and business process management.
2. Business Process Modeling
A process is a collection of activities with clearly identified inputs, outputs, and a specific ordering of activities, whereas business process modeling is the act of representing the current state or proposed state (i.e., "as-is" or "to-be") of the functional activities of an enterprise (Frye and Gulledge, 2007). Modeling business processes enhances the analyzing and planning capabilities of an enterprise, and identifying relationships between the processes of an enterprise increases the understandability of the enterprise architecture and of the relationships between the elements of a business, independent of departmental boundaries. The main goals of business process modeling are (Curtis et al., 1992; Bider and Khomyakov, 1998; Endl and Meyer, 1999):
• To support business process re-engineering to deal with immense market competition.
• To represent a business in order to understand its key mechanisms for analysis and improvement.
• To provide a base for collecting business and user requirements and for information system support.
• To facilitate suitable strategies for the implementation of software packages.
• To facilitate the alignment of the business and the information technology (IT) infrastructure.
2.1. Business Process Modeling: Classification
Business process modeling involves numerous techniques and methods (Curtis et al., 1992; Dean et al., 1994; Plexousakis, 1995) to analyze deeply and further scrutinize business processes (Luo and Tung, 1999). Luo and Tung define a business process as "a set of related tasks performed to achieve a defined business outcome" (Luo and Tung, 1998) and classify business processes into three basic elements: entities, objects, and activities. On the other hand, Denna et al. (1995) identified three basic types of business process in an organization: acquisition/payment, conversion, and sales/collection, where the conversion process refers to converting goods or services from one form to another.
Business users and technical (IT) users have different understandings of a business because of different abstractions of views, levels of detail, and concerns (Andersson et al., 2008). Consequently, business and IT users do not understand the same model. Therefore, business models are developed for business users and process models are developed for IT users. From this, it can be concluded that, for an enterprise, modeling is an integration of business modeling and process modeling (Bergholtz et al., 2007).
2.1.1. Business model
A business model represents the exchange of value between business partners. The values can be resources or services, and the business partners are the actors participating in the value exchange. A business model is also known as an economic model that depicts a value exchange between partners to represent the "what" of the enterprise (Andersson et al., 2006). A business model gives an abstract view (business view) of activities by identifying the values, activities, and partners, represented by resources, exchanges, and agents in the business model. Typically, a business process consists of three components (Lin et al., 2002): customers, ongoing activities, and values that span across departmental boundaries.
e3-value model. e3-value is a formalization of a business model to represent an abstract view of a business. It has value objects and value exchanges, as shown in Fig. 1. An enterprise governs some resources and the rights on those resources in a business scenario. Firstly, the value assigned by an actor or a group of actors to a given resource is highlighted. Secondly, the use of a resource by an actor emphasizes the rights on that particular resource.
Figure 1. A generic e3-value diagram (Kimbrough and Wu, 2004): two actors exchange resources/services and value.
2.1.2. Process model
A process model represents the detailed activities of a business and the relationships between them. Activities are the detailed operational procedures taking place in a business. A process model depicts the operational and procedural aspects to represent the "how" of the enterprise (Andersson et al., 2006). A process model gives a detailed view (low-level view) of activities by identifying the starting point, the relationships between activities, the prerequisites of activities, and the finishing point. There are several process modeling perspectives drawn from
software engineering models. Researchers have deduced the four most common perspectives (Curtis et al., 1992):
Functional view −→ What activities are being performed? What data are necessary to link these activities?
Behavioral view −→ When will these activities be performed? How will they be performed?
Informational view −→ How is the process represented?
Organizational view −→ Where will these activities be performed? Who will perform them?
2.2. Chaining the Business and Process Models
Business and process models are different views of the same business, so they must be related and derivable from each other. The construction of a business model from a process model also assists (a) in capturing the essential concepts of a business process (Lin et al., 2002) and (b) in representing activities and related elements in a structured way (Bider and Khomyakov, 1998). To construct a process model from a business model, a chaining methodology has been proposed by Andersson et al. (2006). The chaining methodology is a four-phase approach that takes the e3-value model as input. Here, we explain the chaining methodology (Andersson et al., 2006) with the help of a short example. The following are the main phases of deriving a process model from a business model.
2.2.1. Input: e3-value model
Phase 1: Explicitly model the value exchange components and use arrows to represent each transfer between actors.
Phase 2: Explicitly model the evidence document component and use arrows to represent each transfer between actors.
Phase 3: Map the e3 model (value transactions and arrows) to open-EDI (ISO/IEC, 2007) phases and add the relevant processes.
Phase 4: Select the appropriate pattern and apply it (to the processes) to identify the internal structure of the process (Fig. 2).
2.2.2. Input
To construct a process model starting from a business model, let us consider a small example of an Internet service provider (ISP). On payment, the ISP provides an Internet service to its customer. We start by laying down a simple e3-value model: an actor (the ISP) provides custody (of the Internet service) to another actor (the customer) and gains some value (money) in response (Fig. 3).
Figure 2. Constructing a process model from a business model. The phases of chaining the business and process models are: model the value exchange, model the evidence document, map the e3 and open-EDI phases, and select and apply a pattern.
Figure 3. e3-value transfer for the ISP: the ISP provides the Internet service to the customer, and the customer provides money to the ISP.
Figure 4. Highlighting the custody factor: the transfer of custody of the Internet service is modeled explicitly alongside the value exchange.
Phase 1: Explicitly model the value exchange components and use arrows to represent each transfer between actors.
In the first step, analyze the exchange of values and determine the custody factor for the value exchanges. Explicitly model the custody of the Internet service by using arrows and lines. The arrows shown in Fig. 4 represent the transfer of custody from one actor to another; in this way, the transfer of the custody element can be clearly seen. The flow of custody can also be represented as a dashed line.
Phase 2: Explicitly model the evidence document component and use arrows to represent each transfer between actors.
Figure 5. Highlighting the evidence document (payment certificate) along with the custody factor.
When the Internet service is up and running and the money has been paid, an evidence document (payment invoice) can be transferred to the payer. In our case, the ISP can provide a payment invoice/certificate to the customer to confirm that the payment has been received. The e3-value model, redrawn with the payment invoice added, is shown in Fig. 5. Similarly, several actors may be involved in transferring evidence documents from one party to another.
Phase 3: Map the e3 model (value transactions and arrows) to open-Electronic Data Interchange (EDI) phases and add the relevant processes.
Before mapping the open-EDI phases to the e3-value model, prior knowledge of the open-EDI phases of a business transaction is essential. Standard definitions of the phases are (Gregoire and Schmitt, 2006):
• The "planning" phase is about decisions on the activities to be performed. This phase cannot be mapped to the example e3 model.
• The "identification" phase is about selecting and linking the partners of the transaction. This phase cannot be mapped to the example e3 model.
• The "negotiation" phase is about creating a common agreement between the transaction participants. This phase is mapped to the value exchange (Internet service and money) of the e3 model.
• The "actualization" phase is about realizing the actual transaction. This phase is mapped to the custody and payment certificate of the example e3 model.
Once the e3 model is mapped to the open-EDI phases, a process is added to each mapping, i.e., a negotiation process for the Internet service, a negotiation process for the money, an actualization process for the custody of the service, and another actualization process for the payment certificate. A sketch of these first three phases in code is given below.
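To make the first three phases concrete, the following minimal Python sketch is our illustration only; the class and phase names are our own, not tooling from Andersson et al. (2006). It models the transfers of the ISP example explicitly and maps each kind of transfer to an open-EDI phase, printing the process to be added for each mapping.

from dataclasses import dataclass

@dataclass
class Transfer:
    giver: str
    receiver: str
    obj: str
    kind: str  # "value", "custody", or "evidence"

# Phases 1 and 2: model the transfers of the ISP example explicitly.
transfers = [
    Transfer("ISP", "Customer", "Internet service", "value"),
    Transfer("Customer", "ISP", "Money", "value"),
    Transfer("ISP", "Customer", "Custody of Internet service", "custody"),
    Transfer("ISP", "Customer", "Payment certificate", "evidence"),
]

# Phase 3: map each kind of transfer to an open-EDI phase; the planning and
# identification phases have no counterpart in this example's e3 model.
OPEN_EDI_PHASE = {"value": "negotiation", "custody": "actualization",
                  "evidence": "actualization"}

for t in transfers:
    phase = OPEN_EDI_PHASE[t.kind]
    print(f"{phase} process: {t.obj} ({t.giver} -> {t.receiver})")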
Figure 6. Properties of a BPMS: define process; store and manage process models; process manipulation; relationship between business and process models; goal definition; process and goal alignment; process simulation; process execution; monitoring and performance evaluation; analysis and management.
Phase 4: Select the appropriate pattern and apply it (to the processes) to identify the internal structure of the process.
Patterns play a central role in the development of a process model; patterns, represented by Unified Modeling Language (UML) activity diagrams, are stored and preserved for further use. A UML activity diagram contains the details of the internal structure of each process. For each process in the extended e3-value model, select the appropriate pattern and apply it to find the low-level details and internal structure of that process (Gregoire and Schmitt, 2006). In our example, the payment pattern (see Fig. 6) can be used from the pattern pool to identify the internal structure of the money payment process.
3. Business Process Management (BPM)
The scope of a business process can be restricted to one department, but it may also extend across more than one department. Departmental information systems therefore cannot manage the processes that span more than one department. BPM technology comes as a solution: it provides the tools, technologies, and infrastructure for developing, manipulating, executing, managing, and simulating business processes. According to the Association of Business Process Management Professionals (ABPMP, 2009),
Business Process Management (BPM) is a disciplined approach to identify, design, execute, document, monitor, control, and measure both automated and nonautomated business processes to achieve consistent, targeted results consistent
with an organization’s strategic goals. BPM involves the deliberate, collaborative and increasingly technology-aided definition, improvement, innovation, and management of end-to-end business processes that drive business results, create value, and enable an organization to meet its business objectives with more agility (Lusk et al., 2005).
It is not practical to build, manage, and control the business processes of an enterprise manually, because an enterprise may have a huge number of processes, which can be large in size and complex in nature (Aalst et al., 2003; Kotelnikov, 2009). In addition, manual handling of business processes hinders their optimization and their alignment with the enterprise goals. Therefore, we need a software system that has the capability to store, share, and execute business processes. Formally, a software system with these capabilities is called a BPMS, and is defined as:
A business process management system is a software system that is used to develop, store, simulate, manage, optimize, execute, and monitor the business processes of an enterprise.
As stated in the definition, by using a BPMS an enterprise can develop its process models and store them for further use. Once the development of the process models has been completed, the BPMS can execute them. Moreover, a BPMS has the ability to monitor and analyze business processes, and it handles all user requests related to business processes.
3.1. Properties of a BPMS
This section elaborates the capabilities that a BPMS should have for implementing, storing, managing, and executing business processes. These properties span the complete life cycle of BPM, from modeling and simulation to analysis and optimization (Muehlen and Ho, 2006). The choice of properties is based on an extensive survey of book chapters, journal, conference, and workshop papers, and white papers published or presented in reputable, high-impact forums. From the survey, a set of all possible properties of a BPMS was prepared. Finally, the obtained properties were filtered by means of a prominent BPM life cycle (Muehlen and Ho, 2006; Netjes et al., 2006). The main properties of a BPMS are given below (Fig. 6).
Process Definition: This is the capability of a BPMS to model and develop business processes. The purpose of this property is to equip users with the ability to add and simulate both process and business models.
Storage and Management: This is the capability of a BPMS to preserve process models and business models and to administer them. The purpose of this property is to provide administrative control over the models.
Process Manipulation: This is the ability of a BPMS to facilitate the insertion, updating, deletion, and retrieval of process models. The purpose of this property is to add the ability to implement business processes.
Model Relationship: The role of a BPMS is to facilitate the development of relationships between business and process models. The purpose of this property is to interrelate process models so that business logic can be developed.
Goal Definition: The role of a BPMS is to define and store the business objectives of an enterprise. The purpose of this property is to make the system more objective-oriented.
Process and Goal Alignment: The role of a BPMS is to align process models with the goals of an enterprise. The purpose of this property is to elicit clearly the purpose of each process and to reflect explicitly which process contributes to the achievement of which goal.
Process Simulation: The role of a BPMS is to simulate process models. The purpose of this property is to prepare for real-life changes and for risk management.
Process Execution: The role of a BPMS is to execute the processes of an enterprise. The purpose of this property is to make the process-based system functional.
Monitoring and Performance Evaluation: The role of a BPMS is to monitor the processes that are being executed and to evaluate their performance against a predefined set of parameters. The purpose of this property is to give control over process execution and to support the optimal execution of processes.
Analysis and Management of Processes: The role of a BPMS is to analyze and manage the processes. The purpose of this property is to keep the decision maker informed about the capabilities and potential of process-based systems.
Figure 6 shows the dependency relationships between the properties of a BPMS, i.e., the initiation of a property depends upon the completion of its prerequisites. Graphically, the property at the tail of an arrow is the prerequisite of the property at the head of the arrow. For example, the prerequisites of the property "process and goal alignment" are process definition, storage and management of the process model, and goal definition.
3.2. Components of a BPMS
To obtain the properties mentioned in the preceding section, an architecture of a BPMS is proposed in Fig. 7 (with major modifications to Vojevodina, 2005). In the current section, we present the necessary functionalities of the components of the proposed BPMS architecture. We also illustrate the relationship between the properties discussed in the preceding section and the proposed components.
Modeling Interface: This component is used to develop a process model. It is a graphical tool that represents processes and their activities with the help of graphical notions.
Figure 7. Architectural components to meet the requirements of a BPMS: a modeling interface for the process developer; a dashboard and management tools for the process user; a model simulation component; an execution engine; a metadata engine, data processor, buffer manager, activity monitor, rule engine, and transaction manager; and repositories for metadata, process models, business models, and rules, together with the process log.
The sequence of activities and the information flow between them are represented by sequential activities, parallel activities, or loops between activities. Moreover, similar to computer-aided software engineering (CASE) tools, this component provides a drag-and-drop facility for developing a process model and the relationships between processes by using predefined constructs. This component delivers the first property (process definition) of a BPMS.
Repositories: These are the storage spaces used to store metadata, process models, business models, business rules, and process execution logs. They are a vital component of a BPMS that takes care of all storage-related issues and holds the mass of process models produced by the modeling interface. Repositories are dynamic in the sense that their contents can be manipulated by editing process models and business models. As soon as the models are modified, their corresponding metadata are stored in the metadata repository, the rules are manipulated if required, and the transaction is recorded in the process log. This component partially delivers the second property (storage and management) of a BPMS. A minimal sketch of this behavior follows below.
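The following minimal Python sketch illustrates the repository behavior just described; all names are hypothetical, since no concrete repository API is prescribed here. Saving a model updates its metadata and records the transaction in the process log, keeping the three stores consistent.

import datetime

class ModelRepository:
    def __init__(self):
        self.models = {}        # process/business model repository
        self.metadata = {}      # metadata repository
        self.process_log = []   # process log

    def save(self, name, model, author):
        """Insert or update a model; keep metadata and the log consistent."""
        action = "update" if name in self.models else "insert"
        self.models[name] = model
        version = self.metadata.get(name, {}).get("version", 0) + 1
        self.metadata[name] = {"author": author, "version": version,
                               "modified": datetime.datetime.now()}
        self.process_log.append((datetime.datetime.now(), action, name))

repo = ModelRepository()
repo.save("order-handling", {"activities": ["receive", "check", "ship"]}, "analyst")
repo.save("order-handling", {"activities": ["receive", "check", "pack", "ship"]}, "analyst")
print(repo.metadata["order-handling"]["version"])  # -> 2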
Simulation Component: As its name suggests, this component is used to simulate business processes (process and business models). It is used to simulate real-life changes in business processes in order to identify bottlenecks that could be faced during implementation (Schiefer et al., 2007). By simulating business scenarios, other properties, such as performance and flexibility, can be predicted. Using simulation tools, one can execute a process to examine its effects on other processes. The simulation results can greatly contribute to anticipating real-world changes, reducing the risk of implementation, and increasing the ability to deploy new processes in production quickly.
Activity Monitor: This component is responsible for ensuring the integrity of process models, and it is used to monitor activities when a process is under execution. With the help of this component, supervisors and administrators can monitor the performance of processes via the parameters for executing, manipulating, and retrieving processes; examples of these parameters are correct execution, consistent manipulation, and reliable retrieval, respectively. On this basis, corrections and modifications can be decided. This component partially delivers the second-to-last property, monitoring and performance evaluation, of a BPMS.
Data Processor: On receiving a data manipulation request, this component interacts directly with the repositories (the process and business model repositories) and performs the necessary actions. It comes into action whenever a model is manipulated (inserted/updated/deleted), retrieved, executed, simulated, or monitored. To summarize, for all actions on process models, the data processor comes into play to ensure interaction with the repositories and their consistency.
Rule Engine: This component interacts with the business and process rules repository. It also ensures the enforcement of all rules; for example, if a process violates a business rule during execution, the rule engine intercedes to stop the process execution. As another example, if a rule is violated by the data processor during manipulation, the rule engine blocks the manipulation of the process/business model. It ensures the correct information flow and the consistency of the process models with their corresponding rules. A minimal sketch of this interception behavior is given below.
Buffer Manager: This component handles the process models that are available in memory during execution, and it also takes care of the operations of the process log. It is the responsibility of the buffer manager to ensure the consistency of the log and to handle blocks of pages by making them available in the cache if they are not already there. An advantage of its use is that it minimizes the number of disk accesses by answering requests from information already available in memory.
Transaction Manager: This component, together with the buffer manager and the process log, is responsible for the consistency of the process model repository. It ensures the concurrent execution of processes and the atomicity of processes during execution and manipulation. In the case of system failure, incomplete processes are rolled back, whereas completed (but uncommitted) processes are committed. Through this component, a BPMS is made a fault-tolerant system.
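The interception behavior of the rule engine can be illustrated with a minimal Python sketch; the rules and names below are invented for the example and do not correspond to any particular BPMS product.

class RuleViolation(Exception):
    pass

class RuleEngine:
    def __init__(self, rules):
        self.rules = rules  # each rule: (description, predicate)

    def check(self, process):
        """Raise RuleViolation to stop the operation if any rule fails."""
        for description, predicate in self.rules:
            if not predicate(process):
                raise RuleViolation(f"blocked: {description}")

rules = [("every process needs at least one activity",
          lambda p: len(p.get("activities", [])) > 0),
         ("payment processes require an approval activity",
          lambda p: "payment" not in p.get("name", "")
                    or "approve" in p.get("activities", []))]

engine = RuleEngine(rules)
engine.check({"name": "order-handling", "activities": ["receive", "ship"]})  # passes
try:
    engine.check({"name": "payment", "activities": ["transfer"]})
except RuleViolation as e:
    print(e)  # blocked: payment processes require an approval activity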
Metadata Engine: During execution, this component interacts with the metadata repository and provides the needed metadata to the data processor. During the development phase of processes, this component acquires the desired metadata and stores them in the metadata repository. On process retrieval, the metadata engine uses the metadata to make the retrieval faster. Similarly, in the case of an update, the metadata related to the process are collected, created, and stored, and updated whenever required.
Execution Engine: This component is responsible for the execution of processes and of process-manipulation requests from users. It provides an environment for the execution of processes and activities, and a control mechanism for executing a business process from start to end, managing its activities from the inside. During process execution, it also manages the states of a process and the state of each process instance. It determines the process flow, keeps a record of the process output, and provides it as input for other processes.
Management Tools: This component is used by the administrator to manage and control all the functionalities of a BPMS. It allows granting and revoking access rights and user functionalities, and monitoring the different components of the BPMS. Using these tools, administrators can view tasks and their associated properties.
BPM Dashboard: This component provides an interface for interacting with a BPMS. All the functionalities available to a user are accessible through the dashboard, which can be customized to define the access rights of each user (Fig. 8).
Figure 8 presents a BPMS as a different arrangement of components, showing the way in which a manipulation request to a BPMS is processed. A user interacts with the system by using the dashboard, through which a process can be manipulated, simulated, or executed. The request (R) is parsed and optimized by the query engine and forwarded to the activity monitor. The activity monitor registers the request R and forwards it to the execution engine. It is the responsibility of the activity monitor to keep monitoring the progress of R and to inform the transaction manager in order to ensure the consistent completion of the registered request. With the help of the rule engine, metadata manager, data engine, and buffer manager, the execution engine ensures the correct execution of the request. (A minimal code sketch of this flow is given after Fig. 8 below.)
3.3. Top 10 Advantages/Benefits of a BPMS
Employing a BPMS has a number of advantages. It not only improves organizational efficiency, but also increases control over process models by providing an integrated view of data, transparency of process execution, and the addition of agility to the business and process refinement.
Figure 8. Layered approach to BPMS functionalities: the BPM dashboard passes requests through the query parser and optimizer to the activity monitor and the execution engine, which work with the metadata manager, data engine, rule engine, and buffer manager on top of the metadata, rules, business model, and process model repositories and the process log.
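The request-processing flow of Fig. 8 can be illustrated with a minimal Python sketch; the class and method names are our own reading of the figure, not an actual BPMS API. The dashboard submits a request R, a stand-in parser normalizes it, the activity monitor registers it and tracks its progress, and the execution engine carries it out.

import itertools

class ActivityMonitor:
    def __init__(self):
        self.registry = {}
        self.ids = itertools.count(1)

    def register(self, request):
        """Register request R and return its tracking id."""
        rid = next(self.ids)
        self.registry[rid] = "registered"
        return rid

    def progress(self, rid, state):
        self.registry[rid] = state

class ExecutionEngine:
    def execute(self, rid, request, monitor):
        monitor.progress(rid, "executing")
        result = f"executed {request['op']} on {request['model']}"
        monitor.progress(rid, "completed")
        return result

def dashboard_submit(request, monitor, engine):
    parsed = dict(request)            # stand-in for parsing and optimization
    rid = monitor.register(parsed)    # the activity monitor registers R
    return engine.execute(rid, parsed, monitor)

monitor, engine = ActivityMonitor(), ExecutionEngine()
print(dashboard_submit({"op": "simulate", "model": "order-handling"}, monitor, engine))
print(monitor.registry)  # {1: 'completed'}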
The use of a BPMS has qualitative as well as quantitative benefits. It has been said that "BPM has enabled organizations to report 10% to 15% return rates through increased efficiencies and staff time reductions, among other benefits" (http://www.cstoneindy.com/resources/articles/using-bpm/). The following are the major advantages of using a BPMS.
1. The correct development, implementation, and management of a BPMS increase forecast accuracy through analysis and process mining (Alves de Medeiros and Günther, 2005). Through process mining, the information available in event logs can be extracted and used to forecast accurately the behavior of a process to be executed.
2. A BPMS ensures process standardization. "Standardization is a set of methods and conditions that makes possible repeated high performance" (http://www.argi.com.my/whatispage/processStandard.htm).
3. A BPMS ensures optimal resource utilization and improved productivity. BPM allows tremendous efficiency gains by streamlining each process end to end (Frye and Gulledge, 2007). "BPM helps optimize and improve business performance by streamlining each process end-to-end" (http://www.cstoneindy.com/resources/articles/using-bpm/).
4. The use of a BPMS improves process control, because activities can be monitored and processes can be continuously improved.
5. Process simulation becomes possible, with its own advantages. Simulation is a very important stage in the optimization of processes. By using simulation, the quality of process design can be improved, production capacity can be increased, the cost of experimenting in the real world can be
decreased, and real-world changes can be tested and experimented with for improvement (Serrano and Hengst, 2005).
6. A BPMS provides an enterprise process view (integrated view), also called the centralization of data. Data about each and every transaction are logged and can be retrieved when required; therefore, it is possible to analyze accurately what happened.
7. One of the main advantages of a BPMS-based business process solution is that it brings agility to a business (Silver, 2008).
8. One of the key advantages of BPM lies in its ability to align business processes better with enterprise goals. In this way, the influence of each business process can be measured.
9. A BPMS makes a business process absolutely transparent and greatly improves visibility and efficiency; i.e., bottlenecks, delays, and problem areas at each stage can be seen and removed (http://www.enjbiz.com/BPM/benefits.html).
10. The initial configuration and design exercise, coupled with the data that emerge after running processes for some time, allow the refinement of processes.
4. Integration of Business Processes with IS
In the dynamic business environment of the 21st century, fast-changing business strategies and continuously evolving technology are the norm. Having an IT strategy that is out of tune with the business processes is even more harmful to a company than not having one at all. Top management must always take the necessary steps to keep the business and IT processes and strategies of their companies aligned and running side by side, instead of having them conflict or not meet at all. Without this alignment, a company cannot function in a competitive environment: it will be overtaken by more flexible competitors that can jump at opportunities stemming from the continuous introduction of new technology (Luftman, 2004). Additionally, without a clear image of the "as-is" environment of the different processes in the company and the marketplace, the inbound and outbound components of the organization are likely to be affected. The result can be ineffective planning, weak governance, and wasted IT resources (Camponovo and Pigneur, 2004). It is then no surprise that 54.2% of the information systems managers in the Critical Issues of Information Systems Management (CIISM) report claim that information systems are important to business processes. In the same report, alignment ranks second among the factors that contribute most to the success of an organization (Pereira and Sousa, 2004).
4.1. What is Required to Integrate?
Several researchers (Beeson et al., 2002; Paul and Serrano, 2003; Versteeg and Bouwmann, 2006) promote the value of integrating the business processes with the
information systems of an organization. Information systems play an important role not only in helping an organization achieve its business goals; they also help in creating innovative applications and competitive intelligence. To integrate business processes with IS, the IT and business teams must share a common understanding of their aim and of the available resources. Here we identify the most important aspects that different researchers (Versteeg and Bouwman, 2006; Trienekens et al., 2004; Laudon and Laudon, 2007) have highlighted in the preparation phase of integrating business processes with IS. It has been claimed that integration succeeds when attention is given to: (a) understanding the "current system," (b) "people" as actors rather than resources, and (c) "process relationships" in the new system through enterprise architecture.

4.1.1. Quality of information as a core for understanding the current system

Organizations today are heavily dependent on information, and information is stored in many different ways. It is therefore necessary to evaluate its quality to determine its value for a business activity. Because information is stored in different systems that together form the information systems, the quality of those systems must be determined before business processes are designed. Information contributes greatly to understanding the current system by creating data classes over different parts of an organization. This helps in collecting, fixing, studying, and analyzing every part separately, and results in a guideline for identifying the process.

4.1.2. People are actors rather than resources

In the integration between business processes and information systems, people should be seen as actors, not as resources. Employees do not simply play the role explicitly assigned to their department or process; they should act as a communication layer between processes, with affiliation to, but no bias toward, their department. Only in some highly managerial or strictly structured processes, where people act under formal delegation, can they be considered as resources. Otherwise, room should be left for workers to interact through informal communication processes.

4.1.3. Enterprise architecture

In today's world of dynamic and rapid change, managers have sought new ways to understand the turbulent environment of different products, customers, competitors, and technologies. Enterprise architecture is one important way of arranging an organization with its complex components: as a term, it refers to a comprehensive arrangement that links the involved departments and provides a complete view of an organization. As organizations develop different solutions, they form their systems' infrastructure through years of work. As a result, organizations seek more control and
order in their environment. Enterprise architecture fulfills the need for such control by representing a picture of the organization in which all elements are related and can be adjusted. It also organizes the work and responsibilities between different actors and departments in the organization, and identifies suitable areas where information systems can effectively support and cooperate with business needs (Open Group, 2003).

4.2. Principles of Managing Business Processes

In this part, we provide a list of principles that are important to consider when managing business processes. The list is offered as a framework, which we developed on the basis of the literature (Anupindi et al., 2004; Malinverno and Hill, 2007; Armistead, 1996).

4.2.1. Obtaining a process champion and forming the process team

A process champion can be recognized as the business owner or manager. Obtaining a champion for a process is key to the successful implementation of its plan. At the initial planning phase of any project run in an organization, the key role of the process champion is to carry the main responsibility for the whole process, guiding a process team that can otherwise operate largely on its own. Whether the process champion is a small team or a single individual, the responsibility has to be established from beginning to end, to avoid the reappearance of boundaries in processes. The process champion should have a compelling vision of the "to-be" state of his process and all related processes. He should have the credibility and reputation to exercise influence across the various areas affected by the activities of the process, and therefore the ability to communicate his vision to all organizational levels. This principle is even more important in companies that manage supply chains, where integrating the whole process from procurement to delivery increases the chance of success. The process owner or champion is responsible for the current business value and integrity of the process design across the functional and organizational boundaries the process crosses. In addition, the champion is responsible for forming his team, whether from current employees, by switching people among departments, or by hiring new ones. The team should include a system architect who can design different alternatives for the process, its effects, and its relation with other processes. Finally, the champion of a process is responsible for establishing a joint understanding of the process within his team and for ensuring their commitment toward the process's objectives and goals.
Figure 9. Business process mapping. (The figure shows a hierarchical decomposition for electronic goods: the business plan at the top; core (high-level) processes such as procurement/manufacturing, operations, distribution, demand management, and customer service; process components such as preparation, transport, warehousing, contract finalization, customer order, and delivery; and, below these, activities and tasks.)
4.2.2. Understanding the "as-is" situation of the business and the new process

To understand the activities and processes in the workflow of an organization, the responsible team has to understand the current "as-is" situation of its own and the company's processes. The team also has to monitor the development of the business process by mapping high-level processes, treating them as core processes. Every core process can be broken down into subprocesses containing detailed information related to different operations at several levels below the core process. At each level, the project deals with information about input and output variables such as time, cost, and customer value: "who does what and why?" For process mapping, there are different hierarchical methods for displaying processes that help identify performance measures and opportunities for improving the business process. Figure 9 shows an example of a hierarchical representation of a business process for electronic goods. Such process mapping and hierarchy help the team analyze the different components and levels of the process, and consequently help in allocating costs and other associated resources to the activities at each level.

4.2.3. Linking related processes

When the team members understand a process and its components, they should start identifying other processes related to it. To achieve the process's goals, the team members should relate their process to other processes within their organization or — in the organization's supply chain — to customer- or
supplier-related processes. This relation is important for defining the flow of information, physical and non-physical resources, and people between processes. Additionally, it may be necessary to add different values concerning the organization, customers, or suppliers. The question is: what hindrances and problems arise between processes from the unnecessary intervention of people, information, or materials? Business processes should not include unnecessary activities, and such activities need to be identified when things go wrong. It is about connectivity and about understanding the relationship between the factors that produce good results. Some may doubt the importance of such a step, but we claim that without knowing the relationships between processes, the work within a company can fairly be perceived as chaotic and unpredictable. It also appears out of control, which can eventually degrade process efficiency, affect product quality, produce poor customer service, and finally wipe out profits.

4.2.4. Tuning the management style (oppositional management)

Different management styles can be adopted by a manager or the champion of a process. These styles oppose each other, but in some cases they need to be adopted together, or at least balanced. Such functional opposition can be seen as a trade-off relationship within a process or in the relationship among different processes; handling it is helpful and constructive when the process team is formed from a cross-functional background, which also contributes to decisions that weigh the alternatives more clearly. Organizations do not always dismantle functions when they move toward BPM: they may be afraid of taking such a huge step in one shot, or they may deliberately try to reduce the loss of functional characteristics inside a business process-based structure. That is why addressing oppositional management styles is sometimes necessary when organizations move to a process-based business. Here, we mention some management approaches that, although opposed to each other, may be needed together.

• Leadership empowerment versus management-ordered control
Empowerment and devolvement of the process team are suggested in process management so that the team understands and achieves the process's goals. This may be perceived as a threat to managers' control of performance. However, a clear need for empowerment appears when team members feel unable to accept the bigger responsibilities in their process; this case can be handled by empowerment.

• Developing process knowledge versus obtaining experts
As champions are responsible for forming their process's team, they are also responsible for developing the team's knowledge about its process. By organizing a champions' committee, the champions in an organization can exchange knowledgeable people between their processes. This rearrangement of people in the business
process contributes not only to a better understanding of the process, but also of what customers want. It may, however, not allow the continual development of expertise: in the advanced phases of building the process's knowledge base, relying on experts may seem an excess cost or an overlapping arrangement, as several senior people (or beginning experts) become available for smaller tasks.

• Soft-boundaries matrix versus clear structure
In most organizations, a clear structure for the different processes is important for employees and team members to understand the functions and tasks; it is also needed for understanding and distributing responsibilities. However, a rigid structure affects people's motivation and performance when they are worried about their careers: they tend to be less innovative and afraid of making mistakes. In such cases, individuals find a soft-boundaries system more comfortable, with the possibility of being involved in more than one process as well.

4.2.5. Training and teaching others for the process

Training means communicating new knowledge and skills, and changing attitudes and roles. It focuses on enabling learning and development for people as individuals, which extends the range of knowledge development and creates more exciting and motivating opportunities for customers and employees. The heavy pressure organizations face in moving to a process-based structure requires a change in the culture of the organization, and this should be taken into consideration. A different form of leadership is required, because the role of the team leader changes from supervisor to trainer or facilitator. It is practical to identify the skills needed for the process team and to assign team members according to their relevant skill or capability. A training need is the gap between what somebody already knows and what they need to know to do their job or fulfill their role effectively. To succeed with the new process and profit from it, rewards and acknowledgments need to be clearly tied to the targets and goals of the process. In business process systems, several people in the organization lack knowledge about the different processes, or lack the ability to change at the right time. A champion or senior manager may therefore expect to find people under their supervision who need help to supply inputs to, or receive outputs from, a process. People cease working as individuals and instead rely on each other as team members. As the aim is to move toward management by business processes, process owners and teams should play a teaching role to spread their learning. The learning process itself can be seen as part of the communication protocols of an organization. It can further serve as a common forum where all departments and management levels can meet and exchange their experiences, knowledge, and even documentation (Fig. 10).
Figure 10. Relationship between principles. (The figure places the process champion and the core process at the center, linked to the principles discussed here: obtaining a champion, forming the team, understanding the "as-is" situation, linking related processes, tuning the management style, training and teaching, measuring the process, and periodic review, which decompose into activities and tasks.)
4.2.6. Measuring the process

Business processes should first of all be measurable, in the sense that they can be followed, controlled, improved, and benchmarked. As businesses are most of the time concerned with profits, business process measurement should use both financial and non-financial measures. Measurement should be applied between processes at the same managerial level as well as between processes and their subprocesses. Some organizations adopt another way of measuring, applying a bottom-up approach that aggregates the results up to the top management and business level. At every level, the key measures should be those used by managers at the next level up to judge the results of the current level, and they should be directly related to customer satisfaction at that stage. Additionally, measurement can help balance the distribution of resources and control the flow of the process. Moreover, it can help prevent the optimization of subprocesses at the expense of the overall process, by giving the process owners an early indication of it. A clear example comes from business processes in supply chain management: if a stage in the chain rushes delivery to get different items together for the next stage, it may deliver more at once or achieve a shorter delivery time, but the expense of damaged packages will be greater and will consume more resources at the next stage of the process.

4.2.7. Periodic review for improving the process

Development is a continuous process that keeps the link between the "as-is" and "to-be" situations; our world never stands still. A periodic review should therefore be applied to the process to ensure that the initial assumptions remain correct. It is also important for ensuring that action plans and process modifications are on schedule. Additionally, through this review, all champions, managers, and responsible persons in the different processes should be informed, by reports and captured snapshots, of the changes made during the working phases.
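Returning to the bottom-up aggregation described in 4.2.6, the sketch below rolls one financial and one non-financial measure up a two-level process hierarchy. It is our own illustrative sketch, not a method prescribed in this chapter; the process names and figures are invented.

from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    cost: float = 0.0    # financial measure logged at this level (invented)
    defects: int = 0     # non-financial measure at this level (invented)
    children: list = field(default_factory=list)

    def rolled_up(self):
        # Aggregate subprocess measures bottom-up, producing the figures
        # a manager at the next level up would use to judge this level.
        cost, defects = self.cost, self.defects
        for child in self.children:
            child_cost, child_defects = child.rolled_up()
            cost += child_cost
            defects += child_defects
        return cost, defects

delivery = Process("delivery", children=[
    Process("warehousing", cost=120.0, defects=2),
    Process("transport", cost=300.0, defects=5),
])
print(delivery.rolled_up())  # (420.0, 7)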
5. Conclusion

This chapter has focused on the development and management of business processes and on the use of tools and technologies for this purpose. Modeling helps in re-engineering a business, analyzing business activities, and building strategies for a software package that must cope with intense market competition. However, different users (business and IT users) have different understandings of a business; therefore, two types of models (business models and process models) are developed. As both are views of the same business, they must be related and derivable from each other, and a chaining method for deriving a process model from a business model is presented with an example. The presence of large numbers of processes hinders the manual management of a process model; this motivates the need for a software system called a BPMS, whose properties, advantages, and components are presented. Finally, for the qualitative integration of information, stakeholders, and enterprise architectures, principles of BPM are presented.

Acknowledgment

We would like to acknowledge the support of Paul Johannesson, Birger Andersson, Maria Bergholtz, and other members of our team.

References

Aalst, MP, HM Hofstede and M Weske (2003). Business process management: A survey. Springer Lecture Notes in Computer Science, 2678, 109–121.
Alves de Medeiros, K and CW Günther (2005). Process mining: Using CPN tools to create test logs for mining algorithms. In Proceedings of the Sixth Workshop and Tutorial on Practical Use of Colored Petri Nets and the CPN Tools, 177–190, Aarhus, Denmark.
Andersson, B, M Bergholtz, A Edirisuriya, J Zdravkovic, T Ilayperuma, P Jayaweera and P Johannesson (2008). Aligning goal models and business models. In Proceedings of the CAiSE Forum, CEUR Proceedings, Vol. 344, 13–16, Montpellier, France.
Andersson, B, M Bergholtz, B Grégoire, P Johannesson, M Schmitt and J Zdravkovic (2006). From Business to Process Models — A Chaining Methodology. In Proceedings of the CAiSE Workshop on Business/IT Alignment and Interoperability, CEUR Proceedings, Vol. 237, 216–218, Luxembourg.
Anupindi, R, S Chopra, S Deshmukh, JA Mieghem and E Zemel (2004). Managing Business Process Flows: Principles of Operations Management. Prentice Hall.
Association of Business Process Management Professionals. http://www.abpmp.org [8 Dec. 2009].
Armistead, CG (1996). Principles of business process management. Managing Service Quality, 6(6), 48–52.
Beeson, I, S Green, J Sa and A Sully (2002). Linking business processes and information systems provision in a dynamic environment. Information Systems Frontiers, 4(3), 317–329.
Bergholtz, M, P Jayaweera, P Johannesson and P Wohed (2007). Bringing speech acts into UMM. In Proceedings of the 1st International REA Technology Workshop, Copenhagen, Denmark.
Bider, I and M Khomyakov (1998). Business process modeling — motivation, requirements, implementation. ECOOP, Springer Lecture Notes in Computer Science (LNCS), 1543, 217–218.
BPM, http://searchcio.techtarget.com/sDefinition/0,,sid182_gci1088467,00.html [25 May 2008].
Camponovo, G and Y Pigneur (2004). Information systems alignment in uncertain environments. In Proceedings of the IFIP International Conference on Decision Support Systems, Prato, Italy.
Curtis, B, MI Kellner and J Over (1992). Process modeling. Communications of the ACM, 35(9), 75–90.
Dean, DL, JD Lee, RE Orwig and DR Vogel (1994). Technological support for group process modeling. Journal of Management Information Systems, 11(3), 43–63.
Denna, EL, LT Perry and J Jasperson (1995). Reengineering and REAL business process modeling. In Business Process Change: Concepts, Methods and Technologies, V Grover and W Kettinger (Eds.), 350–375. IDEA Group Publishing, London.
Endl, R and M Meyer (1999). Potential of Business Process Modeling with Regard to Available Workflow Management Systems. SWORDIES Report No. 20, Berlin. http://www.cinei.uji.es/d2/cetile/documentos/fuentes/Model Proces Scholz 99.pdf [11 Dec. 2008].
Frye, DW and TR Gulledge (2007). End-to-end business process scenarios. Industrial Management & Data Systems, 107(6), 749–761.
Grégoire, B and M Schmitt (2006). Business service network design: From business model to an integrated multi-partner business transaction. In Proceedings of the 8th IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, 84–94, Washington DC, USA.
ISO/IEC 15944-4 (2007). Information technology — Business operational view — Business transaction scenarios — Accounting and economic ontology. http://www.iso.org/iso/iso_catalogue/catalogue_tc [1 Dec. 2008].
Kimbrough, SO and DJ Wu (2004). Formal Modeling in Electronic Commerce, 1st Edn. Springer, The Netherlands.
Kotelnikov, V. Business process management system (BPMS). In Meeting the Growing Demand for End-to-end Business Processes, http://www.1000ventures.com/business_guide/bpms.html [8 Dec. 2009].
Lam, W (1997). Process reuse using a template approach: A case-study from avionics. ACM SIGSOFT Software Engineering Notes, 22(2), 35–38.
Laudon, JP and KC Laudon (2007). Essentials of Business Information Systems. Prentice Hall.
Lin, F, M Yang and Y Pai (2002). A generic structure for business process modeling. Business Process Management Journal, 8(1), 19–41.
Luftman, J (2004). Managing the Information Technology Resource: Leadership in the Information Age. Prentice Hall, USA.
Luo, W and YA Tung (1999). A framework for selecting business process modeling methods. Industrial Management & Data Systems, 99(7), 312–319.
Lusk, S, S Paley and A Spanyi (2005). The evolution of business process management as a professional discipline. In Evolution of BPM as a Professional Discipline, BPTrends.
Major benefits of BPM, http://www.enjbiz.com/BPM/benefits.html [11 Dec. 2008].
Malinverno, P and JB Hill (2007). SOA and BPM Are Better Together. Gartner RAS Core Research Note. http://searchsoa.bitpipe.com/detail/RES/1138808050_532.html [8 Dec. 2008].
Muehlen, M and DT Ho (2006). Risk management in the BPM lifecycle. BPM Workshops, Springer Lecture Notes in Computer Science, 3812, 454–466.
Netjes, M, HA Reijers and MP Aalst (2006). FileNet's BPM Life-cycle Support. BPM Center Report BPM-06-07.
Open Group (2003). http://www.opengroup.com.
Paul, RJ and A Serrano (2003). Simulation for business processes and information systems design. In Proceedings of the 2003 Winter Simulation Conference (WSC), Vol. 2, 1787–1796.
Pereira, CM and P Sousa (2004). Business and information systems alignment: Understanding the key issues. In Proceedings of the European Conference on Information Technology Evaluation (ECITE), Amsterdam, The Netherlands.
Plexousakis, D (1995). Simulation and analysis of business processes using GOLOG. In Proceedings of the ACM Conference on Organizational Computing Systems (COOCS), 311–322, Milpitas, California, USA.
Redman, TC (1998). The impact of poor data quality on the typical enterprise. Communications of the ACM, 41(2), 79–82.
Schiefer, J, H Roth, M Suntinger and A Schatten (2007). Simulating business process scenarios for event-based systems. In Proceedings of the 15th European Conference on Information Systems.
Serrano, A and M Hengst (2005). Modeling the integration of BP and IT using business process simulation. Journal of Enterprise Information Management, 18(6), 740–759.
Silver, B. BPMS watch: Agility and BPMS architecture. BPM Institute, http://www.bpminstitute.org/articles/article/article/bpms-watch-agility-and-bpms-architecture.html [11 Dec. 2008].
Trienekens, JM, RJ Kusters, B Rendering and K Stokla (2004). Business objectives as drivers for process improvement: Practices and experiences at Thales Naval, The Netherlands. BPM, Springer Lecture Notes in Computer Science, Vol. 3080, 33–48.
Using BPM to your advantage, http://www.cstoneindy.com/resources/articles/using-bpm/ [8 Dec. 2009].
Versteeg, G and H Bouwman (2006). Business architecture: A new paradigm to relate business strategy to ICT. Information Systems Frontiers, 8(2), 91–102.
Vojevodina, D, G Kulvietis and P Bindokas (2005). The method for e-business exception handling. In Proceedings of the 5th IEEE International Conference on Intelligent Systems Design and Applications (ISDA '05), 203–208, Wroclaw, Poland.
What is process standardization? http://www.argi.com.my/whatispage/processStandard.htm [11 Dec. 2008].
Biographical Notes

Mohammad El-Mekawy is a teacher in the Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH), Stockholm, Sweden. He has two M.Sc. degrees from KTH, Sweden, and an IT diploma from the Information Technology Institute (ITI) in Cairo, Egypt. He has participated in several European projects and has years of industrial experience in both Egypt and Sweden. He has about 10 publications, presented in international forums. He is an active researcher whose interests cover global and strategic IT management, process modeling, crisis management, and data integration.

Khurram Shahzad is a PhD candidate at the Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH), Stockholm, Sweden. He is on study leave from the COMSATS Institute of Information Technology (CIIT), Lahore, where he works as an Assistant Professor in the Department of Computer Science. Before joining CIIT he was a lecturer at the Punjab University College of Information Technology (PUCIT), University of the Punjab, Lahore, Pakistan. Khurram received his Master of Science degree from DSV, KTH, and his M.Sc. in Computer Science from PUCIT. He has over a dozen publications, presented in national and international forums.

Nabeel Ahmed is preparing to join the Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH)/Stockholm University (SU), Stockholm, Sweden, as a PhD candidate. Nabeel received his Master of Science degree in Engineering and Management of Information Systems from DSV, KTH, and his Bachelor of Science degree in Computer Science from the University of Management and Technology, Lahore, Pakistan.
Chapter 6
Business Process Reengineering and Measuring of Company Operations Efficiency

NATAŠA VUJICA HERZOG
Faculty of Mechanical Engineering, University of Maribor
Laboratory for Production and Operations Management
Smetanova ulica 17, SI-2000 Maribor, Slovenia
[email protected]
The main purpose of the presented research is to contribute to a better understanding of business process reengineering (BPR), supported by performance measurement (PM) indicators, with the purpose of improving company operations efficiency. The existing literature on the subject warns about deficiencies in the concept of BPR, which can nevertheless be extremely effective through its radical workings. The concept of BPR should be studied in connection with its logical supplementary areas: manufacturing strategy on the one hand and, on the other, the performance indicators intended to verify the selected manufacturing strategy and the performance of BPR. The BPR and PM literature is based primarily on case studies, and there is a lack of rigorous, wide-ranging empirical research covering all its aspects. This chapter presents the results of a survey carried out in 73 medium- and large-sized Slovenian manufacturing companies. Seven crucial areas were identified based on a synthesis of the PM literature, which must be practiced to achieve effective operations: cost, quality, time, flexibility, reliability, customer satisfaction, and human resources. Variables were constructed within these areas using Likert scales and statistical validity and reliability analyses.

Keywords: Business process reengineering; process management; performance measurement; survey research.
1. Introduction

Over the last 15 years, modes of operation in both manufacturing and service companies have changed considerably. We could even say that today the most important quality for a company that wants to remain successful and competitive is the ability to adapt to constant changes in the global environment. Business Process Reengineering (BPR) is classified by some theoreticians, and even practitioners, as a manufacturing paradigm stemming from the competitive environment, alongside paradigms such as lean manufacturing, world-class manufacturing, and agile manufacturing, and methods such as just-in-time manufacture, total quality management (TQM), continuous process improvement, and concurrent engineering. However, BPR is much more than just one of the modern manufacturing paradigms.
To understand the true meaning of restructuring, we must examine facts which reach far back into the past. In 1776, Adam Smith, who was actually a philosopher, economist, and radical thinker of his time, explained in his book The Wealth of Nations a principle that he called the division and specialization of labor, which resulted in the productivity of a pin factory increasing a hundredfold. Smith's principles were built upon in the field of manufacturing, especially by Henry Ford in the automotive industry, and by Alfred Sloan of General Motors in the field of management. Many years later, when companies and especially their management instruments had become oversized and therefore almost impossible to manage, Hammer and Champy (1990, 1993) promoted their idea about the need for radical rethinking. They pointed out that the way of thinking caused by Smith's central idea — division and specialization of labor — and, as a consequence, its fragmentation, will not be enough to reach competitive advantage and efficiency in the future. The authors also examined in detail and defined the weak points which stem from the division of labor, while at the same time giving clear guidelines on how to operate in the future. The most important idea of their work is that processes divided for 200 years must be united again and restructured, which will make them considerably different from tradition. By focusing on processes and their restructuring, they turned upside down the industrial model, which is based on the principle that workers have little knowledge and little time or ability for additional education, so that their tasks had to be kept as simple as possible. Simple tasks, on the other hand, demanded complex linking processes. To satisfy present demands for quality, flexibility, low cost, reliable delivery, and customer satisfaction, the processes have to be as simple as possible. The consequences of these requirements are apparent in the design of processes and the form of organization. The field of BPR will be presented in connection with its logically complementary fields: the choice of manufacturing strategy and, on the other hand, the indicators intended for verifying the efficiency of the chosen strategies. We discovered that BPR is dynamic, designed for changes and, as such, difficult to transfer into different environments; one could say that it depends on the conditions and environment in which we wish to realize it. Measuring business performance has been one of the key topics of the last 10 years. Traditional criteria based especially on cost have become inadequate, particularly because of changes in the nature of work, the rise of modern manufacturing concepts, changes in roles within companies, new demands of the business environment, and the development of information technology. The renewal and modernization of performance measurement systems refers, on the one hand, to innovations in accounting systems, especially regarding the treatment of expenses based on activities (Johnston and Kaplan, 1987) and, on the other hand, to expansion in the field of the so-called non-cost measures, which are not economic or financial in nature but come from customer needs.
The existing literature on the subject warns about deficiencies in the concept of BPR, which can be extremely effective through its radical workings. Connecting BPR with the level of manufacturing strategy solves the problem of integrating BPR and enables us to define the starting point and set clear goals: the chosen strategic goals of the company, represented by competitive criteria, become the goals to be attained by BPR. Defining clear goals, in turn, enables the measurement of the success or failure of the implemented reengineering process, which indicates whether the chosen or planned strategy has been realized — with this approach, the reengineering process is rounded into a whole.

2. Business Process Reengineering

Several authors have provided their own interpretations of the concept of BPR. For example, Davenport and Short (1990) described BPR as the analysis and design of work flows and processes within, and between, organizations. Hammer and Champy (1993) promoted "the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed." Short and Venkatraman (1992) emphasized the customer's point of view when defining BP redesign as the company's action to restructure internal operations by improving product distribution and delivery performance to the customer. For Johansson et al. (1993), BPR is the means by which an organization can achieve a radical change in performance as measured by cost, cycle time, service, and quality, using a variety of tools and techniques that focus on the business as a set of related customer-oriented core processes rather than a set of organizational functions. Even if the main BPR characteristic remains the radical nature of change, some — such as Yung and Chan (2003) — have proposed a slightly less radical approach, named "flexible BPR." Other authors, such as Vantrappen (1992) or Talwar (1993), focused on the rethinking, restructuring, and streamlining of business structure, processes, work methods, management systems, and external relationships, through which value is created and delivered. Petrozzo and Stepper (1994), on the other hand, believed that BPR involves the concurrent redesign of processes, organizations, and their supporting information systems to achieve radical improvement in time, cost, quality, and customers' regard for the company's products and services. Loewenthal (1994) described it as the fundamental rethinking and redesign of operating processes and organizational structure, focused on the organization's core competences, to achieve dramatic improvements in organizational performance. Zairi (1997) discussed BPR, including continuous improvement and benchmarking, within Business Process Management, which is a structured approach to analyzing and continually improving
fundamental activities such as manufacturing, marketing, communications, and other major elements of a company's operation. BPR also has some similarities with TQM: the process orientation, the customer-driven inspiration, and the wide transversal nature (Schniederjans and Kim, 2003). They differ in the approach: evolutionary (continuous, incremental improvement) process change in the case of TQM, and revolutionary (radical, step-change improvement) process change in the case of BPR (Venkatraman, 1994; Slack et al., 2001). In spite of the apparent differences in the definitions given by many authors, we can extract a few common, more important aspects or key words of process reengineering, which were precisely captured by Hammer and Champy (1993) in the following definition:

Reengineering of business processes is a basic new consideration of the business process and its fundamental remodelling, to achieve great improvements in critical and contemporary measurements of performance, such as cost, quality, service, and speed.
3. Correlation Between Business Strategy and BPR

According to the literature review, the concept of BPR should be studied in connection with its logical supplementary areas: on the one hand the manufacturing strategy, and on the other the performance indicators. The need for a strategically driven BPR approach has been perceived by numerous authors (Zairi and Sinclair, 1995; Sarkis et al., 1997). Tinnilä (1995) ascertained that BPR should start from strategies: the desired strategic position should be the starting point for redesign, rather than improvement in existing operations. Edwards and Peppard (1994, 1998) proposed business reengineering as a natural linkage with strategy; they suggested that business reengineering can help bridge the gap between strategy formulation and implementation. In this context, BPR is seen as an approach that defines the business architecture, thus enabling the organization to focus more clearly on customers' requirements. We focused specifically on manufacturing strategy, as derived from corporate strategy, having considered manufacturing companies in our survey; however, several items about the overall strategy have also been treated.

4. Performance Measurement

4.1. Definition of PM

Company PM is a topic that is frequently mentioned but rarely precisely defined. It is, in fact, a process of measuring effects, where measuring is a process of determining value and the effect is represented by performance (Neely et al., 1995). According to the market viewpoint, a company reaches its set goals
when its operations are performed in a way that satisfies customers' demands more efficiently and more effectively than its competitors do. The terms "efficiency" and "effectiveness" are used precisely in this context. Efficiency is a measure of how economically the organization's resources are utilized when providing a given level of customer satisfaction, while effectiveness refers to the extent to which customer requirements are met. This is a very important distinction, which not only defines two basic dimensions of performance, but also stresses the existence of external and internal influences on the motives for operation. The definitions can be written as:

• PM is a process for quantifying the efficiency and effectiveness of a company's operations.
• A performance measure is a criterion for quantifying the efficiency and effectiveness of a company's operations.
• A PM system is a series of criteria for quantifying the efficiency and effectiveness of a company's operations.

In spite of the above definitions, the question remains: what does a PM system actually represent? On the one hand, it is true that PM is a series of measures for assessing the efficiency and effectiveness of already-performed processes and procedures. But this definition neglects the fact that the PM system also encompasses other supporting infrastructure: the data must be acquired, examined, classified, analyzed, explained, and announced. If any of these activities is left out or overlooked, the measurement is incomplete and, as a consequence, the decisions and actions adopted may be unsuitable. Therefore, the complete definition would be: PM enables the adoption of substantiated decisions and actions, for it assesses the efficiency and effectiveness of performance through the process of acquiring, examining, analyzing, explaining, and announcing appropriate data.
4.2. Reasons for Change in the Field of PM

There are several reasons why this area is receiving so much attention today, and why traditional financial measures are perceived as insufficient. An overview of the available literature (Eccles, 1991; Neely, 1999) yields the following content groups:

• Changes in the nature of work
• Market competitiveness
• Emergence of advanced manufacturing concepts
• Changes of roles in companies
• Demands of the business environment
• Information technology development
• International quality awards
4.2.1. Changes in the nature of work

Traditional accounting systems particularly stress direct material and labor costs. The latter, especially in the 1950s and 1960s, exceeded 50% of all costs. Because of large investments in advanced manufacturing technology, the share of direct labor costs has decreased and, with it, the suitability of traditional accounting systems. As direct labor cost no longer represents the most important cost share, lowering these costs and thereby increasing productivity does not decisively influence the overall operations of a company. A narrow focus on cost lowering can cause:

• Short-term effects on investment decisions
• Local optimization without influence on the entire operation
• Focus on standard solutions and prevention of continuous development
• Lack of strategic focus, as data on quality, responsiveness, speed, and flexibility are neglected
• Neglect of information on market requirements

4.2.2. Market competitiveness

Economic dynamics, where only change is constant, the development of science, and the high competitiveness of markets shaken by globalization importantly influence the way efficiency is measured. Financial indicators measure predominantly the consequences of past decisions and are limited in predicting the efficiency of future operations. Increasing competitiveness demands that companies search for an original strategic position, based on the special resources and abilities significant to the company. Companies do not compete only on prices and costs as the resulting competitive criteria; they also try to differentiate themselves on the basis of quality, flexibility, adaptability to customer demand, innovativeness, and quick response.

4.2.3. Emergence of advanced manufacturing concepts

The study of Japanese economic growth in the 1980s and at the beginning of the 1990s overwhelmed Western researchers with the realization that Japanese companies usually define manufacturing differently. The world was introduced to lean manufacturing, which covers a stack of approaches and techniques for production management. In the 1990s, the concept of lean manufacturing overstepped the bounds of manufacturing and was transferred to the concept of lean operations. Besides lean manufacturing, other concepts emerged in the 1990s: TQM, BPR, benchmarking, mass customization, and concurrent engineering. The implementation of advanced business concepts helped companies advance simultaneously on different competitive criteria. The efficiency of operations was no longer measured one-dimensionally through financial criteria. Increasing
the effectiveness and efficiency of business processes demanded multidimensional monitoring with the help of various indicators.

4.2.4. Changes of roles in companies

The majority of criticism about the inadequacy of indicators for monitoring operations came in the 1980s and 1990s from academic experts who dealt with accounting (Baiman, 2008). These academic experts from the field of accounting, together with various professional associations, increased interest in implementing non-financial indicators in systems for measuring business efficiency. Those responsible for human resource (HR) development represent another group who took a more active role in shaping indicators and their use (Chen and Cheng, 2007). These indicators were integrated into the entire management of HR, which comprises setting goals, measuring performance, feedback, and rewards. The correlation between performance measurement and rewarding is, of course, the essence of HR management.

4.2.5. Demands of the business environment

The business environment cannot be limited only to competitiveness among companies; other elements of the business environment also influence the importance of different indicators. The trend toward deregulating the economy instigated the privatization of former public companies and the establishment of different agencies for monitoring the operations of the newly established companies. Companies also face an increasing amount of pressure from the final users of products and services, united in various associations. Consumers want more information about the product or service, and also about the way the product was produced.

4.2.6. Information technology development

Information technology development has heavily influenced the possibility of using reengineered systems for measuring business efficiency (Marchand and Raymond, 2008). The development of hardware, software, and databases enables effective data gathering, analysis, and presentation from different sources, by more people, and in a cheaper and faster manner. The available information, which constantly monitors business operations, thus enables better business decisions in companies, which finally become evident in improved business results.

4.2.7. International quality awards

The establishment of movements for quality, and the recognition of the importance of improving effectiveness and efficiency in business processes, instigated the establishment of different awards for quality. The first one appeared in 1950 in Japan, the Deming Quality Award. In the United States, the Baldrige Award is highly valued. The
European Foundation for Quality Management (EFQM) gives awards for business excellence. Companies which compete for such awards must undergo an extensive evaluation and give detailed information about their organizational strategies, resources, information flow, relationship to social issues, quality policy, and also financial results.

4.3. Review of Individual Measurements and Performance Indicators

Based on the literature review, we can summarize the most important individual measurements of performance as follows:

• Quality
• Flexibility
• Time
• Costs
• Customer satisfaction
• Employee satisfaction or HR management
One of the fundamental problems we face when implementing a useful PM system is achieving a balance between a smaller number of key criteria (clear and simple, but possibly not reflecting all organizational goals) on the one hand, and a greater number of detailed criteria or performance indicators (complex and less appropriate for management, but able to show many different facets of performance) on the other. In general, we can achieve a compromise by ensuring a clear connection between the chosen strategy, the key parameters of performance, which reflect the main performance goals, and a series of performance indicators for the individual key parameters (Slack et al., 2001). When dealing with individual performance measures, the most important fact is that they must follow from the strategy (Neely, 1998). Based on a review of the manufacturing strategy literature, Leong et al. (1990) conclude that the generally accepted and useful key dimensions of performance are quality, speed, delivery reliability, price, and flexibility. In spite of this, there is still some vagueness about what different authors actually mean by these terms. Wheelwright (1984), for example, uses flexibility in the context of flexible production volume. Other authors, such as Garvin, Schonberger, Stalk, Gerwin, and Slack, mention different dimensions for measuring the key dimensions of performance. It is therefore almost impossible to review all performance indicators. One of the problems of the PM literature is its diversity: different authors focus on different viewpoints when shaping PM systems, and business strategists and managers treat measurements on a higher, different level than the managers responsible for PM in production. De Toni and Tonchia (2001) state that traditional measuring systems, which focused predominantly on production costs and productivity, were reshaped, under the pressure of changes that
come from the competitive environment, into two types of measurement (Fig. 1):

Cost PMs, including production costs and productivity. These costs display clear correlations, which can be treated in mathematical form, leading to the final results of the company: its net income and profitability.

Non-cost PMs, without a direct cost connection, which are gaining importance. Non-cost performances are usually measured in non-monetary units, which do not enable a direct link to the economic and financial statements (net income and profitability) in the exact manner characteristic of cost-related performance. For example, a delivery time shorter than 3 days, or higher-quality products (for which we use 5% less), undoubtedly have a positive influence on economic and financial performance, but this influence cannot be expressed incrementally in terms of net income and/or profitability.

Figure 1. Performance measures. (Adopted from De Toni and Tonchia, 2001.) The figure divides performance measures into "cost" measures, namely production costs (materials and labor, machinery, and fixed and working capital) and productivity (total and specific, e.g., labor productivity, machinery saturation, and inventory and WIP level), and "non-cost" measures: time (internal run, set-up, wait, and move times; external system times such as supplying, manufacturing, and distribution lead times; delivery speed and reliability; time to market), flexibility, and quality (produced, perceived, and in-bound quality, and quality costs).

The main goal of the presented review is to develop a system of indicators for assessing the reengineering of business processes, as the tool for implementing radical changes in a company, with the aim of meeting the guidelines provided by the strategy. Using an extensive survey of Slovene companies, we tried to develop a system of indicators able to assess the success of the implemented reengineering. For this purpose, we had to study the measurement systems according to the individual performance indicators, and then try to determine the connections and accuracy of the proposed theoretical model. The
following sections review the performance indicators, from the propositions of different authors and contributions from theory through to the final shape and selection presented to the companies in the questionnaire used for the survey research.
4.3.1. PMs, cost-based

The development of management accounting is, among others, very well documented by Johnson (1975, 1983). His work reveals that the majority of the accounting systems used today are based on assumptions made 60 years ago. Indeed, Garner's (1954) review of the accounting literature indicates that the majority of the so-called sophisticated cost accounting theories and practices were developed around 1925 (for example, return on investment — ROI). Johnston and Kaplan (1987) stress that, due to the dramatic changes in business environments over the last 60 years, accounting systems are based on premises that are no longer valid. One of the most widely criticized practices is the allocation of indirect labor and overheads according to direct labor cost. In 1900, direct labor cost represented the majority of product costs, so it was prudent to allocate overhead cost to the product in accordance with its labor content. With the increasing use of advanced manufacturing technologies, direct labor costs today are regarded as 10%–20% of product costs, while overhead costs represent 30%–40% (Murphy and Braund, 1990). This means that the heavy burden of overhead costs greatly influences the cost structure, even with a relatively small direct labor content in product cost. Moreover, the distribution of overhead costs in accordance with direct labor hours stimulates managers to focus on minimizing the number of direct labor hours prescribed in their cost centre, thereby neglecting overhead costs. Johnston and Kaplan (1987) argue that these problems will only increase in the future, as product life-cycles become shorter and the share of total product costs devoted to research and development overheads continues to grow. As a result of the criticisms of traditional management accounting, Cooper (1988) developed an approach known as "activity-based costing" (ABC). ABC overcomes many of management accounting's traditional problems, such as its distortion by the needs of financial reporting; in particular, costing systems have been driven by the need to value stock rather than to provide meaningful product costs. In the majority of manufacturing companies, the share of direct labor as a percentage of total cost has decreased, but it is still by far the most common basis for loading overheads onto products. Overhead costs, however, are not only a burden to be minimized: overhead functions such as product design, quality control, customer service, production planning, and sales order processing are as important to the customer as the physical processes
on the shop floor, and they increase in complexity. Production processes are more complex, product ranges have expanded, product life-cycles are shorter, and quality is higher. The marketplace is increasingly competitive, and in the majority of sectors global competition has become a reality. Every business should be able to assess the true profitability of the sectors it trades in, understand product costs, and know what drives overhead. Cost management systems should support process improvements, and the PMs should be connected to strategic and commercial objectives. In 1985, Miller and Vollmann pointed out that, although many managers focus on visible costs (e.g., direct labor, direct material), the majority of overheads are caused by invisible transaction costs. Cooper (1988) warned about this in one of his earliest works as well, and one of his major discoveries supporting ABC was that costs are caused less by the product itself than by the activities required for the production and delivery of the product. Later, it became clear that the major benefit of ABC is process analysis. This is in accordance with the concept of business process reengineering, which offers a view of information along transverse (horizontal) rather than vertical flows in a company. A worked contrast between labor-based allocation and ABC is sketched after the list below.

The other cost-based PM which is very extensively researched in the literature is productivity. Traditionally, it is defined as the relationship between total output and total input (Burgess, 1990). Productivity therefore measures how well resources are combined and used to accomplish specific, desirable results (Bain, 1982). Ruch (1982) cites that higher productivity can be achieved through several different methods:

• Faster increase of output in comparison to input (growth management)
• Producing higher output with the same level of input (rationalizing the work process)
• Producing higher output with lower input (ideal)
• Maintaining the level of output while lowering input (higher efficiency)
• Lowering the output level with even lower input levels (decrease management)
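The sketch below makes the overhead-allocation contrast concrete. It is our own illustration, not an example from the chapter: the two products, the overhead figure, and the choice of set-ups as the activity driver are all invented assumptions.

# Two products; overhead = 10000. Product A is labor-intensive but simple;
# product B uses little labor but many set-ups (an overhead-driving
# activity). All figures are invented for illustration.
overhead = 10000.0
labor_hours = {"A": 900, "B": 100}   # direct labor hours per product
setups = {"A": 2, "B": 18}           # activity driver: number of set-ups

# Traditional allocation: overhead follows direct labor hours.
total_hours = sum(labor_hours.values())
traditional = {p: overhead * h / total_hours for p, h in labor_hours.items()}

# Activity-based costing: overhead follows the activity that causes it.
total_setups = sum(setups.values())
abc = {p: overhead * s / total_setups for p, s in setups.items()}

for p in ("A", "B"):
    print(f"{p}: traditional {traditional[p]:8.0f}  ABC {abc[p]:8.0f}")
# B looks cheap under labor-based allocation (1000) but expensive under
# ABC (9000): the distortion that activity-based costing aims to remove.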
Different problems arise when measuring productivity, not only in defining inputs and outputs but also in estimating their amounts (Burgess, 1990). Craig and Harris (1973) propose that companies focus on total rather than partial productivity measurements. To identify the most typical partial cost measurements, called cost-based measurement indicators, we will look at propositions from different authors. Hudson et al. (2001) define the following as the critical dimensions of cost-based performance:

• Cash flow
• Market share
• Cost reduction
• Inventory performance
• Cost control
• Sales
Performance indicators, cost-based, according to De Toni and Tonchia (2001), are divided into:
• Production costs:
  – Material and labor cost: material cost; labor cost
  – Machine operation cost: machinery energy costs; machinery material consumption; inventory and WIP level; machinery saturation
• Productivity: total productivity; direct labor productivity; indirect productivity; fixed capital productivity; working capital productivity; value-added productivity; value-added productivity per employee
Some typical PM measurements, cost-based, according to Slack and Lewis (2002):
• Minimum delivery time/average delivery time
• Variance against budget
• Utilization of resources
• Labor productivity
• Added value
• Efficiency
• Cost per operation hour
Neely et al. (1995) propose the following categories as the critical dimensions of cost measurements:
• Manufacturing cost
• Added value
• Selling price
• Running cost
• Service cost
4.3.2. PMs, time-based
Time has been described both as a source of competitive advantage and as a fundamental measure of performance. According to the JIT manufacturing philosophy, production or delivery of goods just too early or just too late is seen as waste. Similarly, one of the objectives of optimal production is the minimization of throughput times
(Goldratt and Cox, 1986). Galloway and Waldron (1988, 1989) developed a time-based costing system, also known as throughput accounting, which rests on the following premises:
1. A manufacturing unit is an integrated whole whose operating costs are largely predetermined in the short term. It is more useful, and much simpler, to treat all costs other than material as fixed and to name them "total factory costs".
2. For all companies, profit is a function of the time required to respond to the needs of the market. This means that profitability is inversely proportional to the level of inventory, since reaction time is itself a function of inventory.
3. The relative profitability of a product is the rate at which it contributes money; comparing this rate with the rate at which the company spends money defines absolute profitability.
Galloway and Waldron (1988, 1989) believe that these contributions should be measured as the rate at which money is received, not as an absolute value. They therefore defined the return per factory hour, separate from the cost per factory hour:
Return per factory hour = (sale price − material costs) / (time on the key resources)    (1)

Cost per factory hour = (total factory cost) / (total time available on the key resources)    (2)
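A small numerical sketch of Eqs. (1) and (2), with hypothetical product and factory figures: a product is worth making when its return per factory hour exceeds the cost per factory hour, that is, when the throughput accounting ratio exceeds 1.

# Sketch of throughput accounting, Eqs. (1) and (2); all figures hypothetical.

def return_per_factory_hour(sale_price, material_cost, hours_on_key_resource):
    """Eq. (1): money generated per hour on the key (bottleneck) resource."""
    return (sale_price - material_cost) / hours_on_key_resource

def cost_per_factory_hour(total_factory_cost, available_key_resource_hours):
    """Eq. (2): total factory cost spread over available key-resource hours."""
    return total_factory_cost / available_key_resource_hours

ret = return_per_factory_hour(sale_price=120.0, material_cost=45.0,
                              hours_on_key_resource=0.5)            # 150.0/hour
cost = cost_per_factory_hour(total_factory_cost=400_000.0,
                             available_key_resource_hours=3_200.0)  # 125.0/hour
print(ret / cost)   # throughput accounting ratio 1.2 -> worth producing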
Moreover, House and Price (1991) recommend the use of the Hewlett-Packard return map for monitoring the effectiveness of a new-product development process, and Fooks (1992) reports that Westinghouse used similar cost-time profiles for more than a decade. The basic idea is that any set of business activities or processes can be described as a collection of costs over time. An interesting approach to designing time-based PMs is proposed by Azzone et al. (1991): according to their findings, companies that wish to use time for competitive advantage should use a coherent series of measurements. Let us review the partial time-based PMs proposed by different authors. The various types of time-based performance are fundamentally divided according to where they are achieved (De Toni and Tonchia, 2001):
1. Inside the company:
• Work time and preparation of work (travel, preparation, and finishing times)
• Waiting and transport times
2. Outside the company:
• System time (time for delivery, production, and distribution)
• Speed and reliability of delivery (customers and suppliers)
• Time to market (time required for new product development)
Furthermore, we can enumerate the indicators of external and internal time performance:
• External times: time to market; lead-time distribution; delivery reliability; supplying lead times; supplier delivery reliability
• Internal times: manufacturing lead times; standard run times; actual flow times; wait times; set-up times; move times
• Externally-internal times: inventory turnover; order carrying-out times
Slack and Lewis (2002) propose the following typical partial time measurements:
• Customer query time
• Order lead time
• Frequency of delivery
• Actual versus theoretical throughput time
• Cycle time
Critical dimensions of time performance according to Hudson et al. (2001):
• Lead time
• Delivery reliability
• Process throughput time
• Process time
• Productivity
• Cycle time
• Delivery speed
• Labor efficiency
• Resource utilization
Neely et al. (1995) propose the following typical partial measurements, time-based:
• Manufacturing lead time
• Rate of production introduction
• Delivery lead time
• Due-date performance
• Frequency of delivery
4.3.3. PMs, flexibility-based
Although the area of flexibility has generated a great deal of literature over the last 15 years, some vagueness remains, and vagueness concerning the concept of flexibility represents a critical obstacle to its effective management (Upton, 1994). Definitions of flexibility found in the literature fall mainly into two groups:
• Definitions which are directly linked to a company
• Definitions which derive from general notions of flexibility found in other scientific fields
Although measuring flexibility is of great importance in academic circles and among managers, such measurements are still under development, particularly because flexibility is a multidimensional concept and because there are usually no indicators that can be obtained by direct measurement (Cox, 1989). Proposed measurements are somewhat naive and general; despite the need, there are no generally or widely accepted measuring methods, and the robustness of the proposed measurements has hardly been researched (Chen and Chung, 1996).
Direct, objective flexibility measurements are very hard to put into practice; examples include estimating the feasibility of a given state at a decisive point in time and analyzing particular output characteristics. Among direct measurements there are also direct subjective measurements based on Likert scales, in which respondents express their degree of agreement or disagreement with statements about different facets of flexibility. Because of the problems that arise in defining flexibility performance directly, different authors propose the use of indirect indicators which take into account:
1. Characteristics of the manufacturing system which enable flexible production, and which can be:
• Technological (for example, availability of excess production capacity, existence of preparation time, etc.)
• Organizational and managerial (for example, improvement of work and teamwork, etc.)
2. Performance which is in some way connected to flexibility, and which can be:
• Economic (cost and value)
• Non-cost based (time for product development, delivery time, quality, and services)
Because flexibility can be treated in several dimensions, partial measurements are especially appropriate for measuring the flexibility of manufacturing systems. In this case, we must be familiar with unification procedures which combine all the important individual indicators covering different kinds of flexibility (Tonchia, 2000).
To define the most typical partial flexibility measurements, also called "flexibility indicators", we review the different authors' propositions. Slack and Lewis (2002) propose the following typical partial flexibility measurements:
• Time required for developing new products/services
• Range of products/services
• Machine change-over time
• Batch size
• Time to increase activity rate
• Average capacity/maximum capacity
• Time to change schedules
Hudson et al. (2001) propose the following dimensions as critical flexibility measurements:
• Manufacturing effectiveness
• Resource utilization
• Volume flexibility
• New product introduction
• Computer systems (IT)
• Future growth
• Product innovation
De Toni and Tonchia (2001) propose the division, and thus also the measurement, of the following types of flexibility:
• Volume flexibility
• Mix flexibility
• Product modification flexibility
• Process modification flexibility
• Expansion flexibility
Neely et al. (1995) propose the following as typical partial flexibility measurements:
• Material quality
• Output quality
• New product development
• Modified product
• Deliverability
• Volume
• Mix flexibility
• Resource mix
4.3.4. PMs, quality-based Traditionally, quality has been defined in terms of conformance to specification and, therefore, the quality-based measurements of performance generally focus
on measurements such as the number of defects produced and the cost of quality. Feigenbaum (1961) was the first to propose that the true cost of quality is a function of prevention, appraisal, and failure costs. Campanella and Corcoran (1983) defined the three types of cost as follows. Prevention costs are incurred in preventing nonconformance, and include quality planning, supplier quality surveys, and training. Appraisal costs are incurred in assessing product quality and identifying discrepancies, and include monitoring, testing, and calibration or dimensional control. Failure costs are incurred in correcting discrepancies and are usually divided as follows:
• Internal failure costs arise before delivery to the customer, such as the costs of repair, waste, and material re-examination.
• External failure costs arise after delivery of the goods to the customer, such as the costs of processing customer complaints, customer refunds, maintenance, and warranties.
Crosby's (1979) claim that "quality is free" is based on the assumption that any increase in prevention costs is more than offset by a decrease in failure costs. Quality costs are rarely reported as a separate cost category, although they commonly amount to some 20% of the net sales price. Crosby warns that the majority of companies make the mistake of failing to integrate the quality-cost model into the management process: even if managers estimate quality costs, they take no corresponding action to lower them.
With the emergence of TQM, the emphasis has shifted away from "conformance to specification" and toward customer satisfaction. As a consequence, a larger number of surveys on customer satisfaction and market research have emerged. This is reflected in the emergence of the Malcolm Baldrige National Quality Award in the United States and the European Quality Award in Europe. Other common measures of quality include statistical process control (SPC) (Deming, 1982; Price, 1984) and the Motorola six-sigma concept. Motorola, one of the world's leading manufacturers and suppliers of semiconductors, set itself the quality goal of six-sigma capability (3.4 defects per million opportunities) to be met by 1992. The last two measurements are especially important for the design of PM systems, because they focus on process and not on output.
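The correspondence between the 3.4 defects-per-million figure and the six-sigma level can be checked with the conventional 1.5-sigma long-term shift; the sketch below is an illustration using only the Python standard library, not a tool referenced by the chapter.

# Sketch: defects per million opportunities (DPMO) vs. sigma level, using
# the conventional 1.5-sigma long-term shift. Standard library only.
from statistics import NormalDist

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level implied by a long-term defect rate."""
    return NormalDist().inv_cdf(1.0 - dpmo_value / 1_000_000.0) + shift

def dpmo(sigma, shift=1.5):
    """Long-term DPMO implied by a short-term sigma level."""
    return (1.0 - NormalDist().cdf(sigma - shift)) * 1_000_000.0

print(round(sigma_level(3.4), 2))   # ~6.0: Motorola's six-sigma target
print(round(dpmo(6.0), 1))          # ~3.4 defects per million opportunities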
De Toni and Tonchia (2001) defined the following indicators of quality performance:
• SPC measures (achieved quality)
• Machinery reliability
• Reworks (quality costs)
• Quality system costs
• In-bound quality
• Vendor quality rating
• Customer satisfaction (quality perception)
• Technical assistance
• Returned goods
To sum up, these indicators divide into internal quality (production quality and quality costs) and external quality (quality perception, i.e., understanding market demands, and delivery quality).
Some of the typical partial quality measurements as proposed by Slack and Lewis (2002):
• Number of defects per unit
• Level of customer complaints
• Scrap level
• Warranty claims
• Mean time between failures
• Customer satisfaction score
Hudson et al. (2001) propose the following as the critical quality performance dimensions:
• Product performance
• Delivery reliability
• Waste
• Dependability
• Innovation
Neely et al. (1995) propose the following indicators as typical PMs pertaining to quality:
• Performance
• Features
• Reliability
• Conformance
• Technical durability
• Serviceability
• Aesthetics
• Perceived quality
• Humanity
• Value
4.3.5. Dependability
Some typical partial dependability measurements, as proposed by Slack and Lewis (2002):
• Percentage of orders delivered late
• Average lateness of orders
• Proportion of products in stock
• Mean deviation from promised arrival
• Schedule adherence
4.3.6. Measuring customer satisfaction
To measure customer satisfaction, Hudson et al. (2001) propose the following critical dimensions:
• Market share
• Service
• Image
• Integration with customers
• Competitiveness
• Innovation
• Delivery reliability
4.3.7. Measuring employee satisfaction
Human capacities, or human resources (HR), are surely the most important resources of any company. Often, two equally large companies engaged in similar activities in the same environment achieve substantially different business results. The reasons can be numerous, but the difference is usually a consequence of the different work abilities of employees, that is, of differences in the quality of HR. Knowledge about the value of HR is not new: even the pre-classical economists were aware of its value and treated the person as an integral part and source of national wealth. These insights have matured over time, yet human capacities still only rarely find a place in accounting statements. Some of the critical dimensions for measuring employee satisfaction, as proposed by Hudson et al. (2001):
• Employee relationships
• Employee involvement
• Workforce
• Learning
• Labor efficiency
• Quality of work-life
• Resource utilization
• Productivity
5. Research Methodology
The consequences of the change termed BPR can be perceived in companies all over the world, including Slovenian companies (Herzog et al., 2006, 2007; Tennant, 2005). An exploratory survey research methodology was adopted for the problem presented here; this was the first large-scale study on the theme carried out in Slovenia. The research was divided into three phases:
(i) A wide-ranging analysis of the existing literature was conducted, aimed at determining the major dimensions of BPR.
(ii) A questionnaire was designed to investigate actual BPR practice, pre-tested on experts and pilot firms (as suggested by Dillman, 1978), and then sent by post to the general and plant/production managers responsible for, or participating in, the BPR project. The questionnaire contained 56 items designed according to Likert scales.
(iii) The resulting data were subjected to reliability and validity analyses and then analyzed using uni- and multivariate statistical techniques.

5.1. Data Collection and Measurement Analysis
The research was carried out in 179 Slovenian companies in the mechanical industry and 90 Slovenian companies in the electromechanical and electronic industries. The criterion for the choice of sample was company size: we limited it to medium- and large-sized companies, because the complexity of BPR activities is more distinctive in these companies. According to the Slovenian Companies Act (Ur. L. RS nr. 30/1993), companies are divided into small, medium, and large according to the number of employees (fewer than 50, 50 to 249, and 250 or more, respectively) and revenue (less than EUR 0.83 million, EUR 0.83 to 3.34 million, and more than EUR 3.34 million, respectively). The response rate was very good for a postal methodology (27.14%) and showed that firms were interested in the subject. The subsequent statistical analysis was therefore carried out on the results of the 73 companies which returned correctly completed questionnaires; of these, 53 belong to the mechanical and 20 to the electromechanical industries. To indicate the degree or extent to which each item is practiced by their business unit, a five-point Likert scale (Rossi and Wright, 1983) was used, ranging from "strongly disagree" to "strongly agree". In determining the measurement properties of the constructs used in the statistical analysis, reliability and validity were assessed (Dick and Hagerty, 1971), using Cronbach's alpha and principal components analysis (PCA), respectively.
5.1.1. Reliability
Reliability has two components (Flynn et al., 1990): stability (over time) and equivalence (in terms of the means and variances of different measurements of the same construct). The main instruments for reliability assessment are the test–retest method (for stability) and Cronbach's alpha (for equivalence) (Cronbach, 1951). We concentrated on the second aspect, because these variables were being developed for the first time. All of the multi-item variables have a Cronbach's alpha of at least 0.6383 (0.6030 for single variables), and most have an alpha greater than 0.7 or even 0.8, well exceeding the guidelines set for the development of new variables (Nunnally and Bernstein, 1994).
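For reference, Cronbach's alpha for a respondents-by-items matrix of Likert scores can be computed directly from its definition, alpha = k/(k − 1) × (1 − sum of item variances / variance of the summed scale); the sketch below uses illustrative data, not the survey's.

# Sketch: Cronbach's alpha for a (respondents x items) matrix of Likert
# scores. The answer matrix is illustrative, not the survey data.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the scale
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1.0 - item_var / total_var)

answers = np.array([[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 3], [4, 4, 5]])
print(round(cronbach_alpha(answers), 3))  # values above ~0.7 are usually accepted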
5.1.2. Validity
The validity of a measure refers to the extent to which it measures what it was intended to measure. Three types of validity are generally considered: content validity, criterion-related validity, and construct validity. Content validity cannot be determined statistically, but only by experts and by reference to the literature. Criterion validity concerns the ability of the research instrument to predict an objective outcome. Construct validity measures the extent to which the items in a scale all measure the same construct.
We derived content validity from two extended reviews of the recent BPR literature. O'Neil and Sohal (1999) identified six main dimensions of BPR on the basis of a review of over 100 references covering the period from the late 1980s to 1998. They concluded that empirical research in BPR has been lagging, which presents the academic community with a considerable opportunity: rigorous, empirically based research can help demystify the confusion that still surrounds BPR and, at the same time, enable a better understanding of how manufacturing companies function. Another source was an extended literature study based on 133 references (selected from an initial 900) performed by Motwani et al. (1998); these authors identified four main research streams in the BPR area and determined deficiencies and directions for further research.
To establish criterion validity, each item of the questionnaire was critically reviewed by five academics in operations management at the University of Maribor (Slovenia) and the University of Udine (Italy), and also by three general managers from different manufacturing companies. Following the pre-tests, 142 items remained appropriate for conducting the research.
Of the different properties that can be assessed from measurements, construct validity is the most complex and yet the most critical to substantive theory testing. A measurement has construct validity if it measures the theoretical construct or trait that it was designed to measure. Construct validity can also be established through the use of PCA.
At this point, PCA was carried out to uncover the underlying dimensions, eliminate problems of multicollinearity (Belsley et al., 1980) and, ultimately, reduce the number of variables to a limited number of orthogonal factors. First, each multi-item variable was factor-analyzed separately: where items loaded on more than one factor, the items responsible for factors beyond the first were eliminated (or assigned to another variable) and Cronbach's alpha was re-calculated. The variables presented here are all in their final version. A similar procedure was then adopted to group several variables into a more manageable set without surrendering too much information. Rotation was applied to aid interpretation. In interpreting the factor-loading matrix, only loadings above 0.5 were considered (except in a few cases where a variable is transverse to several factors): imposing such a limit retains only those variables which contribute strongly to the formation of a given factor, each factor being named after the variables with the highest loadings.

6. Performance Indicators for BPR Evaluation
From the survey and comparison of theoretical PM models in the existing literature, we gained an insight into the entire extent of the field. One of the basic problems faced when implementing a useful PM system is achieving a balance between a small number of key PMs (clear and simple, but possibly not reflecting all organizational goals) on the one hand, and a greater number of detailed measurements or performance indicators (complex and less suitable for management, but able to capture many different facets of performance) on the other. Generally, a compromise is reached by ensuring a clear connection between the chosen strategy, the key performance parameters reflecting the main performance goals, and a series of performance indicators for each key parameter. When dealing with individual PMs, the most important point is that they must derive from the strategy. Measurement can be a quantification process, but its main aim is to instigate positive action and, as Mintzberg pointed out, strategy can be realized only through consistency between operations and performance.
As the most important contribution of the survey, we can highlight the development of a system of indicators for BPR evaluation. Figure 2 shows the PM system for BPR, which we developed on the basis of real information, gathered by questionnaire, from companies that have gone through this process. To form the new variables, we used methods which are not widespread in this field and which come from psychometrics. When forming the variables, we used a measurement instrument thoroughly examined for reliability and validity; we can therefore say with some certainty that the newly developed variables are empirically based, reliable, and valid.
The first subfield was designated for forming new variables for cost assessment in reengineering. After verifying reliability and validity, we designed new combined variables which were then, on the basis of the coefficient of variation, classified
Figure 2. Performance indicators for BPR evaluation: a system of indicators for estimating reengineering.
• Costs: material costs; work and maintenance costs; inventory costs; total productivity; money flow; costs of new product development
• Quality: internal quality; external quality
• Time: internal company time; external time
• Flexibility: product flexibility; process flexibility; general flexibility
• Reliability: delays; product inventory; employee reliability
• Customer satisfaction: general indicators of customer satisfaction; direct cooperation with customer
• Human resources: absence from work; workforce qualification; promotion and character development of employees; working experiences
according to importance. Among the different types of costs in a company, respondents attributed the greatest importance to the group of total productivity measures. According to the coefficient of variation, defined as the ratio between the standard deviation and the mean value of the survey results, opinions in companies about total productivity were very uniform. Total productivity is followed by material costs, labor costs and services, and cash flow. If we connect the results of the study with the findings of numerous other authors, we find that productivity is the cost-related PM most widely treated in the literature. This raises the question: is the great importance that respondents attribute to productivity measurement perhaps a consequence of the wide study and promotion of productivity in the literature? We must not forget that productivity, traditionally defined as the relationship between total output and total input, still generates problems, not only in defining outputs and inputs but also in assessing their amounts (Burgess, 1990).
Craig and Harris (1973) propose that companies should focus on total rather than partial productivity measurements, an idea also adopted by Hayes et al. (1988), who showed how companies could measure total productivity. From De Toni and Tonchia's (2001) research, we can deduce that traditional measurement systems, focused especially on production costs and productivity, have been reengineered in response to changes stemming from the competitive environment, especially in the direction of measurements they call non-cost PMs, which are becoming increasingly important. Non-cost performance is usually measured in non-monetary units; a direct, exact correlation with economic and financial statements is therefore impossible, which distinguishes it from cost-based performance. In the following, we present the findings gathered from the questionnaire, in the sequence used in the questionnaire.
Quality was examined as the first dimension not directly correlated with cost. As noted above, quality has traditionally been defined in terms of conformance to specification, so quality-based measurements generally focus on indicators such as the number of defects produced and the cost of quality; quality costs are rarely reported separately, although they commonly represent some 20% of the net sales price. In defining these costs a question arises: does an optimal quality level actually exist? In the field of PM, the most appropriate viewpoint is proposed by Crosby, who warns that the majority of companies make the mistake of failing to integrate the quality-cost model into the management process: even if managers estimate quality costs, they take no corresponding action to lower them. With the emergence of TQM, the emphasis moved from conformance to specification toward customer satisfaction, as reflected in the growth of customer satisfaction surveys and market research, the Malcolm Baldrige National Quality Award in the USA, and the European Quality Award in Europe. Other common quality measures include SPC (Deming, 1982; Price, 1984) and Motorola's six-sigma concept; these two are especially important for the design of PM systems, for they focus on process and not on output. On the sublevel of quality we designed, on the basis of the survey results, two new combined variables: internal and external quality. Respondents attribute greater importance to external quality, which includes in-bound quality, customer satisfaction, quality perception, and delivery reliability. Somewhat lesser importance is attributed to internal quality, which includes the level of rework, warranty claims, costs of rework, and costs of the quality system.
Time is described both as a source of competitive advantage and as a basic PM. According to the JIT production philosophy, early as well as late production or delivery counts as a loss; similarly, one of the goals of optimal production is the minimization of throughput times (Goldratt and Cox, 1986), and Galloway and Waldron (1988, 1989) developed the time-based costing system known as throughput accounting. The participants believe that times within the company, such as machine preparation time, waiting times, transport times, and inventory turnover, are very important for company operations.
Although measuring flexibility is of great importance in academic circles and among managers, such measurements are still being developed, especially because flexibility is a multidimensional concept and because there are usually no indicators obtainable by direct measurement (Cox, 1989); direct, objective flexibility measurements are very difficult to implement in practice. Because of the problems arising in the direct definition of flexibility performance, different authors propose the use of indirect indicators. In this case, we must be familiar with unification procedures which combine all the important individual indicators covering different kinds of flexibility; the synthesis must include clear rules for combining individual (elementary) and combined measurements, and the elementary data must be complete, homogeneous (related), and in the appropriate form to be combined optimally. The results of descriptive statistics in the subfield of flexibility show that companies attribute greater importance to general flexibility, including the ability to change schedules, react to customer demands, and innovate products.
As the most important measurement in the subfield of reliability, participants singled out delays: the share of orders delivered too late and the average order delay. The low value of the coefficient of variation shows the uniformity of opinions on this issue. Somewhat lesser importance is given to the reliability of employees, but opinions on this vary considerably. In the subfield of customer satisfaction, participants agreed on the great importance of direct cooperation with customers; they also attributed great importance to the other general customer satisfaction indicators.
The results of descriptive statistics in the subfield of HR pointed to employees' education as the most valuable characteristic influencing the efficiency of the company. Opinion about the importance of employee education is very uniform across the mid-sized and large companies. HR (employees) are surely the most important resource of any company (Milost, 2001). Often, two equally large companies engaged in similar activities in the same environment achieve substantially different business results; the reasons can be numerous, but the difference is usually a consequence of the different work abilities of employees, that is, of differences in the quality of HR. The problem is that the work abilities of employees are not shown in classical balance sheets. Accounting assigns a value statement to individual events in company operations, and thus shows only those assets and obligations which can be expressed in value terms. The result of this approach is that HR, the highest-quality and most important asset of a company, do not appear in balance statements.
This does not necessarily mean that the quality of HR in a company is treated as unimportant. The positive contribution of employees is usually mentioned when business results are presented, but a few dry sentences cannot express their real contribution to successful business operations.
Finally, guidelines for follow-up research should be mentioned, especially on the basis of findings that surfaced during the research. Given the problems of measuring employee abilities, a very interesting field of research is opening in which it would be worthwhile to develop a series of subjective measurements of employee abilities and of employee integration in companies. The option of further studying the correlations between the individual newly developed variables also remains open.

References
Azzone G, C Masella and U Bertelè (1991). Design of performance measures for time-based companies. International Journal of Operations & Production Management, 11(3), 77–85.
Baiman S (2008). Special double issue on the use of accounting data for firm valuation and performance measurement. Review of Accounting Studies, 13(2–3), 167.
Bain D (1982). The Productivity Prescription — The Manager's Guide to Improving Productivity and Profits. New York: McGraw-Hill.
Belsley DA, E Kuh and RE Welsch (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons.
Burgess TF (1990). A review of productivity. Work Study, January/February, 6–9.
Campanella J and FJ Corcoran (1983). Principles of quality costs. Quality Progress, April, 16–22.
Chen IJ and CH Chung (1996). An examination of flexibility measurements and performance of flexible manufacturing systems. International Journal of Production Research, 34(2), 379–394.
Chen CC and WY Cheng (2007). Customer-focused and product-line-based manufacturing performance measurement. International Journal of Advanced Manufacturing Technology, 32(11–12), 1236–1245.
Cooper R (1988). The rise of activity-based cost systems: Part II — When do I need an activity-based cost system? Journal of Cost Management, 41–48.
Cox T (1989). Towards the measurement of manufacturing flexibility. Production & Inventory Management Journal, 68–89.
Craig CE and CR Harris (1973). Total productivity measurement at the firm level. Sloan Management Review, 14(3), 13–29.
Cronbach LJ (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Crosby PB (1979). Quality is Free. New York: McGraw-Hill.
Davenport TH and JE Short (1990). The new industrial reengineering: Information technology and business process redesign. Sloan Management Review, 31(4), 11–27.
De Toni A and S Tonchia (2001). Performance measurement systems, models, characteristics and measures. International Journal of Operations & Production Management, 21(1/2), 46–70.
Deming WE (1982). Quality, Productivity and Competitive Position. Cambridge: MIT.
Dick W and N Hagerty (1971). Topics in Measurement: Reliability and Validity. New York: McGraw-Hill.
Dillman DA (1978). Mail and Telephone Surveys: The Total Design Method. New York: John Wiley & Sons.
Eccles RG (1991). The performance measurement manifesto. Harvard Business Review, 69(1), 131–137.
Edwards C and J Peppard (1994). Forging a link between business strategy and business reengineering. European Management Journal, 12(4), 407–416.
Edwards C and J Peppard (1998). Strategic Development: Methods and Models. New York: Jossey-Bass.
Feigenbaum AV (1961). Total Quality Control. New York: McGraw-Hill.
Flynn BB, S Sakakibara, RG Schroeder, KA Bates and EJ Flynn (1990). Empirical research methods in operations management. Journal of Operations Management, 9(2), 250–285.
Fooks JH (1992). Profiles for Performance: Total Quality Methods for Reducing Cycle Time. Reading, MA: Addison-Wesley.
Galloway D and D Waldron (1988). Throughput accounting part 1 — The need for a new language for manufacturing. Management Accounting, November, 34–35.
Galloway D and D Waldron (1988). Throughput accounting part 2 — Ranking products profitably. Management Accounting, December, 34–35.
Galloway D and D Waldron (1989). Throughput accounting part 3 — A better way to control labour costs. Management Accounting, January, 32–33.
Galloway D and D Waldron (1989). Throughput accounting part 4 — Moving on to complex products. Management Accounting, February, 40–41.
Goldratt EM and J Cox (1986). The Goal: Beating the Competition. Hounslow: Creative Output Books.
Hammer M (1990). Reengineering work: Don't automate, obliterate. Harvard Business Review, 68(4), 104–112.
Hammer M and J Champy (1993). Reengineering the Corporation: A Manifesto for Business Revolution. New York: Harper Business.
Hayes RJ, SC Wheelwright and KB Clark (1988). Dynamic Manufacturing: Creating the Learning Organisation. New York: Free Press.
Herzog NV, A Polajnar and P Pizmoht (2006). Performance measurement in business process re-engineering. Journal of Mechanical Engineering, 52(4), 210–224.
Herzog NV, A Polajnar and S Tonchia (2007). Development and validation of business process reengineering (BPR) variables: A survey research in Slovenian companies. International Journal of Production Research, 45(24), 5811–5834.
House CH and RL Price (1991). The return map: Tracking product teams. Harvard Business Review, January–February, 92–100.
Hudson M, A Smart and M Bourne (2001). Theory and practice in SME performance measurement systems. International Journal of Operations & Production Management, 21(8), 1096–1115.
Johansson HJ, P McHugh, AJ Pendlebury and WA Wheeler (1993). Business Process Reengineering: Breakpoint Strategies for Market Dominance. New York: John Wiley and Sons.
Johnson HT (1975). The role of history in the study of modern business enterprise. The Accounting Review, July, 444–450.
Johnson HT (1983). The search for gain in markets and firms: A review of the historical emergence of management accounting systems. Accounting, Organizations and Society, 2(3), 139–146.
Johnson HT and RS Kaplan (1987). Relevance Lost — The Rise and Fall of Management Accounting. Boston, MA: Harvard Business School Press.
Leong GK, DL Snyder and PT Ward (1990). Research in the process and content of manufacturing strategy. OMEGA International Journal of Management Science, 18(2), 109–122.
Loewenthal JN (1994). Reengineering the organization: A step-by-step approach to corporate revitalization. Quality Progress, 27(2).
Marchand M and L Raymond (2008). Researching performance measurement systems — An information system perspective. International Journal of Operations & Production Management, 28(7–8), 663–686.
Milost F (2001). Računovodstvo človeških zmožnosti [Human Resource Accounting]. ISBN 961-6268-59-7.
Motwani J, A Kumar, J Jiang and M Youssef (1998). Business process reengineering: A theoretical framework and an integrated model. International Journal of Operations & Production Management, 18(9/10), 964–977.
Murphy JC and SL Braund (1990). Management accounting and new manufacturing technology. Management Accounting, February, 38–40.
Neely A (1998). Measuring Business Performance. London: The Economist in Association with Profile Books Ltd.
Neely A (1999). The performance measurement revolution: Why now and what next? International Journal of Operations & Production Management, 19(2), 205–228.
Neely A, M Gregory and K Platts (1995). Performance measurement system design: A literature review and research agenda. International Journal of Operations & Production Management, 15(4), 80–116.
Nunnally JC and IH Bernstein (1994). Psychometric Theory, 3rd Edn. New York: McGraw-Hill.
O'Neil P and AS Sohal (1999). Business process reengineering: A review of recent literature. Technovation, 19, 571–581.
Petrozzo DP and JC Stepper (1994). Successful Reengineering. New York: Van Nostrand Reinhold.
Price F (1984). Right First Time. Aldershot: Gower.
Rossi PH, JD Wright and AB Anderson (1983). Handbook of Survey Research. New York: Academic Press.
Ruch WA (1982). The measurement of white-collar productivity. National Productivity Review, Autumn, 3, 22–28.
Sarkis J, A Presley and D Liles (1997). The strategic evaluation of candidate business process reengineering projects. International Journal of Production Economics, 50, 261–274.
Schniederjans MJ and GC Kim (2003). Implementing enterprise resource planning systems with total quality control and business process reengineering. International Journal of Operations & Production Management, 23(4), 418–429.
Short JE and N Venkatraman (1992). Beyond business process redesign: Redefining Baxter's business network. Sloan Management Review, 34(1), 7–21.
Slack N and M Lewis (2002). Operations Strategy. Harlow: Pearson Education Limited.
Slack N, S Chambers and R Johnston (2001). Operations Management, 3rd Edn. London: Pearson Education Limited.
Talwar RR (1993). Business re-engineering — A strategy-driven approach. Long Range Planning, 26(6), 22–40.
Tennant C (2005). The application of business process reengineering in the UK. The TQM Magazine, 17(6), 537–545.
Tinnilä M (1995). Strategic perspective to business process redesign. Business Process Management Journal, 1(1), 44–59.
Tonchia S (2000). Linking performance measurement system to strategic and organizational choices. International Journal of Business Performance Measurement, 2(1/2/3).
Upton DM (1994). The management of manufacturing flexibility. California Management Review, 36(2), 72–89.
Vantrappen H (1993). Creating customer value by streamlining business processes. Long Range Planning, 25(1), 53–62.
Venkatraman N (1994). IT-enabled business transformation: From automation to business scope redefinition. Sloan Management Review, Winter, 73–87.
Wheelwright SC (1984). Manufacturing strategy — Defining the missing link. Strategic Management Journal, 5, 77–91.
Yung WK-C and DT-H Chan (2003). Application of value delivery system (VDS) and performance benchmarking in flexible business process reengineering. International Journal of Operations & Production Management, 23(3), 300–315.
Zairi M (1997). Business process management: A boundaryless approach to modern competitiveness. Business Process Management Journal, 3(1), 64–80.
Zairi M and D Sinclair (1995). Empirically assessing the impact of BPR on manufacturing firms. International Journal of Operations and Production Management, 16(8), 5–28.
Biographical Note
Dr. Natasa Vujica Herzog is an Assistant Professor in the Laboratory for Production and Operations Management at the Faculty of Mechanical Engineering in Maribor (Slovenia). She received her M.Sc. and Dr.Sc. degrees in Mechanical Engineering at the Faculty of Mechanical Engineering, Maribor, in 2000 and 2004, respectively. She is the author of more than 70 refereed publications, many of them in international journals, scientific books, and monographs. Her research area is operations and production management, in particular business process reengineering (BPR), performance measurement (PM), lean manufacturing (LM), and Six Sigma. She acquired further knowledge and research experience at two other European universities: the University of Udine, Italy, granted her a three-month scholarship for research work with Prof. Stefano Tonchia in the Department for Business & Innovation Management, and she spent several months at the University of Technology, Graz, Austria, working with Prof. Wohinz at the Institute for Industrial Management and Innovation Research. She is a member of the Performance Measurement Association (PMA) and the European Operations Management Association (EUROMA).
Chapter 7
Value Chain Re-Engineering by the Application of Advanced Planning and Scheduling
YOHANES KRISTIANTO∗, PETRI HELO† and AJMAL MIAN‡
University of Vaasa, Department of Production, P.O. Box 700, 65101 Vaasa, Finland
∗[email protected]  †[email protected]  ‡[email protected]
The general purpose of this chapter is to present a novel approach to value chain re-engineering utilizing the new concept of Advanced Planning and Scheduling (APS). The methodology applies collaboration among suppliers, buyers, and customers to fulfill orders. The models show that it is possible to re-engineer the value chain by incorporating the supply side (suppliers) and the demand side (customers) within the new concept of APS. A problem example is given to show how to implement this concept, emphasizing the important aspects of supplier and customer relationships. The concept, however, does not take into account the importance of the service and customer interface or of transport optimization; hence the effect of customer requirements cannot be measured. In terms of managerial implications, this chapter maintains that the value chain should incorporate procurement and product development into the main value chain activities, since both activities communicate more actively with customers. The innovation of this chapter lies in including product commonality and response analysis in the simulation model.
Keywords: Value chain; advanced planning; supply chain management; scheduling; managerial flexibility; market share.
1. Introduction
Meeting customer requirements by customizing the manufacturing strategy is one of the strategic goals that has challenged supply chain managers over time. Since the 1990s, manufacturing in industry has been shifting from mass production to mass customization, a competitive landscape featuring, for instance, process re-engineering and differentiation, which forces the manufacturer to be more flexible and to respond more quickly (Pine, 1993). However, this trend has been adopted slowly; up to 60% of the research articles on the topic appeared just after 2001–2003, with about 60,000 hits during this period (Du et al., 2003). Furthermore, the current trend of mass customization is shown by the
emergence of the personalization concept instead of mere customization (Kumar, 2008; Vesanen, 2007). These authors note that nowadays the firm needs to differentiate itself not only in manufacturing but also in marketing, by satisfying the cumulative requirements of price, quality, flexibility, and agility at an affordable price through the application of information and operational technologies. This trend forces the firm to re-engineer its value chain in order to meet the requirements. Pine (1993) proposed four types of value chain re-engineering based on the differentiation of customization stages; in general, differentiation is categorized according to product and service standardization or customization, and a higher degree of customization in the value chain processes leads to quick-response manufacturing. That idea, however, followed Porter's value chain concept without a breakthrough toward the new phenomenon of mass customization. Starting from this idea, this chapter applies advanced planning and scheduling (APS) to customize the value chain from the back end (supply) to the front end (demand).

1.1. Value Chains and APS
The value chain, as a chain of activities, gives products more added value than the sum of the added values of all individual activities (see Fig. 1) (Porter, 1985). It is important to maximize value creation by incorporating support activities, for instance technology development and procurement. Added value is created by exploiting the upstream and downstream information flowing along the value chain, and firms may try to pass this information to an automated decision maker to create improvements in the value system. In relation to value chain re-engineering, this chapter develops a new model of the value chain by referring to the hierarchical planning tasks of APS. The reason for this decision is that both the Michael Porter value chain and the strategic network
Figure 1. Michael Porter value chain model (support activities: firm infrastructure, human resources, technology development, procurement; primary activities: inbound logistics, operations, outbound logistics, marketing and sales, services).
Figure 2. Michael Porter value chain model and APS decision flow: the primary activities (inbound logistics, operations, outbound logistics, marketing and sales, services) mapped onto the APS flow of procurement, production, distribution, and sales.
Figure 3. Proposed value chain model: collaboration links marketing and sales, product development, operations, service, and purchasing.
planning of the APS model have the same vision of creating added value across the order fulfillment processes. The relationship is described in Fig. 2. From this relationship, this chapter develops a new model of the value chain, as follows.
Figure 3 depicts the new concept of the value chain, running from marketing and sales to product development and procurement. New product development receives information from marketing while, at the same time, back-end operations (the purchasing department) coordinate operations and suppliers simultaneously to fulfill customer demand by optimizing capacity. The model spreads customer information directly to two different sides: the external relation (the suppliers) and the internal relation (the manufacturer). It applies collaboration to improve customer value by using dynamic material planning. Differently from the traditional approach, this model collaborates in every product fulfillment process to synchronize supply and production capability on a real-time basis, granting equal benefit to the manufacturer and the supplier. The value chain then continues to distribution and transport planning, which optimize the entire supply chain by choosing the best distribution channels and transportation.
In relation to APS, Fleischman et al. (2002) describe the hierarchical planning tasks (see Fig. 4), which, at a glance, map the application of value chains from the strategic to the short-term level; the details are represented in Fig. 5, incorporating the support and primary value chain activities. Figure 4 describes task deployment from strategy (long-term planning) to operations (short term), which is detailed further by developing the structure of the
Figure 4. Hierarchy of planning tasks, from long-term (aggregate, comprehensive) through mid-term to short-term (detailed) (from Fleischman et al., 2002).
hierarchical planning tasks from the Supply Chain Planning Matrix (Stadtler, 2005). The authors propose two collaboration interfaces, with the customers and with the suppliers, as depicted in Fig. 5.

Figure 5. Collaboration between APS (from Meyr, 2002): two supply chain planning matrices (covering strategic network planning; master planning; demand planning; production planning and scheduling; distribution and transport planning; demand fulfillment and ATP; and purchasing and MRP across procurement, production, distribution, and sales), linked through collaboration at the demand fulfillment/ATP and purchasing/MRP levels.

In relation to the mass customization issue, this situation helps supply chains to become more flexible by assessing each function's core competence within the supply chain and finding possibilities for strategic sourcing instead of in-house manufacturing. In this chapter, we propose an APS methodology to create a link between internal and external operational planning within supply chains and so enable collaboration between APS systems (Fig. 5). Unfortunately, this opportunity is poorly supported by the existing APS function, which is characterized as follows:
1. In practice, APS usually concentrates on managing production planning and scheduling using sophisticated algorithms. Figure 5, however, ignores the collaboration between the supplier's available-to-promise (ATP) and the buyer's
Material Requirement Planning (MRP), assuming that the supplier has infinite production capacity and fixed lead times, and ignoring the production schedule and sequence (Chen and Ji, 2007).
2. In addition to MRP and scheduling synchronization, APS does not allow for activity outsourcing or manufacturing strategy customization. This chapter instead proposes an optimized push-pull manufacturing strategy as well as sourcing strategy optimization. The advantages of this approach are that the manufacturer can reduce production traffic by outsourcing some activities, and can promise prompt delivery by using promised lead times in the ATP module and by collaborative material planning, in which the supplier's and buyer's production schedules are synchronized according to production capacity.
3. Integration with the Agile Supply Demand Network (ASDN) adds benefit to this APS model through its ability to reconfigure the supply chain network and to measure the value of an order by financial analysis.
Figure 6 represents the APS scheme and shows the difference between the new and the existing APS. This new APS model is developed to represent value chain re-engineering. Concurrent engineering is shown by customer and supplier involvement in the process: R&D is included in purchasing, and customer involvement is included, in order to describe supplier responsibility for product design. At the same time, MRP is excluded from the model to represent dynamic material planning; as a replacement, we use collaborative material planning in order to emphasize supply synchronization.
Figure 6. Proposed APS model: strategic network planning, master planning, demand planning, production planning, scheduling, collaborative material planning, and demand fulfillment and ATP, spanning procurement, production, and distribution and connected by information, decision, and physical flows.
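As a toy illustration of the demand fulfillment and ATP block in Fig. 6, the sketch below checks a buyer's order against a shared free-capacity profile, the kind of information that collaborative material planning would synchronize between buyer and supplier; all names and figures are hypothetical, not part of the chapter's model.

# Sketch: capacity-aware available-to-promise (ATP) check against a
# per-period free-capacity profile. All names and figures are hypothetical.

def promise_period(order_qty, free_capacity):
    """Earliest period by which the order can be completed, or None."""
    remaining = order_qty
    for period, free in enumerate(free_capacity):
        remaining -= min(free, remaining)
        if remaining == 0:
            return period
    return None

free = [40, 40, 40, 40]   # units still uncommitted in periods 0..3
due = 3                   # period requested by the customer
p = promise_period(120, free)
print(p, p is not None and p <= due)   # 2 True -> the order can be promised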
Notwithstanding this new approach, this chapter is organized according to the logic of common APS. First, APS is introduced at a glance (Sec. 1.2). On the internal coordination side, demand planning is discussed in Sec. 2.1; it informs master planning (Sec. 2.2), which enables ATP (Sec. 2.3) by fulfilling the promised lead time (Sec. 4.3.1) and inventory level (Sec. 4.3.2) and by optimizing the production sequence and schedule (Sec. 4.4). On the external coordination side, material planning (Sec. 4.5) and network planning (Sec. 4.6) are also optimized. Moreover, this APS is able to optimize the supply strategy (Sec. 4.2.2) as well as the product development process (Sec. 4.2.4). Its key feature is profit optimization for the entire supply chain through simulation in the ASDN software (Sec. 4.6).

1.2. APS
Advanced Planning and Scheduling (APS) can be defined as a system and methodology in which decision making, such as planning and scheduling for industries, is federated and synchronized between different divisions within or between enterprises in order to achieve total and autonomous optimization. Unlike other available systems, APS simultaneously plans and schedules production based on available resources and capability, which usually yields a more realistic production plan (Chen and Ji, 2007). APS is generally applied where one or more of the following conditions are satisfied:
• Make-to-order manufacturing instead of make-to-stock
• Products requiring a large number of components or manufacturing tasks
• A capital-intensive manufacturing process where capacity is limited
• Products competing with each other for the same resources
• Unstable resource-scheduling situations that cannot be planned beforehand
• The need for a flexible manufacturing approach
Advanced Planning and Scheduling (APS) improves the integration of materials and capacity planning by using constraint-based planning and optimization (Chen, 2007; van Eck, 2003). There are possibilities to include suppliers and customers in the planning procedure and thereby optimize a whole supply chain on a real-time basis. APS utilizes planning and scheduling techniques that consider a wide range of constraints to produce an optimized plan (van Eck, 2003), for example:

• Material availability
• Machine and labor capacity
• Customer service level requirements (due dates)
• Inventory safety stock levels
• Cost
• Distribution requirements
• Sequencing for set-up efficiency

Furthermore, in the area of supply chain planning, there has been a trend to embed sophisticated optimization logic into APS to help improve the decisions of supply chain planners. Used successfully, it not only supports the supply chain strategy but also significantly improves the competitiveness of a firm. Some areas of possible improvement are listed below (Stadtler, 2002):

• Improved competitiveness
• A more transparent process
• Improved supply chain flexibility
• Revealed system constraints
Furthermore, Fleischmann et al. (2002) mention three main characteristics of APS:

1. Integral and comprehensive planning of the entire supply chain, from supplier to end customer.
2. True optimization by properly defining alternatives, objectives, and constraints.
3. A hierarchical planning system from top to bottom that requires cooperation among the various tasks in the entire supply chain.

2. Architecture of Proposed APS
With regard to the need for personalization in the whole value chain, this chapter tries to fill the gap between this requirement and the existing APS by seeking the following benefits:

1. Within value chain building, the most important thing is to maximize value for customers. This chapter supports that requirement by proposing a reconfigurable push-pull manufacturing strategy. The strategy can adapt to Bill-of-Materials (BOM) changes by reconfiguring the push-pull manufacturing strategy (front side). In order to support the strategy, this APS also optimizes product commonality to minimize the inventory level as well as production lead times (back side).
2. Within e-customization, the customer deals directly with the manufacturer. The issue that arises is how to minimize the customer's losses (time and options) and at the same time the manufacturer's losses (overhead costs, for instance extra administration and order costs). This APS model can minimize both burdens by offering an optimum design platform to the customer and the suppliers and a reasonable inventory allocation using the push-pull manufacturing strategy (Fig. 7).
Figure 7. APS model connection to ERP and SC Execution Planning (SCEP). (ERP sales data, namely the bill of material, order lead times, and order locations, feed the APS modules of demand planning, master planning, and distribution and transport planning, whose output drives SC execution planning: total inventory value, total profit, and total lead times.)
With regard to the integration issue, this APS module is composed as follows; the details of the architecture are elaborated below.

2.1. Demand Planning
Before proceeding with any production planning process, it is important to calculate the level of demand within the company. Wagner (2002) explored the three main parts of demand planning, namely forecasting, what-if analysis, and safety stock calculation. The purpose of forecasting is to produce a prediction of future demand. What-if analysis is used as a risk management tool to determine the safety stock level; this ensures proper utilization of space, minimizes costly inventory, and brings integrity to the company's supply chain and logistics network. Demand planning thus requires forecasting, and what-if analysis is conducted to make the optimal calculation of the required inventory and safety stock levels. This chapter, however, comprises an order-based APS in which forecasting is only conducted within the push manufacturing strategy.

2.2. Master Planning
Master planning is used to balance supply and demand by synchronizing the flow of materials within the supply chain (Meyr et al., 2002). The capacity decision from demand planning is used to set the product and material price and the manufacturing strategy, considering lead times and inventory availability from ATP and the possible suppliers' capability from collaborative material planning. Furthermore, master planning is also supported by production schedule information received from the production planning and scheduling module (see Fig. 8).

2.3. ATP
ATP is used to guarantee that customer orders are fulfilled on time and, in certain cases, even faster; the logic is shown in Fig. 9. Figure 9 shows three customers with different requirements, situated at different locations. ATP optimizes resource assignments such as materials, semi-finished goods (sub-assemblies), and production capacity to guarantee that all orders are fulfilled on time.
Figure 8. Decision and information sequence within APS. (Strategic network planning: agile supply demand networks, transportation optimization, distribution center optimization. Demand planning: production capacity. Master planning: push-pull manufacturing strategy, supply strategy, product and material price, design strategy. Collaborative material planning: dynamic material order, warehouse stocks, inventory requirement. Available to Promise (ATP): promised lead times. Production planning and scheduling: production sequence, production schedule. Modules are linked by decision and information flows.)
Figure 9. Available-to-Promise. (Three customers are served from shared resources: material, sub-assembly, and production capacity.)
Furthermore, the model is also constrained by inventory level, order batch size, supplier capability, and set-up cost constraints. These search dimensions are applied one by one in order to fulfill the customer's request. It is easy to observe that this model emphasizes an iterative approach to solving the ATP problem. The ATP problem, however, goes far beyond that idea: the promise must be fulfilled by the supplier, the manufacturer, and the
distributors. This idea supports ASDN by moving the previous APS paradigm from enterprise APS to supply chain APS (see Fig. 6). Related to this idea, this chapter shifts some tasks of ATP to master planning by customizing the push-pull manufacturing strategy for each product type and assessing the supply strategy according to the sourcing options; the ATP module's functions are thus limited to inventory level and lead time optimization. The impact of this stage can be explained in two ways. First, the global decision within the supply chain is better represented by responsibility on all sides (distributors, manufacturers, and suppliers), so that resource assignments can also be developed across supply chains. Second, it is easier to expand the supply network planning in the future by adding new members to the supply chain incrementally. This is reasonable since, for example, if demand continuously increases so that one component needs to be supplied by more than two suppliers, the APS can collaborate with them.

2.4. Production Planning and Scheduling
This module is intended for short-term planning within APS: it sequences the production activities in order to minimize production time. In detail, Stadtler (2002b) describes a model for a production schedule as in Fig. 10. Figure 10 depicts the production schedule model building, which extracts daily operational information from the ERP, such as locations, parts, bills-of-material (BOM), production routing, supplier information, set-up matrices and timetables
Figure 10. Production planning and scheduling procedure (from Fleischmann, 2002): (1) model building; (2) extracting the required data from the ERP system and master planning; (3) generating a set of assumptions (a scenario); (4) generating an initial production schedule; (5) analysis of the production schedule and interactive modification; (6) scenario OK?; (7) executing and updating the production schedule via the ERP system until an "event" requires re-optimization.
(Stadtler, 2002b). This chapter applies similar optimized scheduling to all products by using a Traveling Salesperson Problem (TSP) algorithm.

2.5. Collaborative Material Planning
In the traditional approach of operations management tools, material requirements follow a top-down hierarchical approach that starts with the Master Production Schedule (MPS); the schedule is then detailed into Material Requirement Planning (MRP), ignoring capacity constraints and assuming fixed lead times. This chapter, in contrast, replaces the MPS and MRP functions by applying collaborative material planning (see Fig. 6), consisting of supplier and buyer integration, including a system dynamics approach (see Fig. 8), following a supply synchronization model, and replacing the MRP with collaborative material planning (Holweg et al., 2005). Notably, the model incorporates purchasing and product development, which provides master planning not only with the internal capability (ATP and production planning) but also with the supplier capability: the maximum time within which, and the quantity in which, components can be delivered.

2.6. Distribution and Transport Planning
Distribution planning is closely correlated with transport agreements for shipping goods from manufacturers to customers. Shipments can go directly from the factory or from distribution centers to customers, depending on the order types and distances. This typical distribution channel enhances supply chain integration among manufacturers, distributors, and customers, who need to plan ahead of time. Furthermore, integrated transport planning decreases costs substantially, since relatively small shipments incur higher costs than larger ones. The distribution and transportation costs also depend on the locations of factories, suppliers, DCs (distribution centers), and TPs (transshipment points). The correlation between the distribution and transport planning module and the other APS modules, as described by Fleischmann (2002), is summarized in Fig. 11. In this chapter, ASDN is used to investigate the profitability of supply chain networks by considering transportation as well as distribution centers. By applying information from demand and master planning, ASDN enables us to find the supply chain profit, inventory value, and total lead times, even though the software does not iterate network optimization procedures. The model can nevertheless be represented as strategic network planning, as below.

2.6.1. Strategic networks planning
In strategic network planning, firms generally focus on long-term strategic planning and design of their supply chain (see Fig. 6). It is therefore related to long-term
Figure 11. Distribution and transport interfaces. (Strategic network planning covers the locations of factories, suppliers, DCs, and TPs, transport modes and paths, and supplier and customer allocation; demand planning covers delivered customer orders, DC demand forecasts, and DC safety stocks; master planning covers aggregate quantities shipped on every transport link and seasonal stock dynamics at warehouses and DCs; production scheduling covers net requirements timed at the planned departure of shipments from the factory, plus planned and released production orders. All feed distribution and transport planning.)
decisions, such as plant location and physical distribution structure (Meyr et al., 2002). During the process, compulsory information, for instance the product family structure and market share, potential suppliers, and manufacturing capability, is used to decide whether the planning concerns expansion or collaboration. For example, a car company may wish to expand into a new market area: it may develop its own business by locating facilities (factories, distribution centers, and warehouses) there, or consolidate with an existing company. It is also possible to re-evaluate a previous strategic plan, for instance when a manufacturer intends to relocate its factories to a country with cheaper labor costs, bringing advantages such as a cheap labor market, low raw material costs, and opportunities in new local markets. Owing to its impact on long-term profitability and competitiveness, the planning depends on aggregate demand forecasting and economic trends in the market. It is therefore a challenging task, since the planning period ranges from 3 to 10 years, over which all the decision parameters may change, for instance customer demand behavior, market power, and supplier capability. The strategy becomes complicated if companies execute their strategic planning infrequently and do not update it periodically. The main objective of this type of planning in relation to value chain re-engineering is to reconfigure the manufacturing process, which is embodied in developing ASDN (Fig. 12). The model therefore collects information from medium- and short-term planning, for instance vendors and distribution facilities among suppliers, distributors, and manufacturers, to be optimized against the product configuration. The interfaces among them are depicted as follows.
Figure 12. Strategic network planning and customer needs alignment. (Sales planning feeds demand and master planning, which feed strategic network planning (ASDN); ATP, material planning, etc. close the loop.)

Figure 13. ASDN approach for networks design. (Demand parameters, namely the importance of delivery time, available to promise, and On Time Delivery (OTD), together with the enterprise strategy and supply parameters, namely capacity, time delays, OTD, and quality, enter ASDN network modeling along with demand pattern distribution variation. Outputs cover the supply demand network strategy, lot sizing decisions and ordering policies (lot for lot, periodic, ABC analysis), the architecture of the network and the order decoupling point location policy (MTS, ATO, MTO, ETO), sales and operations planning, inventory execution, and cycle stock/safety stock.)
Figure 12 depicts the planning connection to the product database, which is used to reconfigure demand and master planning, which in turn reconfigure the strategic network planning. The details of the ASDN operations are represented in Fig. 13. Before going further, it is useful to study existing APS software in order to find the path for improvement. This chapter takes two APS software examples, namely SAP APO and ASPROVA APS, which are described in more detail in the next section.
3. Contribution to APS Software Development
APS has increasingly been used alongside Enterprise Resource Planning (ERP) and is implemented in several commercial software packages, for example SAP APO and ASPROVA. This chapter looks beyond a comparison toward the possible further development of such software, in light of the architecture above.

3.1. SAP Advanced Planner and Optimizer (APO)
SAP Advanced Planner and Optimizer (APO) is a well-known example of an APS software package. SAP APO is designed to support the planning and optimization of a supply chain and works both via linkages to ERP packages and on its own. Many other software packages follow the same structure (Buxmann and König, 2000, p. 100):

1. The planning modules consist of procedures for "Demand Planning," "Supply Network Planning," "Production Planning and Detailed Scheduling," and "Available to Promise."
2. The user interface (UI), "The Supply Chain Cockpit," offers visualization and control of the structure of logistics chains. The UI facilitates the graphical representation of networks of suppliers, production sites, facilities, distribution centers, customers, and transshipment locations. Additionally, the Alert Monitor engine makes it possible to track supply chain processes and identify event-initiating problems and bottlenecks.
3. The Solver is an optimization engine that employs various algorithms and solution procedures for solving supply chain problems. Forecast modeling techniques such as exponential smoothing and regression analysis are built in for demand planning, and branch-and-bound procedures and genetic algorithms are available for production and distribution planning.
4. Simulation of changes is enabled by an architecture for computing- and data-intensive applications that allows simulation, planning, and optimization activities to run in real time.

In this software, optimization is bounded by an optimization range and resource allocation. The optimization range differs according to whether the optimization horizon or resources are transferred: the optimizer adjusts each activity within the optimization range, while activities outside it remain fixed and, owing to the interrelation between activities in the two regions, the fixed activities determine their action according to their flexibility. The relationship table for scheduling optimization is described below (Table 1). Another SAP APO facility is network design, which creates an analysis of entire networks with regard to locations, transportation networks, facility location, and even current territorial divisions. In practice, these designs
Table 1. Relationship Table in Scheduling Problem by SAP APO.

1st activity | 2nd activity | Relationship     | Definition
Fixed        | Non fixed    | Maximum interval | Latest start or finish date
Fixed        | Non fixed    | Minimum interval | Earliest start or finish date
Non fixed    | Fixed        | Maximum interval | Earliest start or finish date
Non fixed    | Fixed        | Minimum interval | Latest start or finish date
comprise inbound and outbound logistics planning, such as sourcing decisions, transportation mode determination, and warehouse location evaluation, according to different demand-supply patterns, varying costs, and capacity constraints.

The discussion of SAP APO produces the following conclusions:

1. The user interface in SAP APO helps the APS planner to investigate the profit performance of the entire supply chain. This chapter uses ASDN for the same objective.
2. The Solver optimizer is used in SAP APO to optimize scheduling problems and demand forecasting. This chapter, by contrast, applies an optimization tool to optimize the supply and manufacturing strategy, enhancing the role of the optimizer from the operational to the tactical and strategic levels.
3. SAP APO excludes supply-side optimization in terms of long-term planning (Stadtler, 2005), which is important to support ATP. The new model puts this planning higher in the hierarchy by positioning material planning collaboration comprising the product development, procurement, and production functions.
4. Alongside these advantages, this model has a limitation related to distribution and transport planning, where the optimizer still needs to be developed.

3.2. ASPROVA APS
ASPROVA APS is developed according to the following logic. Figure 14 shows the ASPROVA APS main view, which exhibits the production scheduling process: order and shop floor data are received to build a production schedule, and the scheduling operator uses master data (production capability) to issue work instructions and purchase orders to the suppliers. ASPROVA APS, however, is concerned with scheduling operations rather than covering all APS components, for instance demand planning, master planning, and transportation and distribution scheduling. Some limitations of this software are:

1. ASPROVA APS does not apply demand planning, for instance capacity or manufacturing strategy planning;
Figure 14. ASPROVA APS operation image. (Order data, master data, and shop floor results enter the scheduling step, which issues work instructions and purchase orders.)
2. ASPROVA APS does not visualize the supply chain network optimization; and
3. The impact of these two limitations is that ASPROVA APS cannot link itself to a supply chain execution program or an ERP system and remains a stand-alone tool.

4. Problem Example
Below is an example of an APS application in the truck industry, represented in Fig. 15. The varieties of the product structure are illustrated in Tables 2 and 3. From this example, this section explains the modeling step by step, as follows.

4.1. Demand Planning
The demand planning process originates from the forecasting part, which is followed by capacity planning, promised lead times, the push-pull manufacturing strategy, and the material and inventory requirements. The planning is shown in detail in the following example.

4.1.1. Forecasting
Forecasting is required for long-term capacity planning rather than weekly demand, because this APS is intended to customize orders. This chapter does not discuss forecasting techniques in depth: any available technique can be used, and the choice depends on the demand pattern. In general, time series analysis can be used, assuming that demand increases as markets and customers expand continuously.

4.1.2. Capacity decision
The capacity decision is established first to inform the firm's supply and manufacturing strategy.
Figure 15. Bill-of-Material (BOM). (The truck comprises a body with office package, interior decoration, resting package, and an audio package of radio, CD player, speaker, and cabinet; an engine; a power train with gear box; and a chassis with frame, suspension, front axle, and rear axle. The front and rear wheels each consist of a tire, a rim, and a wheel sheet.)
This chapter applies the newsboy problem to minimize over- and under-stocking, as follows:

E(C) = h · E(Q − D)⁺ + p · E(D − Q)⁺    (7.1)

Integrating Eq. (7.1), we get:

E(C) = h · ∫_{D/Q}^{1} (Q·x − D)⁺ dx + p · ∫_{0}^{D/Q} (D − Q·x)⁺ dx = [h·(Q − D)² + p·D²] / (2·Q)    (7.2)

Optimizing Eq. (7.2) with respect to Q, the optimal production quantity (Q) can be determined as:

Q_{1,2} = √(p + h) · D    (7.3)

Equation (7.3) gives the result of the capacity decision.
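As a hedged illustration of Eqs. (7.1)-(7.3) as reconstructed above, the short Python sketch below evaluates the expected cost and the capacity rule. The parameter values come from the Model FH1 column of Table 3; the function names are ours, not part of any APS package.

import math

def expected_cost(Q, D, h, p):
    # Expected over/under-stock cost of Eq. (7.2)
    return (h * (Q - D) ** 2 + p * D ** 2) / (2 * Q)

def optimal_capacity(D, h, p):
    # Capacity rule as reconstructed in Eq. (7.3)
    return math.sqrt(p + h) * D

Q_star = optimal_capacity(D=50, h=1, p=1)   # Model FH1 column of Table 3
print(round(Q_star, 1), round(expected_cost(Q_star, D=50, h=1, p=1), 1))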
Table 2. Truck Parts List.

Parts               | Model FH1          | Model FH2
Body                | FHDA               | FHDA
Office package      | Opl00              | Opll0
Interior decoration | FHDA1              | FHDA2
Resting package     | RP001              | RP002
Radio               | FH001, FH002       | FH003, FH004, FH005
CD Player           | 6 disc             | 6 disc
Speaker             | Doors              | Doors+rearWall
Engine              | D13A-360HP         | D13A-400HP
Gear box            | Powertronic 5sp    | Powertronic 5sp
Frame               | 4x2                | 6x2
Front axle          | FSH 1370           | FSH 1370
Rear axle           | Hub reduction 1370 | Hub reduction 2180
Tire (Front)        | 385/65-22,5        | 385/65-22,5
Rim (Front)         | FR22,5             | FR22,5
Tire (Rear)         | 315/70-22,5        | 315/70-22,5
Rim (Rear)          | FR24,5             | FR24,5

Table 3. Required Parameters for Product Manufacturing.

Parameter                 | Model FH1 | Model FH2
Penalty cost              | 1         | 15
Holding cost              | 1         | 4
Annual demand             | 50        | 50
Order cost                | 1         | 1
Production cost           | 10        | 10
Setup cost                | 4         | 4
Material cost             | 1         | 1
Production rate per month | 200       | 200
4.2. Master Planning

4.2.1. Push-pull manufacturing strategy
The Customer Order Decoupling Point (CODP) is assigned to the components or parts that are fabricated internally. In this chapter, we categorize the CODP as make-to-stock (MTS), assemble-to-order (ATO), or make-to-order (MTO). The objective is to give the least waiting time and operations costs (holding, penalty, and production cost). We define the processing time in one node as consisting of the supplier delivery time, production time, and delivery time to the customer. Let us assume that demand has
inter-arrival variance (σ_A) and the assembly process has process time variance (σ_B). According to the GI/G/1 queue system, we have:

L = λ²·(σ_A² + σ_B²) / (2·(1 − ρ)) + ρ    (7.4)

where L is the number of orders, λ is the demand rate, and ρ is the utilization factor. This equation tells us whether there is a queue in the production line. To determine the optimum decision, we insert it into the cost function E(C) = C_P·µ + C_W·L, where C_P is the order processing cost and C_W the waiting cost (Table 4). The cost function can be generalized into:

E(C) = C_P·µ + C_W·( λ²·µ·(σ_A² + σ_B²) / (2·(µ − λ)) + ρ )    (7.5)

Equation (7.5) can be optimized with respect to µ so that we have:

C_P + (σ_A² + σ_B²)·C_W / (2·(µ − λ)) − (σ_A² + σ_B²)·C_W·µ / (2·(µ − λ)²) = 0    (7.6)

2·λ − √(2·(σ_A² + σ_B²)·λ·C_W·C_P) / (2·C_P) ≤ µ* ≤ 2·λ + √(2·(σ_A² + σ_B²)·λ·C_W·C_P) / (2·C_P)    (7.7)

Equation (7.7) can be modified by positing λ as the dependent variable and µ as the independent variable, so that we have:

( µ* + (σ_A² + σ_B²)·λ·C_W / (2·C_P) ) / 2 ≤ λ    (7.8)

Table 4. Push-Pull Manufacturing Decision for Each Component.
Product     | σA | σB | λ   | Cw | Cpr | µ upper | µ lower | µ actual | MTO/MTS/ATO
Radio       | 10 | 10 | 100 | 1  | 5   | 245     | 155     | 20       | MTS
CD player   | 10 | 10 | 100 | 1  | 5   | 245     | 155     | 200      | ATO
Speaker     | 10 | 10 | 100 | 1  | 5   | 245     | 155     | 100      | MTS
Front tire  | 20 | 20 | 200 | 1  | 5   | 526     | 274     | 200      | MTS
Front rim   | 20 | 20 | 200 | 1  | 5   | 526     | 274     | 200      | MTS
Rear tire   | 40 | 40 | 400 | 1  | 5   | 1158    | 442     | 200      | MTS
Rear rim    | 40 | 40 | 400 | 1  | 5   | 1158    | 442     | 200      | MTS
Truck FH1   | 40 | 40 | 100 | 1  | 5   | 379     | 21      | 200      | ATO
Truck FH2   | 40 | 40 | 100 | 1  | 5   | 379     | 21      | 200      | ATO
Power train | 10 | 10 | 100 | 1  | 5   | 200     | 155     | 200      | ATO
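The bounds in Eq. (7.7) reproduce the µ upper and µ lower columns of Table 4, which suggests the following minimal Python sketch of the CODP decision. The "ATO inside the window, MTS outside" rule is our reading of the table, not a rule the chapter states explicitly.

import math

def mu_bounds(lam, sigma_a, sigma_b, c_w, c_p):
    # Production-rate window of Eq. (7.7)
    half = math.sqrt(2 * (sigma_a ** 2 + sigma_b ** 2) * lam * c_w * c_p) / (2 * c_p)
    return 2 * lam - half, 2 * lam + half

def codp(mu_actual, lam, sigma_a, sigma_b, c_w=1.0, c_p=5.0):
    lower, upper = mu_bounds(lam, sigma_a, sigma_b, c_w, c_p)
    return "ATO" if lower <= mu_actual <= upper else "MTS"

print(mu_bounds(100, 10, 10, 1, 5))   # about (155.3, 244.7): the Radio row
print(codp(20, 100, 10, 10))          # MTS, as in Table 4
print(codp(200, 100, 10, 10))         # ATO, the CD player row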
Equation (7.8) is a prerequisite for form postponement. If λ exceeds that limit, form postponement should be changed to time postponement, and vice versa. This strategy enables the supply chain to determine the right time for switching from assemble-to-order to make-to-order and vice versa. The repositioning strategy can also be used for an over-production rate.

4.2.2. Supply strategy
Supply strategy means deciding which parts should be ordered from suppliers and which should be produced in-house. The discussion is separated into two models, the make-or-buy decision and the single- or dual-sourcing strategy, detailed as follows.

In the outsourcing case, suppose the supplier and the firm have established a long-term contract by choosing the incentive I and penalty cost p for the suppliers. The firm gives an incentive to the suppliers whenever they can meet the firm's customer demands D within the predetermined range D ± ε*_t. If production accuracy (ε_t − ε*_t) is to be a common objective between the suppliers, then for each i, ε*_t must maximize the supplier's expected profit net of penalty and holding costs; that is, ε*_t must solve:

max_{ε_t ≥ 0} I·Prob{q(ε_t) = q(ε*_t)} − p·Prob{q(ε_t) < q(ε*_t)} − h·Prob{q(ε_t) > q(ε*_t)}
= (p − h)·Prob{q(ε_t) > q(ε*_t)} − p + (I + p)·Prob{q(ε_t) = q(ε*_t)}    (7.9)

The first-order condition for Eq. (7.9) is:

(p − h)·∂Prob{q(ε_t) > q(ε*_t)}/∂ε_t = (I + p)·Prob{q(ε_t) = q(ε*_t)}    (7.10)

That is, the firm and the suppliers choose the incentive I, penalty p, and holding cost h such that the probability of over- or under-estimating the variance, Prob{(ε_t − ε*_t) > 0}, is minimized. From Bayes' rule:

Prob{q(ε_t) > q(ε*_t)} = Prob{ε_t > q*_t + ε*_t − q_t}
= ∫_{ε*_t} Prob{ε_t > q*_t + ε*_t − q_t | ε*_t}·f(ε*_t) dε*_t
= ∫_{ε*_t} [1 − F(q*_t + ε*_t − q_t)]·f(ε*_t) dε*_t    (7.11)

So the first-order condition of Eq. (7.11) becomes:

(p − h)·∫_{ε*_t} f(q*_t + ε*_t − q_t)·f(ε*_t) dε*_t = (I + p)·Prob{q(ε_t) = q(ε*_t)}

In a steady state (i.e., q*_t = q_t), we have:

(p − h)·∫_{ε_t} f(ε_t)² dε_t = (I + p)·Prob{q(ε_t) = q(ε*_t)}    (7.12)
If ε is normally distributed with variance σ², for example, then:

∫_{ε_t} f(ε_t)² dε_t = 1/(2σ√π)    (7.13)

(p − h) / (2σ√π·(I + p)) = Prob{q(ε_t) = q(ε*_t)}    (7.14)

If σ is assumed to be continuously distributed (N → ∞), then σ = ∫ (ε_t − ε*_t)²·p(ε_t) dε_t, and Eq. (7.13) becomes:

(ε_t − ε*_t) = (p − h)² / (2√π·(I + p)²)    (7.15)

By defining the total cost to the firm as c = h·(ε_t − ε*_t)⁺ + p·(ε*_t − ε_t)⁺ + (λ/D)·C_O, replacing (ε_t − ε*_t)⁺ with (p − h)²/(2√π·(I + p)²), and doing some integration operations, we have:

c_outsource = [ h·( (p − h)²/(2√π·(I + p)²) )² + p·(ε*_t)² ] / [ 2·( (p − h)²/(2√π·(I + p)²) + ε*_t ) ] + (λ/D)·C_O    (7.16)

With in-sourcing, we have the following cost function:

E(TC)_insource = h·E(Q − D)⁺ + p·E(D − Q)⁺ + C_D·Z + (λ/D)·C_O + C_P·(t_S + D/µ) + C_Pur·q    (7.17)

D is the order quantity and Q the production capacity. For simplicity of analysis, we represent the part inventory as (Q − D)⁺ and the part backorder as (D − Q)⁺. Equation (7.17) can be solved by integrating the first two terms:

E(TC)_insource = h·∫_{D/Q}^{1} (Q·x − D)⁺ dx + p·∫_{0}^{D/Q} (D − Q·x)⁺ dx + (λ/D)·C_O + C_P·(t_S + D/µ) + C_Pur·q

And we get:

E(TC)_insource = [h·(Q − D)² + p·D²] / (2·Q) + (λ/D)·C_O + C_P·(t_S + D/µ) + C_Pur·q    (7.18)

where λ is the demand rate, C_O the order cost, C_P the production cost, t_S the setup cost, µ the production rate, C_Pur the material cost, and q the material quantity.
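Assuming the reconstructions of Eqs. (7.16) and (7.18) above are faithful, the make-or-buy comparison stated in the next paragraph can be sketched as below. All parameter values are hypothetical and the helper names are ours.

import math

def outsourcing_cost(h, p, I, eps_star, lam, D, c_o):
    # c_outsource of Eq. (7.16), with the steady-state gap of Eq. (7.15)
    gap = (p - h) ** 2 / (2 * math.sqrt(math.pi) * (I + p) ** 2)
    return (h * gap ** 2 + p * eps_star ** 2) / (2 * (gap + eps_star)) + (lam / D) * c_o

def insourcing_cost(Q, D, h, p, lam, c_o, c_p, t_s, mu, c_pur, q):
    # E(TC)_insource of Eq. (7.18)
    return ((h * (Q - D) ** 2 + p * D ** 2) / (2 * Q)
            + (lam / D) * c_o + c_p * (t_s + D / mu) + c_pur * q)

ins = insourcing_cost(Q=141.4, D=100, h=2, p=4, lam=100, c_o=1,
                      c_p=5, t_s=2, mu=500, c_pur=5, q=1)
out = outsourcing_cost(h=2, p=4, I=1, eps_star=0.1, lam=100, D=100, c_o=1)
print("outsource" if ins > out else "insource", round(ins, 1), round(out, 1))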
Our decision rule is as follows: if E(TC)_insource > c_outsource, then outsourcing is chosen; otherwise, insourcing is the option. In addition to the sourcing strategy, a procedure is suggested below for choosing whether single or dual sourcing is the appropriate option.

4.2.3. Single or dual sourcing strategy (buy decision)
Suppose outsourcing is the best option and the manager now faces a choice between single and dual sourcing. We consider a Bertrand duopoly model (see Gibbons, 1992) with the price function for retailers given by:

q = b − p₁ + γ·p₂ + ε*_t    (7.19)
where p₁ and p₂ are the prices of suppliers 1 and 2, and γ is the supplier process commonality. Unlike Elmaghraby (2000), who approaches the buying decision according to price uncertainty, this chapter takes quantity uncertainty into account in order to represent demand variety. It also accommodates Forker and Stannack's (2000) argument for applying competition between suppliers; the suppliers' cooperation is also considered by applying the product compatibility degree γ.

In the Cournot game, suppliers choose their own price to maximize their profit, taking their opponent's price as given. We thus propose a methodology similar to the Cournot game, except that we take into account the quantity at infinite time in order to optimize the postponed product compatibility decision resulting from the presence of a long-term price contract. To illustrate, suppose two suppliers hold an auction and the firm makes an opening bid, after which the suppliers cooperate with one another on the chosen price and product compatibility. Restricting attention to the sub-game perfect equilibrium of this two-stage game: if the firm chooses a bid price, the predetermined price is used by the suppliers to optimize the auction price, which they finally use to optimize their production quantity. The firm gains no benefit from shifting from its bid price, while the supplier has no reason to threaten the retailers. From this point on, the game starts from stage 1, where both retailers decide their capacity.

Stage 1: The firm and suppliers optimize their agreed product price according to maximum profit:

max_{p₁} (b − p₁ + γ·p₂ + ε*_t)·(p₁ − c_outsource)    (7.20)

The first-order condition is:

−2p₁ + γ·p₂ + b + c_outsource + ε*_t = 0    (7.21)

Similarly, the first-order condition for the second product variant is:

−2p₂ + γ·p₁ + b + c_outsource + ε*_t = 0    (7.22)
Solving these two equations simultaneously, one obtains:

p₂ = p₁ = p_s = (c_outsource + b + ε*_t) / (2 − γ)    (7.23)

Stage 1 explores the price equilibrium between the two suppliers. Equal prices in this equation show that the suppliers are working under flexible capacity in all states, or that the suppliers produce to order and accumulate commitments for all future deliveries. There is always an equilibrium in which all the suppliers set p₁ = p₂ in all periods. The suppliers expect profit to be zero whether they cooperate at time t or not; accordingly, the game at time t is essentially a one-shot game in which the unique equilibrium has all suppliers setting p₁ = p₂. Furthermore, both buyer and supplier can take advantage of this situation, because whenever a supplier increases its selling price, the buyer's product price also increases. In the same way, the firm bargains over the supplier's price p_f in order to maximize its profit, taking the maximum margin between the product price to the end customer p_b and the outsourcing price p_f, as follows:

max_{p_f} (b − p_f + ε*_t)·(p_b − p_f)    (7.24)

The first-order condition is:

2p_f − b − p_b − ε*_t = 0    (7.25)

Solving for p_f, one obtains:

p_b = 2p_f − b − ε*_t    (7.26)

If we assume that at the final bargaining period p_f = p_s, then we have:

p_b = 2·(c_outsource + b + ε*_t)/(2 − γ) − b − ε*_t    (7.27)

max_{p_s} π_tot = π_S + π_f
= max_{p_s} (b − p_f + ε*_t)·( 2·(c_outsource + b + ε*_t)/(2 − γ) − b − ε*_t − p_f ) + (b − (1 − γ)·p_f + ε*_t)·(p_f − c_outsource)    (7.28)

s.t. (p_f − c_outsource) ≥ 0

The first-order condition is:

2γ·p_f − b − ( 2·(c_outsource + b + ε*_t)/(2 − γ) − b − ε*_t ) − b + (1 − γ)·c_outsource = 0    (7.29)

Solving for p_f, one obtains:

p_f = [ 2·(c_outsource + b + ε*_t)/(2 − γ) − ε*_t + b − (1 − γ)·c_outsource ] / (2γ)    (7.30)
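Equations (7.23), (7.27), and (7.30) chain into a computable price ladder. A minimal Python sketch under hypothetical inputs follows; the function and its argument names are ours, not the chapter's notation.

def stage1_prices(b, c_out, eps, gamma):
    # Price ladder of Eqs. (7.23), (7.27), and (7.30), as reconstructed above
    p_s = (c_out + b + eps) / (2 - gamma)                          # Eq. (7.23)
    p_b = 2 * p_s - b - eps                                        # Eq. (7.27)
    p_f = (2 * p_s - eps + b - (1 - gamma) * c_out) / (2 * gamma)  # Eq. (7.30)
    return p_s, p_b, p_f

print(stage1_prices(b=245.0, c_out=32.3, eps=0.1, gamma=0.5))   # hypothetical inputs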
Equation (7.30) describes the compromise price between the firm and the suppliers. This equation is also developed in response to Anton and Yao's (1989) argument about supplier collusion.

Stage 2: The firm and suppliers optimize the suppliers' material price. On the suppliers' side, achieving optimal profit also requires optimizing the material price, as follows. In the first stage we can find:

max_{p_f} (b − (1 − γ)·p_f)·(p_f − c_m)    (7.31)

Optimizing Eq. (7.31) with respect to p_f, the supplier material cost c_m can be found from:

b − 2·(1 − γ)·p_f + c_m·(1 − γ) = 0    (7.32)

c_m = (2·(1 − γ)·p_f − b) / (1 − γ)    (7.33)

Stage 2 shows that increasing product substitutability (γ) increases the suppliers' total costs. With regard to this result, a process commonality and pricing-quantity decision is produced below by considering long-term relationships between the firm and suppliers (Patterson et al., 1999).

4.2.4. Component commonality decision between two suppliers
In the last stage, product design is a collaboration between the firm and the suppliers, intended to maximize both the firm's and the suppliers' profit. Suppose the supplier profit function, Eq. (7.31), is used to define γ as follows:

γ = 2 − (c_outsource + b + ε*_t) / p_f    (7.34)

Equation (7.34) shows that increasing c_outsource, as well as the supplier selling price p_f, increases product substitutability (γ). With regard to this result, the supplier's selling price strategy and production quantity optimization for maximizing supplier profit are given below.

4.2.5. Selling price strategy
In this modeling, we define profitability for the buyer as in the Bertrand game, as follows:

max_{p_i} (b − p_i + γ·p_j)·(p_i − c)    (7.35)

where p_i and p_j are the selling prices of suppliers i and j, respectively, and b is the maximum available quantity for the buyer. The first-order condition is:

b − 2p_i + γ·p_j + c = 0    (7.36)
Similarly, the first-order condition for the second supplier is:

b − 2p_j + γ·p_i + c = 0    (7.37)

Solving these two equations simultaneously, one obtains:

P = p_i = p_j = (γ + 2)·(b + c) / (4 − γ²)    (7.38)
Equation (7.38) shows that a higher γ has a positive impact on the product price to the end customer. From this point on, the suppliers' product price p_f is used to find the optimum production quantity for the suppliers, as follows.

Stage 2: Quantity decision. This chapter applies a principle similar to that of Singh and Vives (1984), except that we take into account both price and quantity at infinite time in order to optimize the supply chain profitability resulting from the presence of a long-term price and production quantity contract. This stage is developed by finding the best price response to the price decision resulting from the Bertrand pricing game, as follows:

ṗ_s(t) = s·(p_s* − p_s(t));  s > 0;  p_s(0) = p_{s,0};  p_s* = p_f    (7.39)

In Eq. (7.39), s is the speed at which the quantity approaches its optimal value. This speed represents how much time both firms need to negotiate their price contract; the notation becomes insignificant when the negotiation is done at an infinite due date, where both firms are assumed to have enough time to analyze their decision. To solve Eq. (7.39), let us set up a current-value Hamiltonian:

H = q·(p_s − c) + λ_s·q̇    (7.40)

subject to Eq. (7.39) and q(t) ≥ 0, where λ is the per-unit change of the objective function (max π(q)) for a small change in q(t). In the following derivation, s and δ denote the compound factor and the discount rate:

∂H/∂p_s = p_s − λ·s·q(t) = 0    (7.41)

∂H/∂q = δ·λ₁ − λ̇₁ = λ₁·(δ + s) − q = 0    (7.42)

The steady-state quantity can be found from Eq. (7.42) as:

lim_{s→∞} q = √p_s    (7.43)

We can see that the equilibrium quantity is a concave function of price. In conclusion, quantity postponement has a significant impact on the supplier-buyer supply chain whenever both buyers agree to improve their product commonality.
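A small numeric sketch of the reconstructed Eqs. (7.38) and (7.43) follows. Note that (γ + 2)·(b + c)/(4 − γ²) simplifies to (b + c)/(2 − γ), so a higher commonality γ raises the equilibrium price, consistent with the text; the inputs below are hypothetical.

import math

def bertrand_price(b, c, gamma):
    # Symmetric equilibrium price of Eq. (7.38)
    return (gamma + 2) * (b + c) / (4 - gamma ** 2)

def steady_state_quantity(p_s):
    # Limiting quantity of Eq. (7.43): q tends to sqrt(p_s) as s grows
    return math.sqrt(p_s)

p = bertrand_price(b=141.0, c=5.0, gamma=0.6)   # hypothetical component values
print(round(p, 1), round(steady_state_quantity(p), 1))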
From Eq. (7.43), the total quantities produced by both suppliers can be summarized as:

q₁ = q₂ = q* = 2·(c + b + ε*_t) / (2 − γ)    (7.44)

Equation (7.44) gives the suppliers a solution for optimum capacity, which they use to fulfill orders according to the firm's purchase price and quantity. Furthermore, ε in Eq. (7.44) denotes the observable demand variance from the firm to the suppliers. This variance has a significant impact on the suppliers' willingness to cooperate in product design and, at the same time, pushes the firm to reduce the inaccuracy of the demand information it passes to the suppliers (Tables 5 and 6).

4.3. ATP
ATP consists of promised lead times and inventory requirements, as follows.
Table 5. Sourcing Decision for Each Component.

Product             | b   | p  | h | I | D   | C0 | Cpur | Error | ts | µ   | Cprod | Decision
Body                | 245 | 4  | 2 | 1 | 100 | 1  | 5    | 0,1   | 2  | 500 | 5     | Dual-sourcing
Office package      | 141 | 1  | 1 | 1 | 100 | 1  | 20   | 0,1   | 1  | 300 | 5     | Dual-sourcing
Interior decoration | 141 | 1  | 1 | 1 | 100 | 1  | 15   | 0,1   | 1  | 200 | 5     | Dual-sourcing
Radio               | 141 | 1  | 1 | 1 | 100 | 1  | 5    | 0,1   | 1  | 20  | 5     | Insourcing
CD player           | 141 | 1  | 1 | 1 | 100 | 1  | 5    | 0,1   | 1  | 200 | 5     | Insourcing
Speaker             | 141 | 1  | 1 | 1 | 100 | 1  | 5    | 0,1   | 1  | 100 | 5     | Insourcing
Engine              | 346 | 10 | 2 | 1 | 100 | 1  | 100  | 0,1   | 2  | 20  | 5     | Dual-sourcing
Gear box            | 346 | 10 | 2 | 1 | 100 | 1  | 70   | 0,1   | 2  | 20  | 5     | Dual-sourcing
Frame               | 346 | 10 | 2 | 1 | 100 | 1  | 70   | 0,1   | 2  | 200 | 5     | Dual-sourcing
Front axle          | 346 | 10 | 2 | 1 | 100 | 1  | 20   | 0,1   | 1  | 50  | 5     | Dual-sourcing
Rear axle           | 346 | 10 | 2 | 1 | 100 | 1  | 20   | 0,1   | 1  | 50  | 5     | Dual-sourcing
Front tire          | 346 | 1  | 1 | 1 | 200 | 1  | 5    | 0,1   | 1  | 200 | 5     | Insourcing
Front rim           | 346 | 1  | 1 | 1 | 200 | 1  | 3    | 0,1   | 1  | 200 | 5     | Insourcing
Rear tire           | 566 | 1  | 1 | 1 | 400 | 1  | 5    | 0,1   | 1  | 200 | 5     | Insourcing
Rear rim            | 566 | 1  | 1 | 1 | 400 | 1  | 3    | 0,1   | 1  | 200 | 5     | Insourcing
Power train         | 71  | 1  | 1 | 1 | 100 | 1  | 3    | 0,1   | 1  | 200 | 5     | Insourcing
Suspension          | 218 | 1  | 1 | 1 | 100 | 1  | 3    | 0,1   | 1  | 200 | 5     | Dual-sourcing
Rear wheel          | 141 | 1  | 1 | 1 | 200 | 1  | 3    | 0,1   | 1  | 200 | 5     | Dual-sourcing
Front wheel         | 141 | 1  | 1 | 1 | 200 | 1  | 3    | 0,1   | 1  | 200 | 5     | Dual-sourcing
Audio               | 283 | 1  | 1 | 1 | 100 | 1  | 3    | 0,1   | 1  | 200 | 5     | Dual-sourcing
Cabinet             | 283 | 1  | 1 | 1 | 100 | 1  | 3    | 0,1   | 1  | 200 | 5     | Dual-sourcing
Chassis             | 141 | 1  | 1 | 1 | 100 | 1  | 3    | 0,1   | 1  | 200 | 5     | Dual-sourcing
Table 6. Price and Product Platform Decision for Each Component.

Product             | γ   | pf    | c     | a     | pb     | Total profit
Body                | 0,5 | 261,1 | 32,3  | 150,0 | 372,2  | 7717
Office package      | 0,5 | 179,1 | 75,4  | 150,0 | 208,2  | 12692
Interior decoration | 0,6 | 169,6 | 15,5  | 150,0 | 189,1  | 21109
Radio               | 0,6 | 170,7 | 5,6   | 150,0 | 191,3  | 23104
CD player           | 0,5 | 140,4 | −2,1  | 150,0 | 130,7  | 3159
Speaker             | 0,6 | 170,7 | 5,5   | 150,0 | 191,3  | 23132
Engine              | 0,5 | 411,0 | 129,2 | 150,0 | 671,9  | 53241
Gear box            | 0,5 | 410,3 | 73,3  | 150,0 | 670,6  | 63193
Frame               | 0,5 | 410,5 | 72,4  | 150,0 | 670,8  | 63408
Front axle          | 0,6 | 416,3 | 22,4  | 150,0 | 682,6  | 73574
Rear axle           | 0,6 | 416,3 | 22,4  | 150,0 | 682,6  | 73574
Front Tire          | 0,6 | 418,2 | 5,5   | 150,0 | 686,2  | 127903
Front Rim           | 0,6 | 418,4 | 3,5   | 150,0 | 686,7  | 128689
Rear Tire           | 0,6 | 682,9 | 5,4   | 150,0 | 1215,6 | 382296
Rear Rim            | 0,6 | 683,1 | 3,4   | 150,0 | 1216   | 383876
Power train         | 0,6 | 85,5  | 3,9   | 150,0 | 20,9   | 4891
Suspension          | 0,6 | 263,3 | 3,6   | 150,0 | 376,5  | 43579
Rear wheel          | 0,6 | 170,9 | 3,8   | 150,0 | 191,7  | 21021
Front wheel         | 0,6 | 170,9 | 3,8   | 150,0 | 191,7  | 21021
Audio               | 0,6 | 341,6 | 3,8   | 150,0 | 533,1  | 60553
Cabinet             | 0,6 | 341,6 | 3,8   | 150,0 | 533,1  | 60553
Chassis             | 0,6 | 170,9 | 3,5   | 150,0 | 191,8  | 23534
4.3.1. Promised lead times
Promised lead times are divided into two different models, namely Make-To-Stock (MTS) and Make-To-Order (MTO) lead times, which are used by production scheduling to set up the sequence. They are obtained by applying the newsboy problem, as follows:

E(C_LT) = p·E(LT − LT*)⁺ + h·E(LT* − LT)⁺,   LT* = LT / √(p + h)    (7.45)

where LT*_{MTO/ATO} = Q/µ and LT*_{MTS} = d/s, with d the distance from the factory to the customers and s the vehicle speed. The data required for the promised lead times for truck FH1 and its components are summarized in Table 7.
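Equation (7.45), as reconstructed, reproduces the LT1,2 column of Table 7 exactly (LT = 14 for every item), which the following short sketch verifies:

import math

def promised_lead_time(lt, p, h):
    # LT* = LT / sqrt(p + h), reconstructed from Eq. (7.45)
    return lt / math.sqrt(p + h)

print(round(promised_lead_time(14, p=1, h=1), 1))    # 9.9, e.g. office package
print(round(promised_lead_time(14, p=10, h=2), 1))   # 4.0, e.g. engine
print(round(promised_lead_time(14, p=15, h=4), 1))   # 3.2, e.g. power train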
4.4. Collaborative Material Planning
Table 7. Promised Lead Times.

Product/parts       | Q   | µ   | p  | h | LT | LT1,2
Body                | 100 | 500 | 4  | 2 | 14 | 5,7
Office package      | 100 | 300 | 1  | 1 | 14 | 9,9
Interior decoration | 100 | 200 | 1  | 1 | 14 | 9,9
Radio               | 100 | 20  | 1  | 1 | 14 | 9,9
CD player           | 100 | 200 | 1  | 1 | 14 | 9,9
Speaker             | 100 | 100 | 1  | 1 | 14 | 9,9
Engine              | 100 | 20  | 10 | 2 | 14 | 4,0
Gear box            | 100 | 20  | 10 | 2 | 14 | 4,0
Frame               | 100 | 200 | 10 | 2 | 14 | 4,0
Front axle          | 100 | 50  | 10 | 2 | 14 | 4,0
Rear axle           | 100 | 50  | 10 | 2 | 14 | 4,0
Front Tire          | 200 | 200 | 1  | 1 | 14 | 9,9
Front Rim           | 200 | 200 | 1  | 1 | 14 | 9,9
Rear Tire           | 400 | 200 | 1  | 1 | 14 | 9,9
Rear Rim            | 400 | 200 | 1  | 1 | 14 | 9,9
Truck FH1           | 50  | 200 | 1  | 1 | 14 | 9,9
Truck FH2           | 50  | 200 | 15 | 4 | 14 | 3,2
Power train         | 100 | 200 | 15 | 4 | 14 | 3,2
Suspension          | 100 | 200 | 15 | 4 | 14 | 3,2
Rear wheel          | 200 | 200 | 15 | 4 | 14 | 3,2
Front wheel         | 200 | 200 | 15 | 4 | 14 | 3,2
Audio               | 100 | 200 | 15 | 4 | 14 | 3,2
Cabinet             | 100 | 200 | 15 | 4 | 14 | 3,2
Chassis             | 100 | 200 | 15 | 4 | 14 | 3,2
In contrast to the traditional approach of operations management tools, where material requirements follow a top-down hierarchical approach that starts with the Master Production Schedule (MPS) and is then detailed into Material Requirement Planning (MRP) while ignoring capacity constraints and assuming fixed lead times, this chapter replaces the MPS and MRP functions by applying collaborative material planning (see Fig. 6), consisting of supplier and buyer integration and including a system dynamics approach. A feedback control mechanism is used to maintain the optimal condition, represented as two interacting tanks, as follows.

Figure 16 depicts an interaction between buyer and supplier. The model modifies Holweg et al.'s (2005) synchronized supply model by replacing the inventory level with the product substitutability degree (γ), thereby considering product commonality. Notably, the model incorporates the component residence times in the supplier's (A₁) and manufacturer's (A_R) warehouses, which gives the warehouse manager information about the maximum time for which inventory is kept. The Tank R (buyer) production rate depends on the Tank 2 (supplier) production rate (and vice versa) as a result of the interconnection of both production rates with the production quantity q₁.
Figure 16. Feedback control application and built-to-order supply chains. (Supplier tank: inflow q₂, residence time A₁, level γ₁, outlet resistance R₁ with flow q₁; manufacturer tank: residence time A_R, level γ_R, outlet resistance R_R with flow Q; demand λ.)
This analogy is taken from fluid dynamics, which states that a longer fluid transfer time is caused by high transportation hindrance (R) and the production rate difference (µ₁ − µ_R). If we assume that the total stock is the tanks' volume and product substitutability γ is their level, then A_R and A₁ can be found by dividing the manufacturer's total stock (TS_R = SS_R + CS_R) by its product commonality (γ), or:

TS_R = SS_R + CS_R = z·σ_R·( 1/(Q − D) + Q/(2·(Q − D)) )    (7.46)

TS₁ = SS₁ + CS₁ = z·σ₁·( 1/(q − Q) + q/(2·(q − Q)) )    (7.47)

A_R = z·σ_R·( 1/(Q − D) + Q/(2·(Q − D)) ) / γ_R    (7.48)

A₁ = z·σ₁·( 1/(q − Q) + q/(2·(q − Q)) ) / γ₁    (7.49)
where z is the end-customer service level, σ₁ is the supplier delivery standard deviation, and σ_R is the manufacturer delivery standard deviation. The promised lead times of the manufacturer and the supplier are formulated as:

R_R = L_R = 1/(Q − D)    (7.50)

R₁ = L₁ = 1/(q − Q)    (7.51)

where L_R and L₁ represent the manufacturer and supplier lead times for in-house production; in the case where the manufacturer outsources the manufacturing process, Q and q represent the manufacturer's assembly capacity and the supplier's production capacity.
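Under the reconstructed Eqs. (7.46), (7.48), and (7.50), the manufacturer's total stock, warehouse residence time, and lead-time resistance can be evaluated directly. The sketch below uses values in the spirit of Tables 8 and 9 and is illustrative only; it does not claim to reproduce those tables.

def manufacturer_stock(z, sigma_r, Q, D, gamma_r):
    # Total stock TS_R of Eq. (7.46): safety plus cycle stock
    ts_r = z * sigma_r * (1 / (Q - D) + Q / (2 * (Q - D)))
    a_r = ts_r / gamma_r          # residence time A_R, Eq. (7.48)
    r_r = 1 / (Q - D)             # lead-time resistance R_R = L_R, Eq. (7.50)
    return ts_r, a_r, r_r

# Illustrative values echoing Tables 8-9: z = 1.69, sigma = 10, Q = 141.4, D = 100
print(manufacturer_stock(1.69, 10, 141.4, 100, gamma_r=0.4))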
First, an open-loop interacting system is discussed before the closed-loop built-to-order supply chain:

µ_R(s)/µ₁(s) = ( R_R/(R₁ + R_R) ) / ( (R₁·R_R·A_R/(R₁ + R_R))·s + 1 )    (7.52)

D(s)/Q(s) = 1/K_R,   Q(s)/q(s) = R_R / (τ²·s² + 2ζτ·s + 1)    (7.53)
K_R in Eq. (7.53) denotes the manufacturer's response to customer demands: the higher the value, the higher the manufacturer's responsiveness. The time constant (τ) represents the supplier's responsiveness to customer orders. ζ in Eq. (7.53) is the decoupling point signal, which indicates the customer order penetration point, that is, assemble-to-order (ATO) or MTS. Looking at the ζ value helps us detect lead time variability: lead times tend to be shorter when ζ < 1, ζ > 1 yields a sluggish response, and the fastest response without overshoot is obtained for the critically damped case (ζ = 1). In general, ζ < 1 indicates that the manufacturer is operating under MTS, while ζ > 1 signals ATO. According to the control theory of interacting systems, 2ζτ and τ² can be formulated as:

2ζτ = R_R·A_R + R₁·A₁ + R_R·A₁    (7.54)

τ² = R₁·R_R·A₁·A_R    (7.55)
Equation (7.55) represents an open loop without information feedback, so that the supplier only has access to the buyer's inventory without considering customer demand. The open-loop control can be drawn as in Fig. 17. From this point on, a closed-loop system can be formulated by joining Eqs. (7.50)-(7.55):

q(s)/q₂(s) = R_R / ( K_R·(τ²·s² + 2ζτ·s + 1) )    (7.56)
Figure 17. Open loop: two interacting processes. (The same two-tank structure as Fig. 16, with q₂, A₁, γ₁, R₁, q feeding A_R, γ_R, R_R, Q, but without the feedback path.)
Figure 18. Closed feedback control transfer function. (The plant R_R/[K_R·(τ²s² + 2ζτs + 1)] maps q₂ to Q; the error Q_set − Q is fed back through the gain K_C.)
Q(s)/q(s) = K_R = G_R    (7.57)
with the closed-loop feedback control shown in Fig. 18. K_C in Fig. 18 represents the information visibility between the manufacturer and the supplier. The larger the gain, the more the supplier delivery quantity will change for a given change in demand information; for example, if the gain is 1, a demand information change of 10 percent will change the supplier delivery quantity by 10 percent. The K_C decision is important to the interacting system because it simultaneously affects the supply chain inventory (buyer and supplier) and the order lead times. K_C depicts process visibility from manufacturer to supplier, so a higher value means higher visibility. Information visibility (K_C) needs to be adjusted according to the product commonality requirements (see Sec. 4.2.4) in order to fulfill the lead time requirement. Finally, Fig. 18 can be used to construct the time-domain dynamics of synchronized supply by finding its open-loop transfer function as follows:

Q(s)/Q(s)_set = [ K_C·R_R/( K_R·(τ²·s² + 2ζτ·s + 1) ) ] / [ 1 + K_C·R_R/( K_R·(τ²·s² + 2ζτ·s + 1) ) ]
= K_C·R_R / ( K_C·R_R + K_R·(τ²·s² + 2ζτ·s + 1) )    (7.58)
so that the roots of the denominator are:

s_{1,2} = [ −2ζτ·K_R/(K_R·τ²) ± √( (2ζτ·K_R/(K_R·τ²))² − 4·(K_R + K_C·R_R)/(K_R·τ²) ) ] / 2

Laplace-domain dynamics with a step disturbance are applied in order to represent a sudden demand change, which can be inserted directly into Eq. (7.58)
and inverted to obtain the inversion of the Laplace transform:

Q(s)/Q(s)_set = ( K_C·R_R/(K_R·τ²) ) / { s·(s + a)·(s + b) }    (7.59)

Simplifying Eq. (7.59), we define:

a = [ 2ζτ·K_R/(K_R·τ²) + √( (2ζτ·K_R/(K_R·τ²))² − 4·(K_R + K_C·R_R)/(K_R·τ²) ) ] / 2

b = [ 2ζτ·K_R/(K_R·τ²) − √( (2ζτ·K_R/(K_R·τ²))² − 4·(K_R + K_C·R_R)/(K_R·τ²) ) ] / 2

Finally:

Q(s)/Q(s)_set = 1/( s·(s + a)·(s + b) )  →  Q(t)/Q(t)_set = 1 − (e^(−a·t) − e^(−b·t))/(b − a)    (7.60)

Equation (7.60) presents our process model as a closed-loop feedback control. It describes the role of IT in demand management by capturing the information exchange between the manufacturer and the supplier.

4.4.1. Optimum K_C value
In this chapter, the optimum K_C value can be found by applying a numerical method, as follows:

LT_transient = ( Q* − Σ_{t=1}^{∞} [1 − (e^(−a·t) − e^(−b·t))/(b − a)] ) / D = Q_transient / D    (7.61)

where Q_transient represents the production capacity in the ramp-up period. Furthermore, lead times at the normal capacity level can be calculated as:

LT_normal = ( Q* − ( Q* − Σ_{t=1}^{∞} [1 − (e^(−a·t) − e^(−b·t))/(b − a)] ) ) / D    (7.62)
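Equations (7.60) and (7.61) lend themselves to direct numerical evaluation, as the sketch below shows. The roots a and b, through which K_C enters, are hypothetical here, and a truncated horizon stands in for the infinite sum; the tuning rule follows in the next paragraph.

import math

def step_response(t, a, b):
    # Q(t)/Q_set of Eq. (7.60)
    return 1 - (math.exp(-a * t) - math.exp(-b * t)) / (b - a)

def transient_lead_time(q_star, demand, a, b, horizon=200):
    # Numerical form of Eq. (7.61); the horizon truncates the infinite sum
    served = sum(step_response(t, a, b) for t in range(1, horizon + 1))
    return (q_star - served) / demand

a, b = 0.8, 0.3   # hypothetical denominator roots; K_C enters through them
print(round(transient_lead_time(q_star=200, demand=100, a=a, b=b), 2))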
The K_C value can be adjusted so that LT_transient + LT_normal = LT*. From this result, the suppliers can decide how much they must supply to the manufacturer according to the K_C value.

Below is one example for the power train, which is manufactured with an ATO strategy. In the previous section's example, the data gave a power train lead time LT* of 14 time units and an LT_normal of 3,21 time units; hence LT_transient is 10,79 time units. For this section, the only new information required for the simulation is the supplier capacity and the manufacturer (K_R) and supplier (K_C) responsiveness, which are tried iteratively in order to meet the transient lead time requirement. The simulation result is depicted in Fig. 19. From the simulation, the K_C, K_R, and γ values are 0,1; 0,1; and 0,4 (independent variables). It is also found that the optimum supplier capacity is 200 units (see Table 8). Furthermore, the results for the other components are presented in Table 8. The inventory requirement can be established from Eq. (7.46), and the results are exhibited in Table 9: the inventory requirement is less than the normal requirement whenever we apply an (s, Q) or (s, S) policy.

4.5. Production Planning and Scheduling
In this stage, production planning and scheduling extracts information from demand and master planning, such as the BOM, order and component lead times, and the inventory level for each component, in order to produce a detailed operational schedule. This approach has been applied in other APS software, for instance SAP APO.
Figure 19. Power train order fulfillment dynamics. (The production level rises from about 98,6 toward 100 over roughly 16 time units.)
Table 8. Collaboration Between Supplier and Manufacturer.

Product             | D   | Q     | q   | Kc  | γ
Body                | 100 | 141,4 | 190 | 1   | 0,4
Office package      | 100 | 141,4 | 190 | 1   | 0,4
Interior decoration | 100 | 141,4 | 190 | 1   | 0,4
Radio               | 100 | 141,4 | 190 | 1   | 0,4
CD player           | 100 | 141,4 | 190 | 1   | 0,4
Speaker             | 100 | 141,4 | 190 | 1   | 0,4
Engine              | 100 | 141,4 | 160 | 1   | 0,4
Gear box            | 100 | 141,4 | 160 | 1   | 0,4
Frame               | 100 | 141,4 | 160 | 1   | 0,4
Front axle          | 100 | 141,4 | 160 | 1   | 0,4
Rear axle           | 100 | 141,4 | 160 | 1   | 0,4
Front tire          | 200 | 141,4 | 205 | 0,2 | 0,4
Front rim           | 200 | 141,4 | 205 | 0,2 | 0,4
Rear tire           | 400 | 141,4 | 405 | 1   | 0,4
Rear rim            | 400 | 141,4 | 405 | 1   | 0,4
Power train         | 100 | 141,4 | 200 | 0,1 | 0,4
Suspension          | 100 | 141,4 | 200 | 0,1 | 0,4
Rear wheel          | 200 | 141,4 | 205 | 1   | 0,4
Front wheel         | 200 | 141,4 | 205 | 1   | 0,4
Audio               | 100 | 141,4 | 200 | 0,1 | 0,4
Cabinet             | 100 | 141,4 | 200 | 0,1 | 0,4
Chassis             | 100 | 141,4 | 200 | 0,1 | 0,4
The difference is that the application of production reconfiguration to operational scheduling is supported by the ASDN software, which measures lead times, inventory value, and profit. The procedure is explored further in Sec. 4.5.1.

4.5.1. Production scheduling
In order to sequence the tasks of a job shop problem (JSP) on a number of machines related to the technological machine order of jobs, a traveling salesman problem formulation is proposed, considering that it cannot produce illegal sets of operation sequences (infeasible symbolic solutions). The problem can be formulated as in Eqs. (7.64)-(7.66) below:

Min t_n    (7.64)

subject to:

t_j − t_i ≥ d_i,   (i, j) ∈ O    (7.65)

t_j − t_i ≥ d_i,   (i, j) ∈ M    (7.66)
Table 9. Safety and Cycle Stock Requirement.

Product             | Z    | σ1 | Q     | q   | SS1 | CS1
Body                | 1,69 | 10 | 141,4 | 190 | 2   | 2
Office package      | 1,69 | 10 | 141,4 | 190 | 2   | 2
Interior decoration | 1,69 | 10 | 141,4 | 190 | 2   | 2
Radio               | 1,69 | 10 | 141,4 | 190 | 2   | 2
CD player           | 1,69 | 10 | 141,4 | 190 | 2   | 2
Speaker             | 1,69 | 10 | 141,4 | 190 | 2   | 2
Engine              | 1,69 | 10 | 141,4 | 160 | 4   | 4
Gear box            | 1,69 | 10 | 141,4 | 160 | 4   | 4
Frame               | 1,69 | 10 | 141,4 | 160 | 4   | 4
Front axle          | 1,69 | 10 | 141,4 | 160 | 4   | 4
Rear axle           | 1,69 | 10 | 141,4 | 160 | 4   | 4
Front tire          | 1,69 | 20 | 141,4 | 205 | 4   | 2
Front rim           | 1,69 | 20 | 141,4 | 205 | 4   | 2
Rear tire           | 1,69 | 40 | 141,4 | 405 | 4   | 1
Rear rim            | 1,69 | 40 | 141,4 | 405 | 4   | 1
Power train         | 1,69 | 10 | 141,4 | 200 | 2   | 2
Suspension          | 1,69 | 10 | 141,4 | 200 | 2   | 2
Rear wheel          | 1,69 | 20 | 141,4 | 205 | 4   | 2
Front wheel         | 1,69 | 20 | 141,4 | 205 | 4   | 2
Audio               | 1,69 | 10 | 141,4 | 200 | 2   | 2
Cabinet             | 1,69 | 10 | 141,4 | 200 | 2   | 2
Chassis             | 1,69 | 10 | 141,4 | 200 | 2   | 2
where t_n is the total makespan of the three operations within three machines for the three components. t_j and t_i represent the precedent operations j and i, whose end and start times cannot overlap, Eq. (7.65). Furthermore, the start time of operation j cannot overlap the start time of operation i on the same machine M, Eq. (7.66). This problem is solved by applying the MS Excel add-in facility for the optimal sequencing problem, as follows.

Suppose we intend to schedule an audio assembly where five activities are distributed among the radio, speaker, and CD player (the total lead time is 14 time units; see Table 7). The CD player is produced following ATO (steps 1 to 5) and the radio and speaker following an MTS manufacturing strategy (steps 4 to 5) (see Table 4). The detailed manufacturing times and sequence are shown in Table 10, and the Excel representation of this schedule optimization is depicted in Table 11.
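Before turning to the Excel model of Tables 10 and 11, the TSP view of sequencing can be sketched in a few lines of Python: jobs play the role of cities, sequence-dependent set-up times play the role of distances, and the best tour minimizes the completion time. The processing times follow Table 10's CD player route; the set-up values are hypothetical.

from itertools import permutations

# Processing times of the five audio operations (CD player route J1..J5, Table 10)
process = {"J1": 4, "J2": 5, "J3": 1, "J4": 2, "J5": 1}
# Hypothetical sequence-dependent set-up times playing the role of TSP distances
setup = {(i, j): 1 if i < j else 2 for i in process for j in process if i != j}

def completion_time(order):
    # Total processing plus set-up incurred along the sequence
    total = process[order[0]]
    for i, j in zip(order, order[1:]):
        total += setup[(i, j)] + process[j]
    return total

best = min(permutations(process), key=completion_time)
print(best, completion_time(best))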
Table 10. Detailed Audio Manufacturing Machining Time.

Components | Op 1 | Op 2 | Op 3 | Op 4 | Op 5
Radio      | -    | -    | -    | 2    | 4
CD player  | 4    | 5    | 1    | 2    | 1
Speaker    | -    | -    | -    | 3    | 2
Table 11. Audio Scheduling Data (snapshot of the MS Excel add-in model).

Optimize: Seq_1 | Objective: Min | Search Method: Random | Problem: TSP Algorithm | Objective value: 13 | Feasible: TRUE
(The sheet also carries next-job and sequence pointers for Start, J1-J5, and End that encode the optimized order.)

Job Data (process times) | Start | J1 | J2 | J3 | J4 | J5 | End
CD Player                | 0     | 4  | 5  | 1  | 2  | 1  | 0
Speaker                  | 0     | 0  | 0  | 0  | 3  | 2  | 0
Radio                    | 0     | 0  | 0  | 0  | 2  | 4  | 0
Release Time             | 0     | 0  | 0  | 0  | 0  | 0  | 0
Table 11 shows the MS Excel add-in snapshot of job-shop scheduling, which is applied to optimize the audio manufacturing schedule. There are five steps in the manufacturing process, where J1, J2, and J3 represent steps for the CD player because it must be produced as ATO. J4 and J5 denote assembly processes for the audio package. The speaker and radio do not follow J1-J3 because they are managed as MTS. The results are summarized in Fig. 20.

Figure 20 exhibits the result of job-shop scheduling by applying the Travelling Salesman Problem (TSP): the total makespan is reduced from 16 time units (the longest processing time from J1 to J5) to 12 time units. This result implies that the supply chain, considering the total order lead times, now has a chance to be more flexible, because it has an allowance of at least 4 time units (16 − 12). From a value chain perspective, the result allows supply chains to be more competitive by reducing the likelihood of delivery lateness, leaving some slack for uncertain events such as machine downtime and changeovers. This scheduling optimization also enables the next planning stage (distribution and transport planning) to optimize the supply chain structure by reducing the order lead times. Finally, by applying the same procedure, we can build detailed schedules for the other components. In any case, assembly and fabrication scheduling is focused on internal factory optimization, which needs to be fed into distribution and transport planning in order to optimize the total lead times, as below.
Figure 20. Scheduling Gantt chart (time axis 0–13; jobs J1–J5 on the CD player, speaker, and radio lines).

Figure 21. Distributions and transportation planning (ASDN structure generated from APS data).
4.6. Distribution and Transport Planning

Distribution and transportation planning is used to optimize order delivery activities from suppliers to factories and from factories to end users. This section utilizes demand, master, and production planning and scheduling to develop the ASDN by optimizing distribution centers and transportation planning, as exhibited in Figs. 21 and 22.
Figure 22. Financial analysis.
From the ASDN simulation, Fig. 21 depicts the distribution and transport planning of a truck manufacturer situated in the United Kingdom that outsources components and activities across the globe. Furthermore, inventory turns, total lead times, and holding costs (cycle stock and safety stock) are also explored by presenting them in a financial analysis (Fig. 22).

5. Practical Implications

The concept of value chain re-engineering is shown by giving emphasis to information availability across the supply chain, with the company adding value in each step of order processing. It ensures that customers have accurate information about the available product configuration and allows them to configure not only the product but also the lead times. This mechanism can be applied because the product structure database and the ASDN are linked by the proposed APS (see Fig. 12). The APS in this module gives options for a push-pull manufacturing strategy (Sec. 4.2.1) so that it enables the promise of order lead times (Fig. 22) as well as the optimization of the aggregate inventory level (Table 8). The ASDN in this case measures the value added of the APS steps (demand planning, master planning, and production planning and scheduling) through financial analysis (Fig. 22). The implication of the ASDN application is that supply chains can reconfigure their networks or reschedule production within the manufacturer's plants until the required performance target is achieved. Related to value chain re-engineering, this APS model changes the one-directional value chain to a two-way concept (see Fig. 6) by producing collaboration with both customer and supplier involvement. This collaboration is shown by incorporating suppliers into product platform design (Sec. 4.2.4) and demand forecasting (Sec. 4.2.3). The APS supports the integration process effectively. The ATP module is also embedded into master planning, where it receives information from the product configuration database (customer side), which can be used to select
the sourcing strategy; thus it also reduces delivery uncertainty because of supplier commitment. Last but not least, embedding production reconfiguration into distribution and transport planning is a good idea, since it has two advantages. The first advantage is that the customer side can reconfigure the product structure by considering lead time; this step is possible since the ASDN measures the total lead time in the final simulation. The second advantage is that manufacturers and suppliers can reconfigure their production processes by optimizing the manufacturing schedule and reconfiguring the push-pull manufacturing strategy. This is the main advantage of value chain re-engineering.
6. Conclusion and Future Research

This chapter has discussed value chain re-engineering, represented by a new APS model. We may summarize the results derived from the model as follows.

1. Supply chain collaboration needs to be addressed in the value chain discussion. The value chain cannot be managed solely based on optimization in one direction; in fact, both the supply and demand sides must be considered equally.
2. Technological support and procurement activity need to be involved in the main activities of the value chain. Procurement should have a strategic position in business activities. Furthermore, in mass customized products, a short product life cycle forces the supply chain to be agile and reconfigurable.
3. The first limitation of this APS is that the model does not incorporate a customer and service department interface, because of the assumption that the sales department is replaced by e-marketing. On the other hand, this situation has the advantage of offering a new future research direction with regard to the possibility of shrinking the organization by diminishing the sales department (see Fig. 6) and moving the firm's sales toward mass personalization.
4. The second limitation is that there are no solutions to support the sales function. It is necessary to conduct future research on the personalization of the sales function by employing information technology to give added value to the APS.
References

Anton, JJ and DA Yao (1989). Split awards, procurement and innovation. RAND Journal of Economics, 20(4), 538–551.
Buxmann, P and W König (2000). Inter-organisational Co-operation with SAP Systems: Perspectives on Logistics and Service Management. Berlin: Springer-Verlag.
Chen, K and P Ji (2007). A mixed integer programming model for advanced planning and scheduling. European Journal of Operational Research, 184, 512–522.
Davis, SM (1987). Future Perfect. Reading, MA: Addison-Wesley.
Du, X, J Jiao and M Tseng (2003). Identifying customer need patterns for customization and personalization, 14(5), 1–25.
Elmaghraby, WJ (2000). Supply contract competition and sourcing policies. Manufacturing and Service Operations Management, 2(4), 350–371.
Fleischmann, B, H Meyr and M Wagner (2002). Advanced planning. In Supply Chain Management and Advanced Planning: Concepts, Models, Software and Case Studies, H Stadtler and C Kilger (eds.), 71–95, 2nd Edn. Berlin: Springer-Verlag.
Forker, LB and P Stannack (2000). Cooperation versus competition: Do buyers and suppliers really see eye-to-eye? European Journal of Purchasing and Supply Management, 6, 31–40.
Gibbons, R (1992). A Primer in Game Theory. New York: Harvester Wheatsheaf.
Holweg, M, S Disney, J Holmström and J Småros (2005). Supply chain collaboration: Making sense of the strategy continuum. European Management Journal, 23(2), 170–181.
Kilger, C and L Schneeweiss (2002). Demand fulfillment and ATP. In Supply Chain Management and Advanced Planning: Concepts, Models, Software and Case Studies, H Stadtler and C Kilger (eds.), 161–171, 2nd Edn. Berlin: Springer-Verlag.
Kumar, A (2008). From mass customization to mass personalization: A strategic transformation. International Journal of Flexible Manufacturing Systems, 19, 533–547.
Meyr, H, J Rohde, L Schneeweiss and M Wagner (2002). Structure of advanced planning systems. In Supply Chain Management and Advanced Planning: Concepts, Models, Software and Case Studies, H Stadtler and C Kilger (eds.), 99–104, 2nd Edn. Berlin: Springer-Verlag.
Patterson, JL, LB Forker and JB Hanna (1999). Supply chain consortia: The rise of transcendental buyer-supplier relationships. European Journal of Purchasing and Supply Management, 5, 85–93.
Pine, J (1993). Mass Customization. Boston, MA: Harvard Business School Press.
Porter, M (1985). Competitive Advantage: Creating and Sustaining Superior Performance. New York: The Free Press.
Singh, N and X Vives (1984). Price and quantity competition in a differentiated duopoly. RAND Journal of Economics, 15, 546–554.
Stadtler, H (2002a). Supply chain management — an overview. In Supply Chain Management and Advanced Planning: Concepts, Models, Software and Case Studies, H Stadtler and C Kilger (eds.), 7–29, 2nd Edn. Berlin: Springer-Verlag.
Stadtler, H (2002b). Production planning and scheduling. In Supply Chain Management and Advanced Planning: Concepts, Models, Software and Case Studies, H Stadtler and C Kilger (eds.), 177–195, 2nd Edn. Berlin: Springer-Verlag.
Stadtler, H (2005). Supply chain management and advanced planning — basics, overview and challenges. European Journal of Operational Research, 163, 575–588.
Van Eck, M (April 2003). Advanced planning and scheduling. BWI Paper, p. 3. http://obp.math.vu.nl/logistics/papers/vaneck.doc [Accessed: 2 January 2008].
Vesanen, J (2007). Commentary: What is personalization? A conceptual framework. European Journal of Marketing, 41(5/6), 409–418.
Bibliographical Notes

Yohanes Kristianto obtained an undergraduate degree in Chemical Engineering and a master's degree in Industrial Engineering from Sepuluh Nopember Institute of Technology, Surabaya, Indonesia. Prior to his academic career, he worked in the quality function of a multinational company. He is now a doctoral researcher at the Logistics Systems Research Group, Department of Production, University of Vaasa, Finland. His research interests are in the areas of supply chain strategy/management and production/operations management. His papers have been published in several international journals.

Petri Helo is a research professor and the head of the Logistics Systems Research Group, Department of Production, University of Vaasa, Finland. His research addresses the management of logistics processes in supply demand networks in the electronics, machine building, and food industries. He has published many papers in international journals.

Mian M. Ajmal is a doctoral researcher at the Logistics Systems Research Group, Department of Production, University of Vaasa, Finland. He holds an MBA. He has been involved in several research projects in the last few years. His research interests pertain to project management, supply chain management, and knowledge management. He has published articles in international journals and conference proceedings in these areas.
Chapter 8
Cultural Auditing in the Age of Business: Multicultural Logistics Management and Information Systems

ALBERTO G CANEN∗ and ANA CANEN†
∗Department of Production Engineering, COPPE/Federal University of Rio de Janeiro, Caixa Postal 68507, 21941-972 Rio de Janeiro RJ, Brazil
[email protected]
†Department of Educational Studies, Federal University of Rio de Janeiro, Av. Pasteur 250 (Fundos), 22290-240 Rio de Janeiro RJ, Brazil
[email protected]
The present chapter seeks to understand in what ways cultural auditing could represent a process whereby an evaluation could take place that could help improve multiculturalism in organizations, logistics management, and information systems. It suggests that information and business systems could benefit from considering cultural diversity and its impact on organizational success. In fact, cultural auditing has been pointed out as a way to neutralize cultural conflicts. In order to develop the argument, this chapter first discusses the concept of cultural auditing; it analyzes the extent to which the literature concerning cultural auditing takes multicultural concerns on board; it then gauges if and how cultural auditing has been perceived, through oral history, in an auditing organization in Brazil. It concludes by pinpointing possible ideas and ways forward for cultural auditing in a multicultural context. Keywords: Cultural auditing; logistics management; information systems; multicultural organizations; multiculturalism; oral history.
1. Introduction

Organizations as multicultural entities (Canen and Canen, 2005) should respond to cultural diversity in a contemporary, increasingly plural world. Authors like Cox Jr. (2001) argue that managing diversity means understanding "its effects and implementing behaviors, working practices and policies that respond to them in effective way" (p. 4). In fact, understanding the cultural points of view of customers and partners may represent the difference between success and failure. On the other
hand, considering cultural diversity also at the level of the organization should help build a climate open to cultural plurality, transparency, and trust, resulting in the flourishing of the organization itself. Canen and Canen (2005) stress that a multicultural organization can be considered as one that values cultural diversity and fosters the collective construction of a strong organizational cultural identity. At the same time, Canen and Canen (2008a) also argue that leadership is crucial to ensure a multicultural dimension in organizational structures and practices. They point out the dangers of a monocultural leadership in disrupting organizational performance and even being conducive to cases of bullying at the workplace. Taking into account the interconnectedness of business and information systems, authors like Gillam and Oppenheim (2006) contend that virtual teams — understood as groups of people who work across time, space, and often organizational boundaries using interactive technology — need effective communication strategies in order to be successful and tap into their potential for generating and sharing management knowledge. The same authors contend that "intercultural teams are bound to have definite effects on managerial and leadership styles" (p. 167), stressing the need for cross-cultural management associated with information and business systems. They claim that even though information technology and the advent of the web have revolutionized the business model, there is a strong need to analyze "managerial and cultural issues (all too often ignored) that arise from their use" (p. 161). In the same vein, Weideman and Kritzinger (2003, p. 76) stretch the argument further by stating that "cultural diversity and technology are interrelated in today's workplace . . . cultural dividing factors . . . are claimed to be the reason for the lacking of skills required to use the latest technology." The referred authors illustrate their argument with a study in which preferences for different patterns of technological interfaces were largely associated with plural identity markers along the lines of gender and race, among others. These views seem to support the argument that IT and business management should take cultural diversity into account in order to represent real assets to the organization. They strongly suggest that it is not enough to merely emphasize technological progress and capabilities if the culturally plural human agents are not taken on board. In line with that, mechanisms of cultural auditing that could evaluate the weight of cultural variables in organizational strategies and performance should be central. Building on these ideas, the present chapter — which is an extended version of Canen and Canen (2008b) — focuses on cultural auditing, taking into account the contemporary era of business and IT. It suggests that much has been written on multiculturalism, but much less on the meaning of cultural auditing as a way to evaluate those issues in organizations. In line with that, a mechanism of cultural auditing should be in place in order to ensure logistics management and organizational evaluation and planning in a multicultural perspective. In a globalized and technological era, cultural auditing should give objectivity and the right weight to this dimension.
In order to develop the argument, cultural auditing is discussed from a theoretical perspective and from oral history in an auditing organization in Brazil. We also pinpoint a possible framework for cultural auditing in a multicultural context. It should be borne in mind that the present chapter is part of the authors' research agenda. Therefore, we do not claim that the ideas presented here are to be generalized; on the contrary, they are open to discussion.

2. Multicultural Organizations, Evaluation, and Cultural Auditing: Transforming Ideals into Practice

Multiculturalism can be considered as a set of answers to cultural diversity, so as to build on it for organizational success and for the challenging of prejudices. The preparation of managers in a multicultural perspective is pointed out as crucial (Canen and Canen, 2001), the role of training and education being vital in the process. In the context of schools, Brown and Conrard (2007) posit that multicultural leadership has to do with the development of cosmopolitan personalities. At the same time, the dangers of monocultural leaders condoning professional harassment and bullying in the workplace have been pointed out (Canen and Canen, 2008a), with serious consequences for organizational performance. However, even though many organizational leaders may truly believe their organizations respond adequately to cultural diversity, there seems to be a powerful need to evaluate the extent to which that belief is true. Cultural auditing could represent a possible evaluation tool, even more important than financial auditing, particularly in the challenging processes of fusions, as highlighted by Canen and Canen (2002). As claimed by Radler (1999), 80% of mergers/acquisitions fail due to cultural incompatibilities. Carleton (1999) suggests that the objective of cultural auditing is to elaborate a plan to manage organizational cultural differences, mainly in the process of fusions. As pointed out by Castellano and Lightle (2005), "a cultural audit would provide a means for assessing the tone at the top and the attitude toward internal controls and ethical decision-making" (p. 10). Fletcher and Jones (1992) make the point that cultural auditing should not evaluate organizational culture according to any preconception. Rather, it should be supported by the norms and rituals that have a decisive influence on the global ability of the organization to deal with its changes. In the same vein, Bardoel and Sohal (1999) point out the role of leadership, stressing that the input of senior management and their involvement is crucial for the success of cultural change. We suggest that cultural auditing in a multicultural context should comprise the factors pointed out in Fig. 1.
Figure 1. Cultural auditing in a multicultural perspective (nested circles: multicultural leadership; organizational life; solutions to cultural conflicts; respect to cultural diversity; shared vision).
As can be noted in Fig. 1, the core of cultural auditing is multicultural leadership (the inner circle), which in turn has a strong impact on the monitoring of logistics management and organizational life, in a way that shows to what extent solutions are given to cultural conflicts. This leads to the monitoring of respect for cultural diversity and, finally, to understanding the extent to which a shared vision of organizational cultural identity based on mutual respect, motivation, cohesion, and the understanding of cultural differences is in place. Wright (1986), citing Campbell, considers organizational culture as the set of "attitudes, values, beliefs, and expectations of those that work in an organization" (p. 28). Fletcher and Jones (1992) classify organizations according to their culture, stating there is "no ideal culture as different cultures are appropriate in different contexts" (p. 31). In that vein, organizational cultural identities are to be categorized in order to be better understood. Asma Abdullah, interviewed by Schermerhorn (1994), contends that typical organizational problems center on "different perceptions of how work should be done. Very often a new expatriate is not quite sure how to get the most from his national subordinates" (p. 49), which reinforces the need to understand culturally plural views. Therefore, cultural auditing is a process that should go beyond cultural organizational changes and incorporate an ongoing perspective that highlights their cultural elements. In line with all that, it should be emphasized that logistics management in a multicultural perspective should be part of cultural auditing. In fact, logistics costs represent a high percentage of a country's GDP. Logistics and cultural diversity should go hand in hand for organizational success (Canen and Canen, 1999), and should therefore be incorporated into cultural auditing in a multicultural perspective.

3. Cultural Auditing in a Real Life Auditing Company

Based on the above, the present study sought to understand how cultural auditing has been perceived by organizations. In order to do that, a case study has been developed
in one of the most prestigious auditing companies, with a site in Brazil. The focus of the present chapter is on the interview carried out with a top executive of that company. The interview sought to understand to what extent cultural auditing has been perceived as relevant. Qualitative methodology was chosen because it provides opportunities for gauging the cultural perceptions that inform the everyday life of an institution. Therefore, by focusing on oral history as gleaned from the interview with a top executive of the company, the study should provide a glimpse of the perceptions and cultural views that underlie the organizational identity. The interviewee explained that auditing companies have existed since the 18th century. The Dutch were the first ones, their country being small but with a strong international presence, including in Brazil. The interviewee posited that the profession of auditing has always been rather international and is now becoming more formalized, "since everything today should be regulated." The interviewee also made a distinction between "necessarily regulated activities," among which he cited banks and insurance companies, and the others, that is, companies that do not develop "obligatory regulated activities" but wish to be audited, without any imposition to do so. He cited the example of Company N, which undertakes its auditing process without any regulating imposition from outside. The latter are audited because "they want to, which means that they realize they need it, something must have changed within them" (from the interview carried out in October 2007). From this set of answers, even though the expression "organizational culture" has not been mentioned, it seems likely that it is taken into account in the cases of voluntary auditing, particularly when the interviewee expresses that "something must have changed within them." This indirectly seems to raise the possibility of cultural auditing in the sense defended by Bardoel and Sohal (1999), who emphasize its role in assessing the impact of cultural change in the organization. It is interesting to note that, according to the interviewee, there are four large auditing firms in the world, referred to as "the big four." The fact that Company E failed seems to have been a big blow. Even though the interviewee did not explicitly mention Company E's culture, that factor seems clear when he asserted that: It is shocking to see the arrogance that prevailed in Company E by then. Their motto was "ask why." However, nobody asked why during the last period, because things had really got rotten! What finished with them was their behavior. They started to burn all files, papers in tons, and that really finished with them. Their behavior made them implode! They even managed to get a favorable sentence in court at a certain point, but that was not enough. They really were finished by then. Therefore, there were five big companies, now there are only four . . . We are all very sensitive indeed to these aspects, because if anything of that kind happens, four may become three . . . (from the interview, October 2007).
It is important to note from the above answer the suggestion that it was the lack of credibility and ethical behavior in leadership management that was the biggest factor in the breakdown of the mentioned organization. Values related to multicultural leadership, ethical behavior, communication, credibility, transparency, and others, defended by authors like Canen and Canen (2005, 2008b) and Brown and Conrard (2007), seem to be perceived by the interviewee, albeit in an indirect way, as having been crucial in the downfall of Company E, more than any financial or judicial factors. It is noteworthy that, when explicitly asked whether cultural auditing is employed in the auditing company taken as the case study, the answer of the interviewee was negative. However, when talking about Company E, as above, as well as about the auditing process itself, cultural aspects are bound to appear. For example, even though he posited that the culture of auditing is dictated by market legislation, he recognized that: When talking about auditing we are talking about a large range of companies, ranging from open ones to little ones. Therefore, the rules and standards to be applied by companies are bound to be different. However, I do not think it is cultural, but rather a way of applying risk norms, that could hinder wrong procedures. That was very much upgraded after the Company E case (from the interview, October 2007).
It therefore seems clear that the concepts of cultural auditing and of organizational culture, as such, are not applied in the auditing process. However, implicitly rather than explicitly, the cultural diversity of organizations, stemming from their characteristics, size, and mission, certainly seems to have a strong weight on the way they perform, and the auditing process has to be adapted to that diversity. The cultural aspects and the need to involve personnel in the organizational culture were again implicit in the discussion about possible clashes of cultures in processes of merger: The problem with mergers is the behavior of people that work in the companies. The way one company may conduct its business may be highly bureaucratic, whilst the other is in a free atmosphere . . . When we ourselves made our merger the profile of people were very different. Therefore the secret of our type of business — which has to deal with delivery of services, client services — is to let people live with each other. However, sometimes it may be a disaster indeed, because when there are mergers, it is not important if one is in a top position, when the merger is carried out, that position may be inverted . . . But the question of respect is paramount. However, people of that sort are fighting for their lives, some of them will fight one way or the other. There are those that are real fighters, others that are naïve, others that like to look good but indeed are not good at all. But the level of sensitivity to these things is higher nowadays . . . In my view, the best way to tackle this is by having sensitivity. However, some companies do not have that sensitivity at all . . . (from the interview, October 2007).
As can be noted, aspects related to organizational culture similar to what Cox Jr. (2001) suggests are mentioned when the interviewee talks about a bureaucratic versus a more liberal atmosphere. In that case, a relativistic approach seems to
be present, in that the culture of the organization is not put into question, and the multicultural aspects are not focused on beforehand, in the way Fletcher and Jones (1992) understand them. Nevertheless, the interviewee's claim that behaviors, and sensitivity to them, should be fostered seems to implicitly confirm the need to see beyond economic factors and probe into cultural ones in order for organizations to succeed. Finally, when asked about the main difficulties and the main successful aspects involved in auditing, the interviewee noted that: The main challenge is to answer to the enormous amount of demands upon us, in terms of documentation and procedures. We and the rest of the world are working towards a "limitation liability" in our work. England and the United States already have what they call the "liability gap." Society, the public, the clients and others accumulate a lot of demands upon us. From the practical point of view, difficulties relate to resources in order to face our growth. We have almost doubled our initial size. We admit trainees, young people that still are in university, and we subsidize their studies apart from an initial salary. We are very open, there is a positive climate here, we hold parties and other events, and there is no restriction in terms of access to directors. Our success is therefore our contribution to the society, we also have social projects, and we have been receiving prizes for those. (from the interview, October 2007).
From the above, it seems clear that even though the emphasis of the discourse is on market and bureaucratic challenges, the cultural aspect emerges when the interviewee talks about the organizational climate present in the auditing company — described as open, happy, and transparent, with easy access to all echelons of the organizational hierarchy. The challenge seems to be for organizations themselves to take on board cultural auditing so that the inner cultural variables — related to aspects such as multicultural leadership, logistics management, and organizational climate — will be efficiently tackled, providing the basis for an increasingly successful organizational performance.

4. Conclusions

The present chapter has discussed ways in which cultural auditing could represent a process whereby cultural conflicts could be pinpointed and addressed in organizations. It highlighted the interconnectedness of cultural diversity and technology in the workplace, stressing the importance of dealing with it effectively for the success of business and information management systems. It argued that such cultural auditing could represent an evaluation process focused on indicators related to multicultural leadership, a positive organizational climate, and their effect on logistics management and organizational success, and it presented theoretical perspectives concerning the issue. Concerning the field work, cultural auditing does not seem to be the focus of the auditing carried out in the company taken as a case study. In fact, multiculturalism is not a part of the dialog of that firm. However,
implicitly, the cultural aspects emerge within the discourse of the top executive interviewed, imposing themselves in the analysis of organizational successes and challenges. It is important to note that those aspects emerged throughout the interview, for example when the interviewee talked about respect, an open climate, and other aspects. Based on that, the present study is part of the authors' research agenda and should help open up discussions concerning the importance of cultural auditing for logistics management and organizational performance. It should also contribute to thinking about IT by taking multicultural aspects into account. Success is certainly more likely when the organizational environment is a nurturing, respectful one in which all — regardless of race, ethnicity, social class, gender, and other identity markers — feel valued.
References

Bardoel, EA and AS Sohal (1999). The role of the cultural audit in implementing quality improvement programs. International Journal of Quality and Reliability Management, 16(3), 263–276.
Brown, L and DA Conrard (2007). School leadership in Trinidad and Tobago: The challenges of context. Comparative Education Review, 51(2), 181–201.
Canen, AG and A Canen (1999). Logistics and cultural diversity: Hand in hand for organisational success. Cross Cultural Management: An International Journal, 6(1), 3–8.
Canen, AG and A Canen (2001). Looking at multiculturalism in international logistics: An experiment in a higher education institution. The International Journal of Educational Management, 15(3), 145–152.
Canen, AG and A Canen (2002). Innovation management education for multicultural organisations: Challenges and a role for logistics. European Journal of Innovation Management, 5(2), 73–85.
Canen, AG and A Canen (2005). Organizações Multiculturais: logística na corporação globalizada. Rio de Janeiro: Editora Ciência Moderna.
Canen, AG and A Canen (2008a). Multicultural leadership: The costs of its absence in organizational conflict management. International Journal of Conflict Management, 19(1), 4–19.
Canen, AG and A Canen (2008b). Cultural auditing: Some ways ahead for multicultural organisations and logistics management. In Book of Proceedings, International Conference on Industrial Logistics, E Menipaz, I Ben-Gal and Y Bukchin (eds.). Tel-Aviv, Israel.
Carleton, R (1999). Choque de Culturas. HSM Management, (14), May/June.
Castellano, JF and SS Lightle (2005). Using cultural audits to assess tone at the top. The CPA Journal, 75(2), 6–11.
Cox Jr, T (2001). Creating the Multicultural Organization. San Francisco: Jossey-Bass, A Wiley Company.
Fletcher, B and FF Jones (1992). Measuring organizational culture: The cultural audit. Managerial Auditing Journal, 7(6), 30–36.
Gillam, C and C Oppenheim (2006). Review article: Reviewing the impact of virtual teams in the information age. Journal of Information Science, 32(2), 160–175.
Radler, J (1999). Incompatibilidade cultural inviabiliza fusão entre empresas. Gazeta Mercantil, 14 December.
Schermerhorn, JR (1994). Intercultural management training: An interview with Asma Abdullah. Journal of Management Development, 13(3), 47–64.
Weideman, M and W Kritzinger (2003). Concept mapping vs. web page hyperlinks as an information retrieval interface: Preferences of postgraduate culturally diverse learners. Proceedings of SAICSIT, pp. 69–82.
Wright, P (1986). A cultural audit: First step in a needs analysis? JEIT, 10(1), 28–31.
Biographical Notes

Alberto G Canen is a Professor in the Department of Production Engineering at COPPE/Federal University of Rio de Janeiro. He is a Researcher for the Brazilian Research Council (CNPq). He was formerly a Visiting Professor at the University of Glasgow and is a former President of the Brazilian Operations Research Society (SOBRAPO). He has wide experience working in industrial organizations, as well as working as a consultant.

Ana Canen is a Professor in the Department of Educational Studies at the Federal University of Rio de Janeiro. She is a Researcher for the Brazilian Research Council (CNPq). She has also actively participated in distance education programs. Her main research interests focus on comparative and multicultural education and institutional evaluation.
Chapter 9
Efficiency as Criterion for Typification of the Dairy Industry in Minas Gerais State

LUIZ ANTONIO ABRANTES∗, ADRIANO PROVEZANO GOMES∗∗, MARCO AURÉLIO MARQUES FERREIRA† and ANTÔNIO CARLOS BRUNOZI JÚNIOR‡
Department of Administration, Federal University of Viçosa, CEP: 36.570-000, Viçosa, Minas Gerais, Brazil
∗[email protected]
∗∗[email protected]
†[email protected]
‡[email protected]

MAISA PEREIRA SILVA
Student in Administration, Federal University of Viçosa, CEP: 36.570-000, Viçosa, Minas Gerais, Brazil
[email protected]
Milk production is considered a strategic activity in the national economy, as it is an important generator of foreign exchange and employment. The increased domestic competition associated with the globalization of markets has required higher competitiveness and better performance from organizations in the management of their activities. To avoid market loss or even to guarantee survival, these organizations have constantly been looking for ways to improve their performance. This study was carried out to typify the dairy industries in Minas Gerais state in relation to their technical performance, focusing on the socioeconomic, financial, and administrative aspects. A total of 142 dairy industries were analyzed, and data envelopment analysis was used to measure their performance. It was verified that only 10 industries reached maximum technical efficiency. In the cases of those showing inefficiency, it was observed that the main problem was not an incorrect production scale, but inefficiency in using the inputs. Keywords: Efficiency; agrobusiness; data envelopment analysis.
1. Introduction

Milk production is considered a strategic industry in the national economy, as it is an important generator of foreign exchange and employment. Minas Gerais is the biggest producer in Brazil, having achieved 27.92% of the national production in 2007, according to the Instituto Brasileiro de Geografia e Estatística (IBGE).
From the 1990s, the changes occurring in this sector due to shifts in government intervention were decisive for the current state. By imposing a new profile on the agroindustrial milk complex, these changes were marked by external factors, such as the intensification of globalization and the process of formation and consolidation of economic blocs, as well as internal factors, such as the deregulation of the sector starting in 1991 and reduced government intervention concerning imported products, which occurred through the reduction of both quotas and non-tariff barriers. In addition, increased domestic competition began to require higher competitiveness and better performance in the management of the organizations' activities. It became necessary to have a thorough understanding of the market structure in which an organization competes, as well as its correct positioning, in such a way as to ensure a sustainable competitive advantage. To avoid market loss or even to guarantee survival, those organizations have been constantly looking for means to improve their performance. The permanent refinement of tactics to improve relationships with suppliers and consumers, optimize resources, increase productivity, and reduce costs is an essential practice, especially considering that organizations operate within a macroenvironment affected by tendencies in political, legal, economic, technological, and social systems. Thus, it is well known that a satisfactory performance depends not only on the internal effort of the company, but also on its capacity to innovate, modernize, position itself, and adapt to the pressures and challenges of competition with regard to environmental, social, cultural, technological, economic, and financial aspects. Indeed, the company is not an isolated link within this context, and the competitiveness of its product can be significantly affected by the productivity and efficiency of the several economic agents who directly or indirectly participate in the production chain. In this respect, the Brazilian agroindustrial milk complex involves a long chain that extends from the input industry to the national and international retail levels. The dairy industry, composing the transformation segment, is responsible for the industrialization of milk and its derivatives, supplying society with a wide variety of products for final consumption. With an ample part of its activity directed at internal consumption, this segment is strongly affected by the performance of the national economy, employment levels, interest rates, and mainly by the price of raw materials. Thus, knowledge of the current reality at the macro- and microenvironmental levels is an indispensable factor in the construction of sectorial analyses, as an understanding of the industry's importance is only reached when it becomes possible to contextualize the amplitude of the environment in which it competes. In this scenario, knowledge of the strategic posture of the segment, the portfolio of its products, and the competitive forces guiding the sector is essentially important to face the competition and to ensure its capacity to survive and expand in the long
term. Furthermore, short-term policies, such as the payment capacity related to the administration of working capital and to the receipt and payment policies adopted by the company, are essential in determining the liquidity and continuity of the business activity. In this respect, customers, suppliers, and stocks are important components of the operational and financial cycle of the company: the customers participate in this cycle via their payment capacity and the credit policies adopted by the company; the suppliers via their financing capacity; and the stocks, which will turn into results and depend heavily on turnover. All these factors interfere in the short-term cycles and have repercussions on the final results, given their direct relationship with the final operational results and with the formation of other expenses that affect the net result. So, it is observed that the performance of any organization depends not only on the company's internal effort, but also on its capacity to innovate, modernize, position itself, and adapt in order to respond to the pressures and challenges of competition with regard to environmental, social, cultural, technological, economic, and financial aspects. The continuous improvement of tactics to improve relationships with suppliers and consumers, to optimize resources, to increase productivity, and to reduce costs is an essential practice for the achievement of competitiveness and for reaching the desired scale of production, as well as efficiency in using the production factors. In this respect, one question arises: what is the technical efficiency level of the dairy industry in Minas Gerais State? Thus, the central objective of this work is to typify the dairy industries in Minas Gerais State in relation to their technical performance, focusing on the socioeconomic, financial, and management aspects. More specifically, the intentions are the following:

(a) To measure the performance of the dairy industries, based on technical efficiency and scale measures.
(b) To identify and quantify the influence of the variables related to the socioeconomic, financial, and management aspects on the dairy industries' technical efficiency.

To answer this question, the present research took into account the capital societies and cooperatives with annual gross revenue above R$1,200,000.00 in Minas Gerais State, Brazil.

2. Theoretical Reference

2.1. The Importance of the Milk Agrobusiness

Brazil is distinguished as one of the major producers of milk in the world, being the sixth largest producer in 2006. With a total market share of
4.6% of worldwide production, the country is behind only the United States, India, China, Russia, and Germany (Embrapa Gado de Leite, 2007). As one of the largest global milk producers, the dairy sector and the agroindustrial complex of milk are of great socioeconomic importance to the country. In Brazil, milk production is distinguished as one of the main agriculture and livestock activities because of its ability to generate employment and income, as well as its connection with other agroindustry sectors. Its economic importance can be verified by the position it occupies in Brazilian agrobusiness, as it is among the main products in terms of generating national income and tax revenue. In 2007, it occupied sixth place in the ranking of gross value of national agricultural production, behind cattle meat, soybean, sugarcane, chicken, and corn. The interconnectedness of the industrial sector is also distinguished, as it shows strong linkages with other sectors of the economy, therefore being a key sector in the national economic development process. In Minas Gerais' economy, the data of the Confederação Nacional da Agricultura (CNA) show the prominent participation of milk in the gross value of agriculture and livestock, occupying second place among the main products in 2006, when the gross revenues of coffee and milk products in Minas Gerais totaled R$5.6 and R$3.5 billion, respectively, contributing 26.64% and 16.89% of the total gross revenue (Table 1). Milk production is mainly characterized by its presence in all states of the federation, although half of the national production is concentrated in only three states, with Minas Gerais as the largest national producer at 27.924% of the national production, followed by Rio Grande do Sul with 14.087% and São Paulo with 12.481%. Among the industrial parks of national production and processing, the southeast region is distinguished by a concentrated production that reaches 7.8 billion L/year, totaling 43.78% of the national total (Table 2).

Table 1. Gross Revenue of the Main Agricultural and Livestock Products in Minas Gerais State in 2006.

Products          R$ millions    % Participation
Green coffee         5,627           26.64
Milk                 3,567           16.89
Cattle meat          3,478           16.46
Corn                 1,425            6.75
Soybean              1,050            4.97

Source: Adapted from FAEMG (2007).
Table 2. Raw or Cold Milk Acquired in the Year 2007 in the Country and in the Federation Units.

Country and federation units     In 1,000 L    % Participation
Brasil                           17,836,363        100.000
Rondônia                            691,756          3.878
Acre                                 11,786          0.066
Amazonas                                814          0.005
Roraima                                 205          0.001
Pará                                283,723          1.591
Tocantins                           112,216          0.629
Maranhão                             62,466          0.350
Piauí                                19,741          0.111
Ceará                               152,770          0.857
Rio Grande do Norte                  79,415          0.445
Paraíba                              46,969          0.263
Pernambuco                          201,857          1.132
Alagoas                             117,209          0.657
Sergipe                              72,152          0.405
Bahia                               286,097          1.604
Minas Gerais                      4,980,602         27.924
Espírito Santo                      210,061          1.178
Rio de Janeiro                      392,833          2.202
São Paulo                         2,226,172         12.481
Paraná                            1,473,891          8.263
Santa Catarina                    1,084,314          6.079
Rio Grande do Sul                 2,512,687         14.087
Mato Grosso do Sul                  225,169          1.262
Mato Grosso                         414,704          2.325
Goiás                             2,159,971         12.110
Distrito Federal                     16,786          0.094

Source: IBGE, Quarterly Milk Survey (2007).
In Minas Gerais, milk production is present in 89.6% of the municipalities; the state therefore occupies a prominent position in the composition of the milk areas of the country, in the location of most dairy industries, and in the largest consumption center. The nine mesoregions located in this state represent 23.6% of the national production and 87.8% of the state production. With regard to revenue, among the 12 large companies found in Brazil, five are located in Minas Gerais (Embrapa Gado de Leite, 2007). According to the Instituto de Desenvolvimento Integrado de Minas Gerais (INDI) (2006), the supremacy of the state is mainly due to factors such as excellent climate and soil conditions; the strategic geographical location of the consumption centers; tradition and experience in livestock farming; and governmental support to the entrepreneurs of the segment. Because of its leading position in national production, Minas Gerais has a very heterogeneous industrial park with very different realities. At one extreme are
the largest and most modern companies of the country, such as Nestlé, Danone, Itambé, Cotochés, Barbosa & Marques, and Vigor. At the other extreme are small companies with reduced productive capacity; they lack the basic conditions for industrialization and competitiveness, and sell products of doubtful quality to the market. The modern companies use advanced technology in all stages of the productive chain: they have production scale, qualified human resources, and high-quality products at competitive prices. They operate in high value-added segments such as milks (fermented, sterilized, condensed, powdered, evaporated), dairy desserts, ice creams, and fine cheeses. On the other hand, the companies with small production scales operate in less sophisticated segments (traditional cheeses, C-type pasteurized milk, milky sweets, and butter). Besides using outdated techniques, they lack qualified human resources and diversification, and face difficulties in selling their products in the market. According to the Secretaria de Estado de Agricultura, Pecuária e Abastecimento (SEAPA-MG) (2007), 70% are small producers with a daily production below 100 L. This institution calls attention to the great social representativeness of this milk segment. In Minas Gerais, the milk segment receives some 583.33 million liters a month, on average, and industrializes pasteurized milk into creams, cheeses, yogurts, condensed products, desserts, and others. It is responsible for 1.2 million jobs, taking into account producers, employees, and relatives, and it generates a revenue of R$6 billion/year that is distributed among approximately 900 dairy companies (Governo de Minas Gerais, 2007).

3. Methodology

3.1. Obtainment of the Efficient Frontier — The DEA Approach

Data envelopment analysis is a non-parametric technique based on mathematical programming, specifically linear programming, to analyze the relative efficiency of producing units. In the DEA literature, a producing unit is called a decision-making unit (DMU), since these models yield a measure of the relative efficiency of decision-making units. A producing unit is any productive system that transforms inputs into products. According to Charnes et al. (1978), to estimate and analyze the DMUs' relative efficiency, DEA uses the Pareto-optimum definition, according to which no product can have its production increased without increasing the inputs or reducing the production of other products; alternatively, no input can be reduced without reducing the production of some product. Efficiency is analyzed relatively among the units. Charnes et al. (1978) generalized the work of Farrell (1957) to incorporate the multi-product, multi-input nature of production, proposing the DEA technique for analyzing the relative efficiency of different units.
Taking into account that there are k inputs and m products for each of the n DMUs, two matrices are constructed: the X matrix of inputs, with dimensions (k × n), and the Y matrix of products, with dimensions (m × n), representing the data of all n DMUs. In the X matrix, each line represents an input and each column represents a DMU. In the Y matrix, each line represents a product and each column a DMU. For the X matrix, the coefficients must be non-negative, and each line and each column must contain at least one positive coefficient; that is, each DMU consumes at least one input, and each input is consumed by at least one DMU. The same reasoning applies to the Y matrix. So, for the ith DMU, the vectors xi and yi represent its inputs and products, respectively. For each DMU, an efficiency measure can be obtained; this measure is the ratio of all products to all inputs. For the ith DMU,

    DMU i efficiency = (u · yi)/(v · xi) = (u1 y1i + u2 y2i + · · · + um ymi)/(v1 x1i + v2 x2i + · · · + vk xki),    (1)
where u is a vector (m × 1) of weights on the products and v is a vector (k × 1) of weights on the inputs. Notice that the efficiency measure will be a scalar, due to the orders of the vectors composing it. The initial presupposition of this efficiency measure is that it requires a common set of weights applied to all DMUs. However, there is some difficulty in obtaining a common set of weights to determine the relative efficiency of each DMU, because DMUs can value inputs and products differently and thus adopt different weights. It is therefore necessary to formulate a problem that allows each DMU to adopt the set of weights that is most favorable to it, in comparison with the other units. To select the optimum weights for each DMU, a mathematical programming problem is specified. The DEA model with input orientation and the presupposition of constant returns to scale searches for the optimum weights that minimize the proportional reduction in the input levels while keeping the amounts of products fixed. According to Charnes et al. (1978), this model can be algebraically represented by

    min{θ, λ, S+, S−} θ,
    subject to:
        −yi + Yλ − S+ = 0,
        θxi − Xλ − S− = 0,        (2)
        λ ≥ 0, S+ ≥ 0, S− ≥ 0,
where yi is a vector (m × 1) of product quantities of the ith DMU; xi is a vector (k × 1) of input quantities of the ith DMU; Y is the (m × n) matrix of products of the n DMUs; X is the (k × n) matrix of inputs of the n DMUs; λ is a vector
(n × 1) of weights; S+ is a vector of output slacks; S− is a vector of input slacks; and θ is a scalar that takes values equal to or lower than 1. The value obtained for θ indicates the efficiency score of the DMU: a value equal to 1 indicates that the DMU is technically efficient relative to the others, whereas a value lower than 1 evidences relative technical inefficiency. The linear programming problem shown in Eq. (2) is solved n times, once for each DMU, and as a result it yields the values of θ and λ. As mentioned, θ is the efficiency score of the DMU under analysis and, if the DMU is inefficient, the values of λ provide the "peers" of this unit, that is, the efficient DMUs that served as references (benchmarks) for the inefficient DMU. To incorporate the possibility of variable returns to scale, Banker et al. (1984) proposed the DEA model with the presupposition of variable returns to scale, introducing a convexity restriction into the CCR model presented in LPP (Eq. (2)). The DEA model with input orientation and the presupposition of variable returns to scale, presented in LPP (Eq. (3)), allows the decomposition of technical efficiency into scale efficiency and pure technical efficiency. To analyze scale efficiency, it is necessary to estimate the DMU's efficiency using both the DEA model presented in LPP (Eq. (2)) and the one presented in LPP (Eq. (3)). Scale inefficiency is evidenced when the scores of those two models differ. The DEA model with input orientation, which presupposes variable returns to scale, can be represented by the following algebraic notation:

    min{θ, λ, S+, S−} θ,
    subject to:
        −yi + Yλ − S+ = 0,
        θxi − Xλ − S− = 0,
        N1λ = 1,                  (3)
        λ ≥ 0, S+ ≥ 0, S− ≥ 0,
where N1 is a vector (n × 1) of ones. The other variables were previously described. This approach forms a convex surface of intersecting planes, which envelops the data more compactly than the surface formed by the model with constant returns. Thus, the technical efficiency values obtained under the presupposition of variable returns are higher than or equal to those obtained under constant returns, because the technical efficiency measure obtained in the constant-returns model is composed of the technical efficiency measure of the variable-returns model and the scale efficiency measure. The results supplied by the DEA models are complex and rich in details that, when used correctly, constitute an important auxiliary tool in decision making
March 15, 2010
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch09
Efficiency as Criterion
207
by the agents involved in the productive process. Due to this complexity, for more detailed descriptions of the methodology, some text books are recommended, such as Charnes et al. (1978); Coelli et al. (2005); Cooper et al. (2004, 2007); and Ray (2004). 3.2. Source and Treatment of the Data In this study, the references are the capital societies, companies, and cooperatives with annual gross revenue income above R$ 1,200,000.00 that are installed in the State of Minas Gerais and act in the dairy sector. The data used in this research were obtained from primary sources, by using a structured questionnaire that was applied via postal, contact by telephone or personal, from an intentional sample derived from the group of organizations acting in Minas Gerais’industry of dairy products. 70 cooperatives and 72 companies were contacted. To calculate the technical efficiency measures for the samples of the dairy industries, four variables were used, with three of them being representative of the inputs and the last one related to the product, as described below: • Inputs ◦ Payroll — the labor salary is computed by the annual cost of the payroll. By considering that the direct labor cost used in the productive process is aggregated to the other production factors and included under the form of Stocks or Cost of the Sold Products, the total of the Administrative Expenses with Personal was used as proxy for this factor. ◦ FixedAssets — composition of the permanent structure of the units composing the sample. ◦ Milk acquired — refers the daily average of milk liters acquired by the sample component units. • Product ◦ Revenue — the variable indicates the annual average earnings (in reais — R$) with the sale of the products made by the component units of the sample. 4. Result and Discussion 4.1. Analyzing the Efficiency of the Dairy Industry The DEA model was initially used, as presupposing constant returns to the scale, to obtain the technical efficiency measure for each dairy product of the sample, without considering the scale variations. Subsequently, the presupposition of constant returns to the scale was removed, by adding a convexity restriction that made it possible to obtain the efficiency measures in the paradigm of variable returns. With these two measures, it became possible to calculate the scale efficiency. Table 3
Table 3. Distribution of the Dairies According to Intervals of Measures of Technical Efficiency and Scale (E) Obtained in the Models Using DEA.

Specification           TE, constant returns   TE, variable returns   Scale efficiency
                        (No. of dairies)       (No. of dairies)       (No. of dairies)
E < 0.1                 0                      0                      0
0.1 ≤ E < 0.2           10                     6                      0
0.2 ≤ E < 0.3           25                     23                     0
0.3 ≤ E < 0.4           29                     23                     1
0.4 ≤ E < 0.5           24                     28                     0
0.5 ≤ E < 0.6           9                      10                     2
0.6 ≤ E < 0.7           13                     7                      3
0.7 ≤ E < 0.8           11                     13                     11
0.8 ≤ E < 0.9           4                      8                      25
0.9 ≤ E < 1.0           7                      4                      88
E = 1.0                 10                     20                     12
Total                   142                    142                    142

Efficiency measure
Average                 0.4951                 0.5457                 0.9169
Standard deviation      0.2513                 0.2714                 0.1091
Variation coefficient   50.75%                 49.73%                 11.90%

Source: Results of the research.
Under the presupposition of constant returns to scale, only 10 of the 142 dairies in the sample obtained maximum technical efficiency. The average level of technical inefficiency is high, around 0.5049 (1 − 0.4951). It is worth highlighting that, in this relative approach, the DEA model with constant returns to scale is the more conservative one, as it usually yields a smaller number of efficient DMUs than the model with variable returns to scale. As the model was run with output orientation and only one output (revenue), the inefficiency of a company measures the amount by which its product could be expanded without the need for more inputs. In this case, the inefficient dairies can, on average, expand their revenue by some 50.49% without the need for larger amounts of inputs. It is important to emphasize that the dairies that reached maximum technical efficiency cannot expand their revenue without the introduction of more inputs: they are already at the efficient frontier. The other dairies, however, can still expand their revenue until they reach a technical efficiency equal to one.
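For readers who wish to reproduce scores of this kind, the following is a minimal sketch, not the chapter's actual computation, of how the input-oriented constant-returns model of Eq. (2) can be solved with off-the-shelf linear programming; the data set, names, and values are illustrative assumptions only.

# Minimal sketch: input-oriented CCR (constant returns) DEA via linear programming.
# Toy data, not the chapter's sample: 5 DMUs, 2 inputs, 1 output.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs, n x k
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs, n x m

def ccr_input_oriented(X, Y, o):
    """Technical efficiency (theta) of DMU o under constant returns to scale."""
    n, k = X.shape
    m = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # decision vector [theta, lambda]; minimize theta
    A_ub, b_ub = [], []
    for r in range(m):                          # outputs: Y'lambda >= y_o  ->  -Y'lambda <= -y_o
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    for i in range(k):                          # inputs: X'lambda <= theta * x_o
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for o in range(len(X)):
    print(f"DMU {o}: theta = {ccr_input_oriented(X, Y, o):.4f}")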
Because constant returns were presupposed, the measured inefficiency can include inefficiency proceeding from an incorrect scale of production. In other words, total technical efficiency (constant returns) is composed of both pure technical efficiency (variable returns) and scale efficiency. Technical inefficiency under variable returns effectively measures the excessive use of inputs, that is, it gives an idea of how much the company could produce if it were using its inputs correctly. Scale efficiency projects how much the company could gain if it were operating at the optimum scale, in this case with constant returns. The averages of pure technical efficiency and scale efficiency are 0.5457 and 0.9169, respectively. This means that the inefficient dairies could, on average, increase revenue by 45.43% by using their inputs correctly (without excess); if they were also operating at the correct scale, they could increase their revenue by a further 8.31% without the need for more inputs. As can be observed, the main problem of the inefficient dairies is not an incorrect scale of production but inefficiency in the use of inputs: there is proportionally more waste of inputs than there are scale problems. Only 24 dairies present pure technical inefficiency lower than 10%, whereas 100 dairies show scale inefficiency of 10% or less.

Turning to the incorrect use of inputs, the data in Table 4 describe the current average situation of the companies and project the revenue that would result if the inefficient dairies corrected their inadequate use of inputs. As can be observed, despite having more employees and using more raw material, the average revenue of the efficient dairies is much higher than that of the inefficient ones. In this case, it can be said that factor productivity is higher in the efficient dairies, that is, they produce proportionally much more even while using more production factors. The exception is fixed assets: the efficient dairies have a lower volume of capital immobilized in the productive system than the inefficient dairies. This can be a signal of an incorrect scale of production in the inefficient dairies.
Table 4. Product and Inputs of the Dairies in the Sample.

Specification        Unit                   Efficient   Inefficient   Total
Revenue              Thousand R$/year       39,463      11,496        15,435
Employees            Person                 152.1       61.3          74.1
Milk reception       Thousand liters/day    128.5       43.2          55.2
Fixed assets         Thousand R$            8,452       9,104         9,012
Projected revenue    Thousand R$/year       39,463      21,820        24,305

Source: Results of the research.
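As a hedged illustration of how the last line of Table 4 can be read, assuming the projection simply scales observed revenue by the inverse of pure technical efficiency and ignores slacks (an assumed rule, not the chapter's stated procedure):

# Sample averages from Tables 3 and 4; the small gap between the result and the
# published 21,820 plausibly reflects averaging over firms rather than over ratios.
te_vrs_avg = 0.5457                 # average pure technical efficiency (Table 3)
revenue_inefficient = 11_496        # thousand R$/year, inefficient group (Table 4)

projected = revenue_inefficient / te_vrs_avg   # assumed slack-free projection rule
print(f"Projected revenue: about {projected:,.0f} thousand R$/year")  # ~21,067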
As there is no wastage of inputs in the efficient dairies, they cannot increase revenue with their current amounts of inputs. This fact is reflected in the last line of Table 4, where revenue is projected in the absence of pure technical inefficiency: there are no gains in the projected revenue of the efficient dairies. On the other hand, if the inefficient dairies correct their problems concerning the incorrect use of inputs, they can increase the company's revenue by 90% on average. The increase in revenue is significant; for some dairies in the sample, the gains could reach 300%. It is very important that managers be aware of their companies' situation relative to the other, more efficient dairies. The wasteful way in which many dairies use their inputs will hinder their performance in the market, because cost is one of the most important variables in competitive markets, where organizations are usually price-takers.

It is well known that many companies have problems concerning an incorrect scale of production. To further this analysis, it is necessary to calculate the companies' scale efficiency. The scale efficiency measure is obtained as the ratio between the technical efficiency measures in the models with constant returns and with variable returns. If this ratio is equal to 1, the dairy is operating at the optimum scale; if it is lower than 1, the dairy is scale-inefficient because it is not operating at the optimum scale. The dairies operating with constant returns to scale are at the optimum scale of production, whereas those operating outside the range of constant returns to scale are not. In Table 3, it can be observed that only 12 dairies present no scale problems. Ten of these 12 dairies are at the frontier of constant returns; the other two, although operating in the range of constant returns, are not located at the efficient frontier, that is, they have problems concerning pure technical efficiency.

Scale inefficiency can occur when the dairy operates below the optimum scale (increasing returns) or above the optimum scale (decreasing returns). If the dairy is below the optimum scale, it can increase production at decreasing costs, that is, economies of scale will occur. If it is above the optimum scale, increased production will occur at increasing costs, that is, diseconomies of scale will occur. To detect whether the scale inefficiencies occur because the dairies operate in the range of increasing or of decreasing returns, another linear programming problem was formulated, imposing a restriction of non-increasing returns to scale. This made it possible to distribute the dairies of the sample according to the type of return and the degree of pure technical efficiency, as shown in Table 5.
Table 5. Distribution of the Dairies According to the Return Type and the Degree of Pure Technical Efficiency.

Return type   Efficient (%)   Inefficient (%)   Total (%)
Increasing    5 (3.52)        76 (53.52)        81 (57.04)
Constant      10 (7.04)       4 (2.82)          14 (9.86)
Decreasing    5 (3.52)        42 (29.58)        47 (33.10)
Total         20 (14.08)      122 (85.92)       142 (100.00)

Source: Results of the research.
Table 6. Product and Inputs of the Dairies According to the Type of Return to Scale.

                                             Return type
Specification     Unit                   Increasing   Constant   Decreasing
Revenue           Thousand R$/year       3,708        23,482     33,248
Employees         Person                 25.6         79.9       155.9
Milk reception    Thousand liters/day    15.4         52.6       124.5
Fixed assets      Thousand R$            6,405        5,097      14,671

Source: Results of the research.
In relation to the type of return, most dairies (57%) present increasing returns; only 10% are in the range of constant returns, that is, at the optimum scale. Among the efficient dairies, 50% have no scale problems; among the inefficient ones, only 3.3% are at the optimum scale. Of the inefficient dairies, most (76) are in the range of increasing returns, whereas 42 operate with decreasing returns.

To give an idea of the "size" of the companies in relation to the scale of production, Table 6 reports the average revenue and the inputs used according to the type of return. The average revenue of the companies operating at the optimum scale of production is R$ 23.5 million/year. The companies below the optimum scale have lower revenue, whereas those above the optimum scale show higher revenue, as expected. The difference appears in the decision to increase current revenue. For example, to obtain a 10% increase in revenue, the companies at the optimum scale would need to increase their inputs in the same proportion, so the average cost of the product would not be altered. For the companies with increasing returns, a 10% increase in revenue would require inputs to increase by less than 10%, lowering the average product cost; in the companies with decreasing returns, by contrast, a 10% increase in revenue would require inputs to increase by more than 10%, raising the average product cost.

To conclude, the sample of 142 dairies in the State of Minas Gerais can be distributed as follows: 7.04% show no problems; 7.04% show only problems concerning the incorrect scale of production; 2.82% show only problems concerning the excessive use of inputs; and 83.10% show problems concerning both the excessive use of inputs and the scale of production. In that sense, it is important to emphasize
that the simple quantification of the company’s inefficiency is not enough to guide it to improve its efficiency. It is necessary to identify how much inefficiency is due to the incorrect scale of production and how much can be improved if the excessive use of the inputs was eliminated.
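The decomposition just described (TE under constant returns = pure TE under variable returns × scale efficiency) and the classification of returns to scale can be sketched in the same spirit. The non-increasing-returns (NIRS) test below is the textbook device of Coelli et al. (2005); the routine is an illustrative assumption, not the chapter's own code.

# Sketch: efficiency decomposition and returns-to-scale classification.
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, o, rts="crs"):
    """theta for DMU o; rts in {"crs", "vrs", "nirs"}."""
    n, k = X.shape
    m = Y.shape[1]
    c = np.concatenate(([1.0], np.zeros(n)))
    A_ub, b_ub = [], []
    for r in range(m):                          # outputs at least y_o
        A_ub.append(np.concatenate(([0.0], -Y[:, r]))); b_ub.append(-Y[o, r])
    for i in range(k):                          # inputs within theta * x_o
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i]))); b_ub.append(0.0)
    A_eq = b_eq = None
    if rts == "vrs":                            # convexity: N1'lambda = 1
        A_eq, b_eq = [np.concatenate(([0.0], np.ones(n)))], [1.0]
    elif rts == "nirs":                         # N1'lambda <= 1
        A_ub.append(np.concatenate(([0.0], np.ones(n)))); b_ub.append(1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=None if A_eq is None else np.array(A_eq), b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

def decompose(X, Y, o):
    te_crs = dea_input_oriented(X, Y, o, "crs")
    te_vrs = dea_input_oriented(X, Y, o, "vrs")
    se = te_crs / te_vrs                        # scale efficiency
    if abs(se - 1.0) < 1e-6:
        rts = "constant (optimum scale)"
    else:                                       # NIRS score equal to VRS -> decreasing returns
        te_nirs = dea_input_oriented(X, Y, o, "nirs")
        rts = "decreasing" if abs(te_nirs - te_vrs) < 1e-6 else "increasing"
    return te_crs, te_vrs, se, rts

# usage with the toy data of the previous sketch
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
print(decompose(X, Y, 0))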
4.2. Economic-Financial Profile of the Dairy Industries and Their Leaders, According to Efficiency

Most entrepreneurs acting directly in the management process are below 50 years of age. In general, these managers have more than 10 years' experience in the dairy industry, and most have an educational background beyond secondary school. Most of the industries composing the sample are located in urban areas. With respect to the societal model, the cooperatives stand out among the companies classified as efficient, and the same occurs with the length of time in the industry: 74.47% of the efficient ones have been acting in the market for more than 20 years (Table 7).

When examining the performance of sales, production cost, and profit over the last 5 years, 41% of the inefficient companies reported increasing profits. Increasing production costs were, in some cases, offset by increasing sales, generating increasing or constant profits for 67% of these companies; however, 33% reported decreasing profits (Table 8). In most cases, profits are used to finance the company's activities. This represents successful strategies and provides the base for generating funds for investment, whose objective is to change the competitive environment in the medium and long terms.

The relationship with suppliers in the purchase of raw material and with customers in the sale of the final product constitutes the operational cycle of the industry: the average period over which resources are invested in operations without the corresponding cash inflows. Part of this circulating capital is financed by suppliers of the productive process. The financial cycle of the industry is thus the difference between the operational cycle and the period taken to pay for the production factors. Its direct implications are associated with the capacity to generate and allocate the resources and productive factors that sustain the company's activity in the short term, while influencing its competitive capacity in the long term. It was verified that average stockpiling in the industry occurs mainly at less than 30 days, with no great difference for the companies considered efficient, a fact explained by the high inventory turnover in this sector and the perishability of the product (Table 9).
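The operational and financial cycles just defined reduce to simple day-count arithmetic. The following sketch uses hypothetical day counts broadly consistent with the ranges in Table 9, not figures reported by the survey.

# Sketch of the operational and financial cycle (hypothetical day counts).
inventory_days = 25      # average stockpiling period
receivable_days = 45     # average period granted to customers
payable_days = 40        # average period taken to pay suppliers

operational_cycle = inventory_days + receivable_days   # resources tied up in operations
financial_cycle = operational_cycle - payable_days     # gap the firm itself must finance
print(f"Operational cycle: {operational_cycle} days; financial cycle: {financial_cycle} days")
# the larger the supplier financing (payable_days), the shorter the financial cycle
# and the lower the need for outside working capital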
Table 7. Descriptive Statistics of Variables Related to the Leader's and the Company's Profile.

Variables                                      Efficient (%)   Inefficient (%)
Manager's age
  21–30 years old                              4.30            12.60
  31–40 years old                              21.30           25.30
  41–50 years old                              31.90           35.80
  51–60 years old                              23.40           14.70
  More than 60 years                           19.10           11.60
Educational background
  Postgraduate                                 2.13            3.16
  Undergraduate                                42.55           51.58
  High school                                  44.68           32.63
  Middle school                                6.38            11.58
  Elementary school                            4.26            1.05
Experience in the activity
  Lower than 1 year                            2.13            2.11
  From 1 to 5 years                            12.77           22.10
  From 6 to 10 years                           14.89           16.84
  Above 10 years                               70.21           58.95
Societal model
  Company                                      23.41           64.22
  Cooperative                                  76.59           35.78
Industry existence time
  Up to 5 years                                4.26            17.02
  From 5.1 to 10 years                         6.38            21.28
  From 10.1 to 20 years                        14.89           28.72
  Above 20 years                               74.47           32.98
Localization
  Rural area                                   14.90           36.84
  Urban area                                   85.10           63.16

Source: Results of the research.
The average payment period granted to customers is shorter in the companies considered efficient, whereas the period taken to pay suppliers is longer: only 31.91% of the efficient industries settle their supplier debts within 30 days. Usually, the supplier finances a considerable part of the industry's operational cycle. Although this allows the companies to enjoy a comfortable financial situation, an increase in the operational cycle without the suppliers' financial support can generate liquidity problems, forcing the company to look for resources outside the operational cycle at higher cost.

Competition among companies is a dynamic process that requires immediate reaction in elaborating individual strategies in the short term. In the case of the dairy industries, 12.77% of the companies considered efficient adopted competitors' prices, whereas 63.83% set the sale price from the cost of goods plus taxes and profit margin; for 23.40% of the industries, prices were imposed by the market (Table 10). In all these cases, the margin obtained in the commercialization of production will always depend on the existing cost structure.
Table 8. Percent Performance of the Cost of Production, Sales, and Profit of the Companies Researched in the Last Five Years.

                          Profit performance
Condition      Decreasing (%)   Constant (%)   Increasing (%)   Total (%)
Inefficient    33               26             41               100
Efficient      19               21             60               100

Source: Results of the research.
Table 9. Percent Relationship of the Average Periods of Stocks, Customers, and Suppliers in the Dairy Industry.

                 Average period of stocks      Average period of customers    Average period of suppliers
Period in days   Inefficient (%)  Efficient (%)  Inefficient (%)  Efficient (%)  Inefficient (%)  Efficient (%)
In cash          —                2.13           2.11             8.51           1.05             6.38
Lower than 30    55.79            55.32          38.95            36.17          41.05            25.53
30–60            18.95            21.28          57.89            51.06          57.89            68.09
61–90            2.11             6.38           —                4.26           —                —
91–120           3.16             2.13           —                —              —                —
Above 120        1.05             —              —                —              —                —
No response      18.95            12.77          1.05             —              —                —

Source: Results of the research.
Table 10. Percent Relation of the Industries Concerning the Form of Adoption of the Sale Price.

Adoption form of the sale price   Inefficient (%)   Efficient (%)
Competitors' price                22.11             12.77
Product cost                      53.68             63.83
Price imposed by the market       23.16             23.40

Source: Results of the research.
Thus, the technological process, the commercial relationships, taxation, and the administrative and managerial capacity employed in running the enterprise in search of better productive and economic efficiency are important factors in the analysis of its competitive pattern.

The performance of business management depends on internal decision processes. These processes take place at several hierarchical levels of the administrative structure and, depending on the implications of the decision and the targeted results, may involve the operational and strategic levels, with or without the participation of external people. Understanding the competitive environment and knowing the direction the sector is taking are fundamental for making the right decisions. This requires accurate knowledge of the internal processes, and it is essential that leaders understand both internal practices and the external trends of the business environment. Knowledge concentrated, or even individualized, in the figure of the owner or manager hampers long-run decisions, which depend on a more careful analysis of the data or even on the construction of scenarios that are fundamental for projecting future actions.

Table 11 shows that the proprietor alone decides the direction of the business in 48% of the companies considered inefficient, a common reality mainly in small industries, where no specialized administrative structure exists to support decision making. Although the concentration of decision making in the entrepreneur's hands can be necessary and effective, in many cases it becomes inefficient in the face of competition and impairs the company's competitiveness. A smaller proportion of the companies considered efficient leave decisions to the proprietor alone. In this category of efficient companies, a considerable group gathers with the main executives, uses simulation tools, and consults employees and even specialists in the area during the decision process. The use of these tools reinforces the use of historical data series for the sector, making future planning safer and allowing the companies to plan further ahead and react more rapidly to changes in the sector.
Table 11. Different Forms of Accomplishing the Decision-Making Process in the Industry.

Variables                                                Inefficient (%)   Efficient (%)
1. Proprietor himself takes the decision                 48                33
2. Proprietor gathers with the main executives           16                30
3. Decisions are taken after meetings with employees     25                32
4. Use of simulation tools                               4                 16
5. Consultation with experts                             —                 4

Source: Results of the research.
Table 12. Factors Hindering the Management Process in the Industry.

Variables                                           Inefficient (%)   Efficient (%)
Shortage of raw material                            13                4
Seasonality of raw material                         33                23
Pressure from the supermarkets                      23                18
Unfair competition from other states' industries    14                15
Consumers' income                                   5                 15
Informality in the sector                           4                 6
Poorly structured legislation                       1                 0
Financial difficulties                              2                 2
High interest rates                                 2                 0
High tax burden                                     4                 4
High labor costs and charges                        1                 2

Source: Results of the research.
According to Table 12, many factors impose difficulties on the dairy industries. Some of these factors are internal and controllable by the companies, whereas others are external and depend on the environment in which the companies operate. The shortage and seasonal availability of raw material, pressure from the supermarkets, unfair competition from industries located in other states, consumers' income, and informality in the sector were the main problems affecting performance in the dairy industry, as pointed out by the companies. Despite being external to the company, all these factors carry heavier weight in the industries considered inefficient.

5. Conclusion

Regarding the organizations' survival in the face of increasing global competition, besides being conditioned by the influence of macroeconomic factors, the
management of internal decision processes is fundamentally important for obtaining positive results. Indeed, organizations' technical performance is conditioned by socio-economic, financial, and managerial aspects. Taking this into account, 142 dairy industries in the State of Minas Gerais were analyzed on the basis of their technical performance, using data envelopment analysis.

Under the presupposition of constant returns to scale, it was verified that only 10 of these dairies obtained maximum technical efficiency; they could not expand their revenue without the introduction of more inputs, as they were at the efficient frontier. The other dairies, however, can still expand their revenue relative to those with a technical efficiency equal to 1. Although they have more employees and use more raw material, the average revenue of the efficient dairies is much higher than that of the inefficient ones. It was therefore concluded that factor productivity is higher in the efficient dairies: although more production factors are used, they produce proportionally much more. In the case of fixed assets, a lower volume of capital immobilized in the productive system was observed in the efficient dairies, compared to the inefficient ones.

For the inefficient dairies, it was observed that the bigger problem is not an incorrect scale of production but inefficiency in the use of inputs; that is, there is proportionally more waste of inputs than there are scale problems. Only 24 dairies show pure technical inefficiency lower than 10%, whereas 100 dairies show scale inefficiency of 10% or less. In summary, the dairy industries were distributed as follows: 7.04% show no problems; 7.04% show only problems of inadequate scale of production; 2.82% show only problems of excessive use of inputs; and 83.10% show problems of both excessive use of inputs and inadequate scale.

When analyzing the economic-financial profile of these companies, it was verified that the length of time in the industry and the societal model adopted were important in the classification of the companies considered efficient; the same was not found for the operational cycle. Among the efficient companies, a considerable group emphasizes the decision-making process by resorting to simulation tools and consultations with specialists in the area, which allows for more reliable planning as well as the ability to stay ahead and respond more quickly to the requirements of the sector.

A limitation of the study is that DEA is a relative approach: its conclusions apply only to the companies analyzed. We therefore recommend applying this kind of study in other sectors and other countries.
Acknowledgement

We thank the Fundação de Amparo à Pesquisa de Minas Gerais (FAPEMIG) for their financial support of this research.
References

Banker, RD, A Charnes and WW Cooper (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078–1092.
Charnes, A, WW Cooper and E Rhodes (1978). Measuring the efficiency of decision-making units. European Journal of Operational Research, 2, 429–444.
Coelli, TJ, P Rao and GE Battese (2005). An Introduction to Efficiency and Productivity Analysis, 2nd Edn., p. 349. New York: Springer.
Confederação Nacional da Agricultura (Brasília, DF) (2003). Valor bruto da produção agropecuária brasileira: 2003. Indicadores Rurais, Brasília, 7(50), 6.
Cooper, WW, LM Seiford and J Zhu (2004). Handbook on Data Envelopment Analysis, p. 592. Norwell, MA: Kluwer Academic Publishers.
Cooper, WW, LM Seiford and K Tone (2007). Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software, 2nd Edn., p. 490. New York: Springer.
Embrapa Gado de Leite (2007). Banco de dados econômicos. In: http://www.cnpgl.embrapa.br.
Federação da Agricultura e Pecuária do Estado de Minas Gerais (FAEMG) (2007). Indicadores do agronegócio. In: http://www.faemg.org.br.
Farrell, MJ (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society, 120, 253–290.
Governo do Estado de Minas Gerais (2007). Maio de 2007. In: www.mg.gov.br.
Instituto de Desenvolvimento Integrado de Minas Gerais (INDI) (2007). A indústria de laticínios brasileira e mineira em números. In: www.indi.mg.gov.br.
Instituto Brasileiro de Geografia e Estatística (IBGE) (2005). Pesquisa Trimestral do Leite. In: http://www.ibge.gov.br.
Ray, SC (2004). Data Envelopment Analysis: Theory and Techniques for Economics and Operations Research, p. 353. Cambridge: Cambridge University Press.
Secretaria de Estado de Agricultura, Pecuária e Abastecimento (SEAPA MG) (2007). Maio de 2007. In: www.agricultura.mg.gov.br.
Biographical Notes

Luiz Antônio Abrantes, Doctor of Administration, is a Professor in the Department of Administration at the Federal University of Viçosa (UFV). His research interests include management and public policies, corporate finance, accounting and controlling, and tax management of production chains.

Adriano Provezano Gomes, Doctor in Applied Economics, is a Professor in the Department of Economics at the Federal University of Viçosa (UFV). His research interests include quantitative methods in economics, efficiency analysis models, public policies, consumer economics, and agricultural economics.

Marco Aurélio Marques Ferreira, Doctor in Applied Economics, is a Professor in the Department of Administration at the Federal University of Viçosa (UFV). His
research interests include public administration and social management, finance, efficiency and performance, and quantitative methods.

Antônio Carlos Brunozi Junior has a Master's in Administration from the Federal University of Viçosa (UFV). His research has concentrated on accounting and finance, public administration and accounting, and public policies related to tax management in agro-industrial chains.

Maisa Pereira Silva is a student in Administration at the Federal University of Viçosa (UFV). Her research has concentrated on public administration, finance, and public policies.
Chapter 10
A Neurocybernetic Theory of Social Management Systems MASUDUL ALAM CHOUDHURY Professor of Economics and Finance, College of Commerce and Economics, Sultan Qaboos University, Muscat, Sultanate of Oman & International Chair, Postgraduate Program in Islamic Economics and Finance, Trisakti University, Jakarta, Indonesia
[email protected]
Neurocybernetics in management theory is a new concept of learning decision systems based on the episteme of unity of knowledge. Such an episteme must be unique and universal so as to be appealing to the global community. Neo-liberalism, which is the core of present perspectives in management theory, cannot offer such a new epistemic future, because of the inherently competitive nature of the methodological individualism that grounds received management and decision-making theory. On the contrary, the episteme of unity of knowledge, on which a new and universal perspective of management and decision-making theory can be established, remains foreign to the liberal paradigm. The neurocybernetic theory of management is thus a theory of learning and unifying types of decision-making systems. It is studied here with reference to the cases of community-business unitary relations and the family. The social neurocybernetic implications are examined for these two cases in the light of neo-liberalism and Islam, according to their contrasting perspectives on the nature of the world of self and other. Out of these specific studies, the chapter derives a generalized theory of the neurocybernetics of social management, encompassing the wider field of endogenous morality, ethics, and values within the unified process-oriented methodology of a new episteme of science and society.

Keywords: Social cybernetics; system theory; management decision making; Islam and neo-liberalism.
1. Introduction

The principal objective of this chapter is to introduce a new idea of a system and cybernetic theory of decision making. Because the epistemology of such a management theory is premised on unity of knowledge, learning and systemic unification are inherent in the theory. We will therefore refer to such a learning and unifying theory of management decision-making systems as the neurocybernetic theory of management systems. "Neurocybernetic" is meant in this chapter to convey the idea
of how the mind and learning construct a decision-making system. Such decision making is governed by preferences that are organized under a management system. Hence, a neurocybernetic theory of management is an epistemological way of understanding decision making in organizational behavior. The Islamic perspective on decision making in management systems will be the focus here, with special attention to Islamic finance and economics.

2. Background

A perspective of a system theory of organizational behavior arises from management theory. Management theory deals with the method and art of organizational governance. It need not be driven by a commitment to abide by a given epistemology of the background organization theory. Moreover, in diverse social systems, different management theories or methods of organizational governance can prevail. One can think of some extreme cases.

2.1. Max Weber on Management Theory and the Problem of Liberalism

Weber's criticism of modern developments in organization theory qua management methods heralded the coming age of individualism, in which capitalism, bureaucracy, and the rationalization of governance methods would prevail. This would kill the values of the individual through which he self-actualizes with the collective. Mommsen (1989, p. 111) writes on Weber's concern with the future development of bureaucracy and rationalization in organization theory enforced by the power of management methods; Weber feared that their growing hegemony would petrify the liberal idea: ". . . Weber was all too aware of the fact that bureaucratization and rationalization were about to undermine the liberal society of his own age. They were working towards the destruction of the very social premises on which individualist conduct was dependent. They heralded a new, bureaucratized, and collectivist society in which the individual was reduced to utter powerlessness." Weber was thus caught between the pure individualism of liberal making and a collectivism led by self-seeking individualism formed into governance (Minogue, 1963). Weber feared that this would destroy the fabric of individualism on which liberalism was erected.

2.2. Global Governance: International Development Organizations

Today, the International Monetary Fund (IMF) and its sister organizations, such as the World Trade Organization (WTO), take up a global governance view based on the kind of opposed liberal perspectives Weber described. On the one hand, the IMF (1995) promotes global ethics under the guise of rational choices and human consensus between nations. On the other hand, the IMF and its various sister organizations impose conditions on stand-by funds to developing countries. In doing so, the Bretton Woods Institutions maintain strict adherence to macroeconomic policies and designs. The IMF
policies and conditionalities, together with the World Bank's structural adjustment as a development management practice, have brought about failed futures for many countries (Singer and Ansari, 1988). The management perspectives prevailing in international development organizations are a prototype of the preferences of self-interest and methodological individualism transported to organizational behavior and enforced by global governance as an extreme form of global management. One can refer to the public-choice-theoretic nature of such organizational preferences and management behavior explained by Ansari (1986). Many examples of this kind of international control and governance within global capitalism are prevalent in institutions such as the WTO, Basle II, and the regional development organizations. The latter are forced to pursue the same directions as the international development-financing institutions by design and interest. The transnational corporations, too, become engines for managing capitalist globalization in this order (Sklair, 2002).

3. Management Theory in the Literature

Jackson (1993) sees management systems as a designing of social reality as perceived by the principal-agent game within an organization, rather than as being led by any kind of epistemological premise. Consequently, social reality is constructed in management systems theory as a perception of the agents. This allows for a contest of individual wills served by those who mold these preferences in organizations. The institutional mediums either propose or enforce the preferences of methodological individualism in society at large. In the neurocybernetic concept of organizational management theory that Jackson proposes, it can be inferred that such a systems perspective of governance serves only to deepen the methodological individualism and the competition and contest of wills that ensue from management practices. Management systems theory is thereby not necessarily premised on a learning behavior with unity of knowledge for attaining a common goal of mutually perceived social reality. Yet, a learning practice in unitary management systems remains a possibility. In the global political economy, the North has established an effective arsenal of cooperative development-financing institutions, but to the detriment of the well-being of the poor South. This is most pronounced in matters of collective military pacts, belligerence, and the institutional and technological monopoly of the North over the South. Likewise, an integrative system of governance management can be enforced by the dominant force. The abuse of the United Nations' authority by the United States, Britain, and their allies on matters of war and peace proved this in the case of the invasion of Iraq in the second Gulf War. A coercive system is one that is purely of the individualistic type. Many transnational corporation management practices in capitalist globalization, and the political management of war by force, can be categorized as coercive systems. Coercive management systems are the principal ones in today's global governance. It is also this
kind of efficient governance by force that Machiavelli (1966) presented as a model of dominance and national control. Cummings (2006) gives an incisive coverage of this kind of hegemonic management of global governance in many areas of capitalist globalization in present times. Other forms of management systems pointed out by Jackson are the pluralistic and unitary types. Pluralistic management is a principal-agent game in which the interest of stakeholders is attained by consensus, despite the existence of diverse and opposing views on the issue under discourse, guided by the willingness of participants to coordinate and cooperate. An example of this case is industrial democracy, where management and workers can arrive at consensus on management issues despite their opposing views on particular issues. Yet even in the pluralistic model of management for governance within the Bretton Woods Institutions, self-interest and power-centric approaches remain entrenched in the hands of the industrialized nations over the developing ones. The unitary model of management systems is based on a pre-existing agreement among participants on assigned goals and rules in institutional discourse over issues. The abidance by liberalism as the foundation of Western democracy's cultural make-up, and of its social reality, is an example that prevails over the entire mindset, guidance, and enforcement on issues under discourse in the Western institutional domain. Yet the same unitary management system is not necessarily epistemologically sensitive to other cultural domains and social realities. The biggest conflict today is the divide between the understanding of neo-liberalism as the Western belief system and the Islamic Law among Islamicists. The much-needed bridging and dialogue between these divided worlds will continue to be the most significant socio-scientific issue for all peoples for all times (Sardar, 1988).
4. An Example of Management of Complementary Community-Business Relations

Figure 1 explains the interconnected dynamics between business and community, along with their social and commercial extensions. All of these are understood in the framework of pervasively complementary networking according to the neurocybernetic model of social management. Participation in the productive social transformation of community and business, and thus their unification by learning, can be measured by the choice of cooperative development-financing instruments. In the Islamic framework of reference, such development-financing instruments are interest-free ones, such as profit-and-loss sharing (Mudarabah), equity participation (Musharakah), cost-plus pricing in project valuation (Murabaha), trade financing, rental and deferred payments (Bay Muajjal), loans without interest charge (Qard Hassanah), joint ventures, co-financing, etc. The respective shares of total investment resources mobilized by these instruments give their quantitative measures.
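A minimal sketch of that quantitative measure, under the assumption that participation is proxied by each instrument's share of total mobilized resources; only the instrument names come from the text, and the amounts are invented for illustration.

# Sketch: instrument shares of mobilized investment as participation measures.
funding = {
    "Mudarabah (profit/loss sharing)": 40.0,      # hypothetical amounts
    "Musharakah (equity participation)": 25.0,
    "Murabaha (cost-plus project financing)": 20.0,
    "Bay Muajjal (deferred payment)": 10.0,
    "Qard Hassanah (interest-free loan)": 5.0,
}
total = sum(funding.values())
for name, amount in funding.items():
    print(f"{name}: {amount / total:.1%} of mobilized resources")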
[Figure 1 (schematic): the learning process of the Interactive, Integrative, Evolutionary (IIE) methodology, running from the fundamental epistemology (Tawhid, with ontology and knowledge formation through the Sunnah) to knowledge-induced relations, cognition, and entities (world-systems of diverse kinds), and on to the evaluation of social well-being by circular causation relations measuring the degree of unity of knowledge attained by pervasive complementarities (Qur'anic pairing). Community-business participation is extended under Shari'ah rules developed by epistemological reference and discourse (Qur'anic Shura), and the process is repeated in continuity by recalling the fundamental episteme at every emergent event along the IIE path.]

∗The interconnecting variables in community-business embedded system relations are: income, employment, resources, participation, poverty alleviation and sustainability, output, profitability, share capital, number of shareholders and stakeholders (participation), financial resources, productive factors (capital, labor, and technology), and participatory instruments. More variables are added with the advance of the IIE processes on specific problems under examination.

Figure 1. A system model of circular causation interrelationships: The Islamic social management organization.
The surest and logical way of avoiding interest in all forms of productive activities compliant with the Islamic Law (Shari’ah) is to turn the economy into a participatory form, both by means of financial instruments of economic and financial cooperation and by understanding and formalizing the principle of pervasive complementarities in the socio-scientific world between the variables, instruments and agencies in action.
In this context, the complementarity between the real economy, representing the productive transformation of Shari'ah-compliant enterprises, and the financial economy is the medium for fully mobilizing money into the real economy via cooperative forms of financing instruments. The return on such a money-real economy interrelationship in the good things of life takes the form of rates of return and profit-sharing rates. Consequently, the holding of money in speculative and savings outlets that accrue interest as a reward for holding money is replaced by returns on productive investment in the real economy. Such productive transformations and the underlying cooperative development-financing instruments are realized by the fullest mobilization of financial resources acting as currency in circulation. At the community level, the money-real economy circular linkages generate development sustainability. In the special case of the agricultural sector, the life-blood of community enterprise, sustainability is further represented by the maintenance of resources in agricultural lands with their due linkages to agro-based industries and service outlets, and also to the monetary and financial sectors. Thus, linkages between agriculture, agro-based industries, service outlets, and money and finance establish a dynamic basic-needs regime of development. Industrialization of the agricultural sector must be avoided.

The same kinds of "participatory instruments" as mentioned for community development will exist for businesses (see the footnote of Fig. 1). Profit-shares serving both business and community shareholders and stakeholders represent the profitability of projects and investments. Share capital denotes community resources. The number of shareholders and stakeholders represents the community members who participate in community-business ventures. Participation in the sense of shareholding and stakeholding in community-business cooperative ventures is therefore a socio-economic variable common to both community and business. Figure 2 brings out these interactions in the extended sense.

5. Social Well-being Criterion in Community-Business Interrelationships

The circular causations between community and business socio-economic variables, policy variables, and development-financing instruments with dynamic preference transformation are realized by organic learning in IIE processes. Social well-being thereby functions as the objective for evaluating the degree of unification of knowledge in community-business interrelationships and is estimated by simulation in reference to the degree of unity of knowledge attained between
[Figure 2 (schematic): the IIE process interlinking C (Community), B (Business), D (Islamic banks and Islamic insurance), and E (the economy-wide system, including expanded communities, businesses, markets, and government). Interactive, integrative, and evolutionary learning along the Shari'ah episteme of unity of knowledge connects shareholding and stakeholding, consumer satisfaction and product preference, returns and profitability of ventures, economic and financial stability, real-sector and financial-sector complementarities, resource mobilization, and product and risk diversification, together with human resource development along Shari'ah lines. Social well-being is evaluated by simulating the degree of complementarities gained between the selected variables through heightened ethical consciousness, with IIE feedback from the conscious participatory experience between C, B, D, and E, extending to economy-wide expansion, global ethics, markets, trade, sustainability, and development.]

Figure 2. The interactive, integrative, and evolutionary (IIE) process of unity of knowledge between community, business, Islamic financial institutions, and the economy.
the two systems in terms of their selected variables. The simulation occurs across continuously evolving IIE processes. Consequently, the selected socio-economic variables establish circular causation interrelations signifying the presence or absence of complementarities between them. The coefficients of the variables in the causal relations are then adjusted to attain better levels of complementarities. Such simulated corrections explain the dynamic process of community-business interrelationships.
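As a rough computational reading of this evaluation step, and an assumption about implementation rather than the author's own procedure, circular causation can be sketched as a system of linear equations in which each selected variable is regressed on the others, with complementarity read off the signs of the estimated coefficients.

# Sketch: circular causation as a system of cross-regressions (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 60
participation = rng.normal(0, 1, n)                       # illustrative variables (logs)
income = 0.6 * participation + rng.normal(0, 0.5, n)
output = 0.4 * income + 0.3 * participation + rng.normal(0, 0.5, n)
data = {"income": income, "participation": participation, "output": output}

def circular_causation(data):
    """Regress each variable on all the others; positive coefficients
    signal complementarity (the 'pairing' the text describes)."""
    names = list(data)
    coefs = {}
    for y in names:
        others = [x for x in names if x != y]
        Xm = np.column_stack([data[x] for x in others] + [np.ones(n)])
        beta, *_ = np.linalg.lstsq(Xm, data[y], rcond=None)
        coefs[y] = dict(zip(others, beta[:-1]))
    return coefs

for y, b in circular_causation(data).items():
    print(y, {k: round(v, 2) for k, v in b.items()})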
Figure 2 provides a schema of how interrelatedness and risk and product diversification can be realized by the participation of all parties concerned and by complementarities between their activities. This schema presents the nature of the neurocybernetic system model of social management.

5.1. Another Example of Social Management: Integrated Decision Making in the Extended Human Family

A family is a collection of individuals bound together by blood relations, values, and fealty. Its members pursue common well-being objectives through patterns of decision making that interconnect individual members with the head of the family, and with extended families in the intergenerational sense. The relationship is circularly causal and thus strongly interactive. The values inculcated within the family are interdependent with the social structure through multiple interrelations. Decision making within the family on various issues, in concert with socio-economic matters, involves the allocation of time according to the distribution of tasks among the members. Through the circular causation relations between the complementary parts, or their breakdown in contrary familial systems, a sense of management decision making within the family can be construed. The underlying dynamics are similar to those of the community-business relationship. Consequently, a family undertakes organizational management behavior similar to that of community-business decision-making entities. Such organizational behavior has extensive implications for the community, markets, and the ethics of social behavior. Thus, a neurocybernetic theory of social management decision making can be extended from the family, as an organizing social microcosm, to higher echelons of social decision making. A neurocybernetic system is thus generated.

6. The Neo-Classical Economic Theory of the Household and Its Social Impact: A Critique

In the light of the above definition, neo-classical economic theory treats the individual in relation to the family in terms of utilitarian motives. Three cases can be examined here to make a general observation on the nature of familial relationships, preference formation, and the well-being criterion in neo-classical economic perspectives.

1. Each individual in the family is seen as an individual with rights, freedoms, and privileges of his or her own. This case can be seen in children and parents who each seek their own individual well-being out of secured rights within and outside the home. Children exercise their rights to decide individually to remain independent of parents after the age of dependency. The same attitude can be found in the common-law family, by virtue of an absence of legal rights binding either side to a mutual sharing of economic benefits. Such a picture of individualist attitudes
and values that transcends individual behavior to the social structure is referred to as methodological individualism (Brennan and Buchanan, 2000). We formalize the above characteristics for the individual and the family in the neo-classical economic context. Let the ith individual preference map used to preorder a set of rational choices be denoted by ≥i, i = 1, 2, . . . , n. Consider three choices: A (the decision not to bear children, followed by increased labor force participation), B (the decision in favor of both childbearing and work participation), and C (childbearing and homemaking). Individuals in a family governed by methodological individualism will likely preorder preferences as A ≥i B ≥i C. The collective preferences of the family governed by methodological individualism are spread first over socio-economic states and second over the individuals i:

    ∪i ∪states {≥i [A, B, C]} = ∪i [≥i(A) + ≥i(B) + ≥i(C)],    i = 1, 2, . . . , n.    (1)
If, for a large number of individuals i, state A dominates the preordering as shown, then ≥i(B) and ≥i(C) become decreasingly relevant preferences (irrelevant preferences in the limit; Arrow, 1951), and ≥i(A) dominates. Consequently, the social preference (≥) arising from the household is reflected in ∪i[≥i(A)] = ≥(A), say, now independent of i due to the dominance of this preference. Next, applying the ith individual utility index in the hth household, Uih, to ≥ yields the above form of aggregation, leading to the household utility function Uh:

    Uh = Σi Uih(≥(A)),    with Uih(≥(A)) > Uih(≥(B)) > Uih(≥(C)),

over the three states A, B, C. Hence, the utility maximization objective of the neo-classical household utility function rests simply on Uih(≥(A)). From this level, the social welfare function, in which the family is a social microcosm, is given by U(A):

    U(A) = Σh Uh = Σh Σi Uih(≥(A)).    (2)
Corresponding to rational choice, A causes continuous substitution of the variables characterizing A for those characterizing B and C, for individuals, households, and society, since preferences are now replicated in additive fashion.

2. In the second case, we consider the possibility of choices distributed between A, B, and C. The household members' behavior described above now results in a social utility function of the type shown in Eq. (3); the formal steps towards establishing it are skipped, but the implications are important to note:

    U(A) = Σh Uh = Σh Σi Uih(≥(A), ≥(B), ≥(C)).    (3)
Each of the states A, B, and C is determined by its own bundle of goods and services serving individual needs within the household. For instance, A can be characterized by work participation, B by daycare, and C by home-cared goods. These goods exist
as substitutes for each other, either taken individually or in groups. For instance, A can combine with B in the form of cost-effective daycare: the bundle of goods for work participation combines with daycare in the choice (A, B) and thereby substitutes for C. Socially, this choice is made to reflect the needs of A and B and to formulate both market goods and institutional policies that promote A over B over C, or (A, B) over C, as the case may be.

3. Resource allocation over the alternatives A, B, and C requires time and income. The allocation of income and time over such activities forms the budget constraint for utility maximization in the above two cases. We formalize such resource allocation as follows. Let total household time T be allocated to leisure (childbearing, c) and work (productive activity, w). The cost of acquiring c is Cc; the cost of acquiring w is Cw. T is given by

    T = tc + tw,

and the income constraint is

    I = tc · Cc + tw · Cw.

The household utility maximization problem is now stated as:

    Max Uh(tc, tw) = Σi Uih(A, B, C)    (4)

    subject to: I = tc · Cc + tw · Cw = Σi (tci · Cci + twi · Cwi),
                T = tc + tw = Σi (tci + twi).
Subject to,

I = tc · Cc + tw · Cw = Σi (tci · Cci + twi · Cwi),
T = tc + tw = Σi (tci + twi).

We now have two versions of the above household maximization problem. Together they have important underlying implications. First, we note that household and social preferences are social replicas of individual preferences, values, and attitudes toward the family. The individual utility indexes, and thereby the household utility function and the social welfare function, are each based on competing attitudes towards goods distributed among the substitutes A, B, and C in the sense mentioned earlier. Second, preferences are uniformly competing, preordered, and individualistic in type.

4. In Eq. (4), the household utility function and the social welfare function convey all the utilitarian constructs given by Becker (1981), as follows. (a) The household utility function is based on marginal substitution between children as leisure and market goods; the utility function is of the form in Eq. (1). (b) The number of children and the quality of children experience a tradeoff in the utility function with quality included. (c) Children's utility and the consumption of parents are substitutes, as expressed by the utility forms in Eq. (1) or (3). (d) In the utility function of the head of the family with multiple children's goods, the head of the family needs more income to augment a gift to children and wife in such a way that there is compensation between other members so as to
keep a sense of fairness in the income distribution between members and also in spending on himself. The utility function is of the form in Eq. (4), with the addition of the cost of the gift. Time allocated to generating income for gifts is usually added to, or treated similarly to, tw. The assumption of marginal rates of substitution between goods for children implies that cheating children increases the cost to the head of the family by the amount of additional income required for gifts. (e) The utility function in Eq. (4) can be taken up separately for husband and wife to explain Becker's theory of marriage and divorce (Becker, 1974, 1989). If the gift (dowry) given by the wife to the husband is deducted from the wife's income in marriage, and the net income of the wife while married exceeds the family income if divorced, the decision of the wife is to remain in marriage. The same argument is extended to the husband's side.

In all cases, we find that the specific nature of methodological individualism, preordered preferences, competition, and the marginal substitution property of every utility function causes a hedonistic household and a society of individuals as cold calculators. The laterally and independently aggregated preferences of household members are continued intergenerationally to form an extension of the above formalization to this latter case. The socio-economic character premised on the intergenerational family preferences acts as a catalyst of its continuity. The postulate of preordering of preferences as a datum in decision-making leaves the system in a dissociated form of collective individualism. This is a social organism contrary to management decision-making. Besides, the linearity of the system breaks down the richly complex system of social decision making into dissociated parts. A linear mind of this kind cannot answer the richly complex nature of problems encountered in the social organism with the family as a neurocybernetic system.

7. Preference Behavior of the Islamic Household and Its Socio-Economic Impact

In contrast, the Islamic way of life, attitude, motivation, and thus preferences are centrally guided by the principle of unity of knowledge. That is, in this system, knowledge is derived from the divine text that forms, guides, and sustains behavior. The guidance takes the form of mobilizing certain instruments as recommended by the Islamic Law (Shari'ah), which establish unity of knowledge as a participatory and cooperative conduct of decision making at all levels. The instruments used assist in such participatory and cooperative decision making while phasing out the instruments of self-interest, individualism, competition, and methodological independence between the partners. The family as a social unit now becomes a strong source for the realization of the participatory decision-making emanating from knowledge-induced preference formation. As in every other area of human involvement, the family forms its preferences by interaction leading to consensus involving family members. The result of interaction is consensus (integration) based on discourse and participation within
and across members and the socio-economic order. Such an interactively developed integration is the idea of the systemic meaning of unity of knowledge. It conveys the idea of neurocybernetic decision making in a social management organization. Third, the enhancement of knowledge by interaction and integration is followed by evolutionary knowledge. The three phases of IIE continue over processes of knowledge formation. The socio-economic order caused and sustained by such systemic IIE dynamics responds in a similar way. We refer to such a discursive process as being premised on unity of knowledge. Its epistemology and application by appropriate instruments are premised on the law of oneness of God (or unity of divine knowledge).

An example of the Islamic familial attitude is respect between young and old, and between husband and wife (wives), intergenerationally speaking. In this system, the Qur'an says that men and women are co-operators with each other and children form a social bond ordained by God (Qur'an 7:189–90). Within this familial relationship there exists the spirit of discourse and understanding enabling effective decision making to continue. The participatory experience is realized in such a case through the Islamic medium of participation and consultation called the Shura. The Shuratic process of decision making is methodologically identical with the IIE process. It is of the nature of the management dynamics dealt with in the case of community-business circular causation interrelationships. The IIE process being a nexus of co-evolutionary movements in unity of knowledge, it spans space (the socio-economic order) and time (intergenerational). In the socio-economic extension, the IIE process applies to matters of individual preferences, freedom of choice, the participatory production environment, appropriateness of work participation, distribution of wealth, caring for orphans, trusts, inheritance, contractual obligations, marriage, divorce, and the social consequences of goodness and unethical conduct. The socio-economic variables are thus activated by the induction of the moral and ethical values premised on unity of knowledge as the relational epistemology, realized by appropriate participatory instruments that enable social co-determination, voluntary conduct, attitudes, contracts, and obligations. Both the episteme and the instruments of application of unity of knowledge emanate from the law of divine oneness, now understood in the systems sense of complementarities and participation caused by circular causation interrelations.

7.1. Preference Formation in the Islamic Family

Let {≥j,k,h}i = {≥j,h ∩ ≥k,h}i denote the interactive preferences of the jth (kth) individual in the hth household, j, k = 1, 2, . . . , n; h = 1, 2, . . . , m; i denotes the number of intermember interactions on given issues. Let the jth (kth) preference be that of a member of a specific (hth) household with a given head of the family. The household preference,

≥h = limi[{∪j,k ≥j,k,h}i] = limi[{∪j,k (≥j,h ∩ ≥k,h)}i],

is the mathematical union of the above individual preference map over (j, k) for i-interaction.
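A rough computational analogue of this interactive preference formation can be sketched as follows, under the simplifying assumption that preferences are representable as finite sets of mutually acceptable alternatives (the chapter itself works with preference maps, so this is an illustration only):

```python
# Illustrative sketch: interactive preference formation in an h-household.
# Preferences are modeled as finite sets of acceptable alternatives, an
# assumption made only for this illustration.
members = {
    "j": {"A", "B", "C", "D"},
    "k": {"B", "C", "D"},
    "l": {"C", "D", "E"},
}

def interact(prefs):
    """One discourse round: intersect pairwise (integration), then take the
    union over all member pairs (interaction across the household)."""
    names = list(prefs)
    merged = set()
    for a in range(len(names)):
        for b in range(a + 1, len(names)):
            merged |= prefs[names[a]] & prefs[names[b]]
    return merged

household = interact(members)
print(sorted(household))  # ['B', 'C', 'D'], the consensus set after one round
```

Iterating such rounds over i, with the issue set refined at each step, gives a discrete stand-in for the limit limi[·] defining ≥h.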
The social preference ≥ equals the aggregation of h-household preferences:

≥ = ∪h ≥h = ∪h limi[{∪j,k (≥j,k,h)}i] = ∪h limi[{∪j,k (≥j,h ∩ ≥k,h)}i].

This expression shows that social preferences are formed by interaction (shown by ∪j,k) and integration (shown by limi{∩j,k(·)}) in given rounds of family discourse (i) on issues of common interest. Because interaction leading to integration causes the formation of knowledge in the Islamic family, we denote such knowledge formation by θhi, for the hth household and i number of interactions. i takes up increasing sequential numbers as interaction and integration proceed into evolutionary phases of discourse. The limiting value of knowledge-flows over a given process of IIE may be denoted by θhi∗. The limiting social value of knowledge-flows, in terms of interaction over many goods and services that are shared in the market and ethically determined by IIE-type preference formation across households, is denoted by θi∗. Sztompka (1991) refers to such an evolutionary social experience as social becoming. As i increases (numbered processes), a case typically encountered when more members of the extended family are involved in household decision making, an evolutionary phenomenon is experienced. This completes the IIE pattern over many processes. This is the intergenerational implication of the extension of the IIE processes over space (socio-economics) and time (intergenerational).

8. A Social Management Model of the Family as a Social Minuscule

The family as a social unit is now defined by the collection of all households deciding in the IIE process over given socio-economic issues. Let such socio-economic issues for the hth household with 1, 2, . . . members and a given head of the family be denoted by

xhi = {x1h, x2h, . . . , xjh, . . .}i.

Let chi = {c1h, c2h, . . . , cjh, . . .}i denote the unit cost of acquiring xhi. The influence of interaction is denoted by the presence of "i". Thereby, the household spending on acquiring its bundle of goods is given by

chi · xhi = Σk (xkh · ckh)i = Spih,

where Spih denotes the total spending of the k-members of the hth household over a given series of interactions i. Spending on the good things of life is highly encouraged in the Qur'an, as opposed to saving and hoarding as withdrawal from the social economy. The family members' (j, k) interactive attainment of wellbeing in the h-household is given by:

Wijk(θhi∗, Spikh(θhi∗))[limi{∪j,k (≥j,h ∩ ≥k,h)}i]
The bracketed term [·] throughout the chapter means the implied induction of this constituent term on all the variables, relations, and functions. The management simulation problem for the interacting (j, k)-individuals over i-interactions for a given h-household is given by:

Simulate{θhi∗} Wijk(θhi∗, Spikh(θhi∗))[limi{∪j,k (≥j,h ∩ ≥k,h)}i]
(5)
Subject to,

θhi∗ = f1(θhi∗−, Spikh; Wijk)[limi{∪j,k (≥j,h ∩ ≥k,h)}i],

"−" denoting a one-process lag in the IIE processes, given a simulated value of Wijk in any ith process.

Spikh = f2(θhi∗; Wijk)[limi{∪j,k (≥j,h ∩ ≥k,h)}i]

is the spending of the kth individual in the hth household. After taking the union of all relations concerning the k-individuals in h-household management, the simulation of the total h-household members' wellbeing function is given by:
Simulate{θhi∗} Whi(θhi∗, Spih(θhi∗))[≥h]
(6)
Subject to,

θhi∗ = f1(θhi∗−, Spih; Whi)[≥h],
Spih = f2(θhi∗; Whi)[≥h].

Clearly, Eq. (6) is derived by the union of every part of Eq. (5) over all h-household individuals. By a further mathematical union of every part of Eq. (6) over all households, we obtain the simulation problem of the social wellbeing function in this collective social organism:

Simulate{θi∗} Wi(θi∗, Spi(θi∗))[≥]
(7)
Subject to,

θi∗ = f1(θi∗−, Spi; Wi)[≥],
Spi = f2(θi∗; Wi)[≥].

Since θ values are central to the simulation problem, learning is extended over space and time to embrace such intergenerational knowledge-flows and the corresponding knowledge-induced variables. The ethical and moral preferences of intergenerational members of the family thus remain intact in order to sustain the effectiveness of the IIE process in concert with the intergenerational family and the socio-economic order. The richly complex nature of flows across consensual decision making in the family nexus reflects the neurocybernetics of family management decision-making behavior in the sense of the IIE process.

9. Refinements in the Household Social Wellbeing Function

In participatory decision making, the role of the revered and learned principal is central. The head is expected to be a person endowed with knowledge and integrity in the Islamic Law. The guidance of the head in decision making is respected, and is instrumental in guiding discourse and decision making among members. The Islamic family members are required to respect, but not to follow, the injunctions of
the head of the family in case such a head-of-the-family decision is contrary to the Islamic Law. Given the head (H) of the family's wellbeing function, WHNi∗, as a reference for household decision making, the new simulation problem, derived in the manner of Eq. (7), takes the form:

Simulate{θhi∗} SW(·) = Whi(θhi∗, Spih(θhi∗))[≥h] + λ(θhi∗) · {WHNi∗ − Whi(θhi∗, Spih(θhi∗))[≥h]}
= (1 − λ(θhi∗)) · Whi(θhi∗, Spih(θhi∗))[≥h] + λ(θhi∗) · WHNi∗
(8)
Subject to,

θhi∗ = f1(θhi∗−, Spih; Whi)[≥h],
Spih = f2(θhi∗; Whi)[≥h].

WHNi∗ is an assigned level of the head's perception of the wellbeing function for the family. It assumes a form, explicit or implicit, through the household IIE process after Ni rounds of discourse. Thus, {WHNi∗ − Whi(θhi∗, Spih(θhi∗))[≥h]} is an adaptive constraint. λ(θhi∗), with 0 < λ(θhi∗) < 1, explains simulative knowledge-induced shifts in the wellbeing index as an attribute of knowledge-induction over IIE processes. Evaluation of the resultant social wellbeing function determines the ethical transformation of the socio-economic order caused by the Islamic choices of goods and services at the household level. The result can be generalized across generations. An extended neurocybernetic perspective has thus been conveyed to a managerial kind of guided decision making in the extended family in terms of its widening organic relations.

Simple manipulation of Eq. (8) yields dSW/dθhi∗ > 0, with all the terms resulting from the differentiation being positive. The magnitude of this positive value is determined by the sign of [WHNi∗(·) − Whi∗(·)]: if this term is positive, dSW/dθhi∗ will be higher than if the term is negative. This means that the effective guidance, governance, and caring attitude of the head of the intergenerational family over the members are pre-conditions for the wellbeing of the family. In turn, such attitudes across all households determine the increased level of social wellbeing. We note that an increase in WHNi∗(·) due to a gain of knowledge derived from organic complementarities within and across family decision making, in concert with the socio-economic order, must remain higher than the similar gain in Whi∗(·). This marks the continuity of the patriarchal family and the caring function of the principal. The conviction of the positive role of spending on the good things of life in social wellbeing is a basis of the principal's motivation concerning family members' wellbeing.
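A minimal numerical sketch of the simulation problem in Eq. (8) is given below. Every functional form used (f1, f2, Wh, the reference level WHN, and λ(θ)) is a hypothetical assumption chosen only to make the circular causation loop runnable; none of these forms is specified in the chapter.

```python
import math

# hypothetical circular causation relations (assumed forms only)
def f1(theta_lag, sp_h, w_h):
    # knowledge-flow recursion from lagged knowledge, spending, wellbeing
    return 0.5 * theta_lag + 0.3 * math.tanh(sp_h) + 0.2 * math.tanh(w_h)

def f2(theta, w_h):
    # spending responds positively to knowledge (assumed form)
    return 1.0 + theta + 0.1 * w_h

def W_h(theta, sp_h):
    # household members' wellbeing index (assumed form)
    return math.log(1.0 + theta * sp_h)

W_HN = 2.0                          # assigned head-of-family reference level
lam = lambda t: t / (1.0 + t)       # 0 < lambda(theta) < 1

theta, w = 0.1, 0.0                 # initial knowledge-flow and wellbeing
for i in range(1, 9):               # i indexes IIE discourse rounds
    sp_h = f2(theta, w)             # spending induced by current knowledge
    w = W_h(theta, sp_h)            # members' wellbeing this round
    sw = (1 - lam(theta)) * w + lam(theta) * W_HN   # Eq. (8) objective
    theta = f1(theta, sp_h, w)      # evolutionary knowledge, next round
    print(f"round {i}: theta={theta:.3f}  Sp={sp_h:.3f}  SW={sw:.3f}")
```

On these assumed forms, SW rises across discourse rounds as θ grows, consistent with the observation dSW/dθhi∗ > 0 above.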
Sustainability of such a family-socio-economic response needs to be continued intergenerationally.

9.1. Important Properties of the Simulation Models of Individual-Family-Market Relations: Widening Social Management Neurocybernetics

We note a few important properties of the above simulation systems. First, the continuous sequencing of IIE-phases explains the dynamic creation of knowledge. Second, the creative evolution of knowledge-flows is determined by behavioral aspects of the model, as explained by IIE-type preference formation. Third, the IIE-nature of individual preferences transmits the same characteristics to the socio-economic variables through household preferences. Hence, the household is seen to be a richly endowed, complex social unit. Fourth, the aggregation of the social wellbeing index from the individual and household levels to the social level is nonlinear, conveying the neurocybernetic feature. Likewise, the simulation constraints are nonlinear. That is because of the continuous knowledge-induction caused by complementarities between diverse variables. Besides, the functional coefficients are knowledge-induced, causing shifts in the wellbeing function and the constraints over learning across IIE processes.

9.2. Inferences from the Contrasting Paradigms

The neo-classical and Islamic socio-economics of the family give contrasting paradigms and behavioral results. Neo-classical economics is built on hedonic preferences. Methodological individualism starts from the behavioral premise that is essentially formed in the household. The variety of households in neo-classical socio-economics manifests intensifying individualistic views by the very epistemological basis of economic and social reasoning. Consequently, the family as a social minuscule also transmits the same nature of preferences and individualism to the socio-economic order. The meaning of interaction is mentioned without substantive content in preference formation. A substantive methodology of the participatory process in decision making is absent in neo-classical economics. Thereby, individualism, linearity, and failure of organic learning in the family system make the neo-classical family devoid of the richly complex management nature of social neurocybernetics.

9.2.1. The neo-classical case

The institutional and policy implications of the above behavioral consequences of the neo-classical family on society are many. Individual rights are principally protected over the rights of the family as an organism. An example is the right of the 18-year-old to date partners over the right of the family to stop him/her from
doing so. Legal tenets are drawn up to protect individual rights in this case. In the market venue, dating clubs and databanks flourish to induce the activity of teen dating. Likewise, such markets that support effective dating activities polish the individual's sensual preferences. These kinds of goods result in segmentation between the ethical goods desired by conservatism and individual preferences. Thus, individual preferences are extended socially.

9.2.2. The Islamic case

The IIE-process nature of decision making in the Islamic family relegates individual rights based on self-interest to family guidance against unethical issues. In all ethical issues, the collective will of the members guides and molds the preferences of the individual members according to the Shari'ah rules. Such rules are inspired within the family discourse by the principal. Individual preferences on dating are replaced by early marriage, which is recommended in Islam. Marriage becomes a moral and social relationship on legal, economic, and political grounds, and is thus a unifying social force. Consequently, goods and services as common benefits replace competing markets. The legal tenets of the Shari'ah prohibit unethical and immoral goods from being consumed, produced, exchanged, or traded. The ethical consequentialism of the marketplace is good for all (Sen, 1985). Hence, such goods mobilize the spending power of the household in the economy through individuals who are established in the unified family environment, realizing the greatest degree of economic growth, productivity, stability, and prosperity. Unethical markets are costly because of their price-discriminating behavior in differentiated markets. Market segmentation is thus deepened.

10. The Head of the Family and Islamic Intergenerational (Grandfathers and Grandchildren) Preference Effects on Household Wellbeing and Its Socio-Economic Effects

Intergenerational generalization of familial decision making along the IIE-process model is tied to the intergenerational extension of {θ, x(θ)} values. Note that time in the intertemporal framework now enters the analysis merely as a datum to record the nature of co-evolution of {θ, x(θ)} values. The substantive effect on unitary decision making is caused by knowledge-flows toward attaining simulated values of W(θ, x(θ)). In the intergenerational nexus of the IIE-process methodology, W(θ, x(θ)) acts as a measure to evaluate the attained levels of unity of knowledge both spatially (family and socio-economics) and intergenerationally. In other words, in the intergenerational familial decision-making model according to the IIE process, the important point to observe is the generation-to-generation (i.e., process-to-process) continuity of the responsible and integrated behavior in the IIE model. A long haul of intertemporal simulation is thus replaced by sequential simulation on a learning-by-doing basis across IIE processes.
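As a hedged formal sketch of this sequential, learning-by-doing structure (the j-indexed notation below is an assumed extension for illustration, not notation introduced at this point in the chapter):

```latex
% Sequential, generation-by-generation simulation (assumed j-indexed
% extension of the chapter's notation, for illustration only):
\operatorname{Simulate}_{\{\theta_j^{*}\}} \;
  W_j\bigl(\theta_j^{*},\, x_j(\theta_j^{*})\bigr), \qquad j = 1, 2, \ldots
\quad \text{subject to} \quad
  \theta_j^{*} = f_1\bigl(\theta_{j-1}^{*},\, x_{j-1}(\theta_{j-1}^{*});\, W_{j-1}\bigr),
```

with each generation j inheriting the simulated knowledge-flow of its predecessor, instead of one long-haul intertemporal optimization.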
With regard to the intergenerational continuity of the Islamic family (grandchildren relations), the Qur'an declares (52:21): "And those who believe and whose families follow them in Faith, – to them shall We join their families: nor shall We deprive them (of the fruits) of aught of their works: (Yet) is each individual in pledge for his deeds." The exegesis of this verse is that ethical bonds enhance intergenerational family ties as the essence of unity of IIE-type preferences guided by the divine law. Furthermore, in such learning processes, the individual's moral capacity interacts with the familial and socio-economic structures. Contrarily, the Qur'anic edict also points out the consequences of the breakdown of familial ties. The Qur'an establishes this rule in reference to the wife of Prophet Lot (11:81–82) and the wife and son of Prophet Noah (11:45–46; 66:10). They were lewd persons and therefore barred from Islamic family communion. Contrarily, even though Pharaoh was the arch-enemy of God, Pharaoh's wife was of the truthful. Thereby, she was joined with the family and community of believing generations. The same is true of the blessed Mary (Qur'an, 66:11–12).

In a formal sense, we now drop the suffixes in Eq. (8) and generalize it for both intra- and inter-generational cases. Equation (8) can easily be symbolized for individuals, households, and heads over j-generations by a further extension with a j-subscript. The method of derivation is similar to Eq. (8). Consider now the following differentiation with respect to the space-time extension of θ values:

dSW/dθ > 0 ⇒ (1 − λ)(dWh/dθ) + λ(dWH/dθ) + (WH − Wh)(dλ/dθ) > 0.  (9)

Since (1 − λ)(dWh/dθ) > 0 and λ(dWH/dθ) > 0 due to the monotonic θ effect, the degree of positivity of Eq. (9) is determined by the sign of (WH − Wh), with (dλ/dθ) > 0 as a shift effect in social wellbeing. If (WH − Wh) > 0, the positive value of dSW/dθ will be higher. Furthermore, from (WH − Wh) we obtain:

(dWH/dθ − dWh/dθ) = (∂WH/∂θ − ∂Wh/∂θ) + [(∂WH/∂Sp) − (∂Wh/∂Sp)] · (dSp/dθ).
(10)
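Before examining the sign of Eq. (10), note that the differentiation behind Eq. (9) can be checked mechanically. The sketch below assumes only SW = (1 − λ(θ))·Wh(θ) + λ(θ)·WH(θ), i.e., Eq. (8) with the suffixes dropped, and verifies that the product rule reproduces the three terms of Eq. (9):

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)
W_h = sp.Function('W_h')(theta)      # members' wellbeing
W_H = sp.Function('W_H')(theta)      # head's wellbeing perception
lam = sp.Function('lamda')(theta)    # knowledge-induced weight, 0 < lambda < 1

SW = (1 - lam) * W_h + lam * W_H     # Eq. (8) with suffixes dropped

dSW = sp.diff(SW, theta)
eq9 = ((1 - lam) * sp.diff(W_h, theta)
       + lam * sp.diff(W_H, theta)
       + (W_H - W_h) * sp.diff(lam, theta))

assert sp.simplify(dSW - eq9) == 0   # Eq. (9) is exactly the product rule
print("Eq. (9) verified")
```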
The sign of Eq. (10) can be positive or negative. In the case of a positive sign, we infer that the heads of the intergenerational families' perception of wellbeing increases more than the members' wellbeing function as knowledge increases in the intergenerational family nexus. Thus, the cumulative result of knowledge, communion, and co-evolution is repeated intergenerationally, that is, from grandfathers to grandchildren. Likewise, socio-economic consequences are similarly co-evolved. Such an ethical function allows the principal to continue on as the acclaimed head, Amir. Furthermore, since Sp(θ) is a positive function of θ, in view of the ethics of the Qur'an that encourage spending on the good things of life, though in moderation, [(∂WH/∂Sp) − (∂Wh/∂Sp)] > 0 due to the effect of increases in θ values on the heads' higher perception of family wellbeing intergenerationally.
Consequently, (dWH/dθ − dWh/dθ) > 0. But also (WH − Wh) > 0 on the basis of the intergenerational wellbeing role of the family heads. From these two relations we obtain the expression

WH = a · (Wh)^b,
a, b > 1.
(11)
a, b are functions of θ and λ(θ), and thereby cause shifts in Wh and WH as the intergenerational IIE processes deepen in family-socio-economic circular causation interrelations.

11. Up Winding: From the Specific to the General Neurocybernetic Model of Social Management

The examples of community-business-economy extensive relationships and the intergenerational, richly complex relations in the extended family, along with their social and economy-wide effects, are examples of social systems that learn, but only under the episteme of unity of knowledge. There is no other way in which these systems can learn. That is, neither the disequilibrium perturbations along evolutionary epistemology, which are of the nature of social Darwinism, nor the methodological individualism of neo-liberalism can establish learning behavior. Such disequilibrium learning behaviors are of conflicting and non-cooperative types. Social meaning cannot be derived from such continuous perturbations. Though optimality is never a feature of the neurocybernetic learning model, learning equilibria do explain purposeful social actions and responses. This is the feature of learning under unity of knowledge. While it is here exemplified by business-community-economy interrelationships and the evolutionary learning dynamics of the coherent family across generations, the inherent model of social management in this neurocybernetic sense of rich complexity, but with social cohesion and order, is applicable to the widest range of social and scientific problems.

The neurocybernetic learning model of social management thus renders a new vision of an orderly process of intercivilizational and global discourse. When taken up at the core scientific level, the same methodology establishes a new way of understanding the scientific phenomenon. It points out the inexorable centricity of morality, ethics, purpose, and values existing as simulated endogenous elements in the scientific constructs of ideas. A neurocybernetic theory of social management thus conveys a reconceptualization of the socio-scientific domain in our age of post-modernism, in which science is increasingly becoming a study of process and social becoming (Prigogine, 1980). It is also a thoroughly empirical and positive exercise towards social reconstruction. The combination of substantive reconceptualization and empiricism, involving morality, ethics, and values endogenously in neurocybernetic models of social management, together conveys the episteme of the new scientific method (Choudhury and Hossain, 2007).

The empirical project within the scientific research program of the neurocybernetic theory of social management is not a crass number-crunching exercise. Rather, it is
one that combines deep analytical reason and the selection of appropriate models that simulate knowledge of unity between all the good things of life. In that perspective, the selected models and methods of emergent empirical analysis are subjected to the appropriateness of the background episteme of unity of knowledge between everything that makes moral and ethical sense in human choices. Even the negation of this, in the realm of individualism as pointed out in this chapter, shares a form of unity among its elements. But this phenomenon can be shown to form bundles of independently evolving entities in their own linearly dissociated spaces. That is, eventually the long-run Darwinian tree of the genesis of rationalistic life breaks up into atomistic, competing, and annihilating point-wise organisms. They create infinitely more replicas of such competing and annihilating organic entities. Social biological atomism is the ultimate destiny (Dawkins, 2006).

In the empirical domain, selected methods that can be used to explain the evolutionary unified dynamics of the neurocybernetic theory of social management can be comprised within the broad and extended field of computational complexity (Gregersen, 1998; www.metanexus.net/tarp). Particular methods that can be used to combine the concept and empiricism of the learning-field idea of the neurocybernetic theory of social management are Complex Adaptive Systems (CAS) and Autopoietic Systems (APS) (Gregersen, 1998; Rasch and Wolfe, 2000). In both of these methods, learning between systemic entities is essential. But the difference between them is that while CAS is sensitized by external environmental synergy, APS extends the sensitized effects to influence organic activity in the embedded systemic entities. Organism interactions are thus broadened.

In the neurocybernetic theory of social management, the focus is not on chaos and disorder between interactions. These are considered social disequilibria that happen because of the failure of entities to pair, cooperate, and complement continuously and across continuums of space, time, and knowledge. When such disequilibrium disorder, or its endogenous bundle of separable movements (as in the dissociated nature of methodological individualism and statistical independence at points of optimality), happens, then extended social disequilibrium happens as well. In the extended sense, such social disequilibrium between the organic interactions causes the same kind of social impetus in the socio-scientific universe around and within local systems. The only exception is to abide in its own self-contained character of isolation caused by methodological individualism. The episteme of unity of knowledge is abandoned. Such disequilibrium models are then simulated to attain semblances of evolutionary equilibrium by the learning systemic dynamics. This transformation process is conveyed by the simulation system of circular causation relations between the entities, variables, relations, and sub-systems. Thus, the neurocybernetic theory of social management transcends sheer computation into social reconstructions according to the balances of pairing between cooperating entities by means of pervasive complementarities between them (Luhmann, 1995). These are issues of transformation and choices in the domain of institutional structures, policy, and the
"wider field of social valuation". The mathematical complement of the unifying experience is methodological individualism, which is triggered by rationalistic behavior. In this chapter, the two cases studied are embodied with complexity, both within themselves and in the extensive sense of learning by widening complementary pairing under the guidance of laws and social behavior. Correction of experiences other than the unifying one, as in the case of neo-liberalism, is implemented to attain the desired reconstructed social realities in accordance with the episteme of unity of knowledge. Community-business relations experience IIE processes within and across them to expand into the economy and the global order with the necessary attenuating transformations and social reconstructions. The extended family relations extend the internal learning dynamics of the Islamic family preferences over the space, time, and knowledge domains. The socially widening consequences are thereby extensive. They are felt and organized to perpetuate the episteme of unity of knowledge and the emergent world-system in the light of the Islamic epistemological practices.

12. Conclusion

The absence of a unique and universal epistemological premise in social management theory, one that can guide and be beneficial and acceptable to most of humanity, remains the basis of global disorder in guidance and governance. The urgency of closing this gap with mutual understanding would be the proper direction for developing a neurocybernetic social management model for the common wellbeing of all (Choudhury, 1996). The quest for this universal and unique epistemological worldview must be both serious and reasoned.

The formalism of this chapter explains that IIE-type preferences, formed in the midst of community-business relations and the family with its extended socio-economic linkages, have important circular causal meaning. Such relations in systemic unity of knowledge play a significant role in the establishment of appropriate markets, rather than leaving market forces to self-interest and consumer sovereignty. The triangular relationship among individuals, the household, and the socio-economic order is continuously renewed and reproduced, giving evolutionary momentum to each of the agencies in this kind of circular causation, both intra- and inter-generationally. It is the same for the community-business dynamics. Hence, a universal neurocybernetic model of social management is configured to explain richly complex social phenomena, taking the episteme of unity of knowledge as a phenomenon of continuously learning systems. In such complex organisms, morality, ethics, and values remain embedded. They are not numinous entities of systems. They are as real and measurable as any cognitive socio-scientific variable. In the neurocybernetic theory of social management systems with the universal epistemological model of unity of knowledge, the systemic endogeneity of morality, ethics, and values can be a distinct
way of formalizing, measuring, and implementing the interactively integrated and dynamic roles of these human imponderables in socio-scientific decision making (Choudhury, 1995). We surmise that on this kind of intellectual thought and its scientific viability rests the future of human wellbeing and socio-scientific global sustainability.

Acknowledgements

This paper was written during the author's research leave at Trisakti University, Jakarta, between May and June 2008. The author thanks the Sultan Qaboos University Postgraduate Studies and Research Department and the College of Commerce and Economics for providing this opportunity.

References

Ansari, J (1986). The nature of international economic organizations. In Political Economy of International Economic Organizations, Ansari, J (ed.), 3–32. Boulder, CO: Rienner.
Arrow, KJ (1951). Social Choice and Individual Values. New York, NY: John Wiley & Sons.
Becker, GS (1974). A theory of marriage. Journal of Political Economy II, 82(2), S11–26.
Becker, GS (1981). Treatise on the Family. Cambridge, Massachusetts: Harvard University Press.
Becker, GS (1989). Family. In The New Palgrave: Social Economics, J Eatwell, M Milgate and P Newman (eds.), 64–76. New York, NY: W.W. Norton.
Brennan, G and J Buchanan (2000). Modeling the individual for constitutional analysis. In The Reason of Rules, Constitutional Political Economy, Brennan, G and J Buchanan (eds.), 53–74. Indianapolis, IN: Liberty Fund.
Choudhury, MA (1995). A mathematical formalization of the principle of ethical endogeneity. Kybernetes: International Journal of Systems and Cybernetics, 24(5), 11–30.
Choudhury, MA (1996). A theory of social systems: Family and ecology as examples. Kybernetes: International Journal of Systems and Cybernetics, 25(5), 21–38.
Choudhury, MA and MS Hossain (2007). Computing Reality. Tokyo, Japan: Blue Ocean Press for Aoishima Research Institute.
Cummings, JF (2006). How to Rule the World, Lessons in Conquest for the Modern Prince. Tokyo, Japan: Blue Ocean Press, Aoishima Research Institute.
Dawkins, R (2006). The God Delusion. London, England: Transworld Publishers.
Gregersen, NH (1998). Competitive dynamics and cultural evolution of religion and God concepts. www.metanexus.net/tarp
Gregersen, NH (1998). The idea of creation and the theory of autopoietic processes. Zygon: Journal of Religion & Science, 33(3), 333–367.
International Monetary Fund (1995). Our Global Neighbourhood. New York, NY: Oxford University Press.
Jackson, MC (1993). Systems Methodology for the Management Systems. New York, NY: Plenum Press.
Luhmann, N (translated by J Bednarz Jr and D Baecker) (1995). Social Systems. Stanford, CA: Stanford University Press.
Machiavelli, N (translated by D Donno) (1966). The Prince. New York, NY: Bantam Books.
Minogue, K (1963). The Liberal Mind. Indianapolis, IN: Liberty Fund.
Mommsen, WJ (1989). Max Weber on bureaucracy and bureaucratization: Threat to liberty and instrument of creative action. In The Political and Social Theory of Max Weber, Mommsen, WJ (ed.), 109–120. Chicago, Illinois: The University of Chicago Press.
Prigogine, I (1980). From Being to Becoming. San Francisco, California: W.H. Freeman.
Rasch, W and C Wolfe (2000). Observing Complexity: Systems Theory and Postmodernity. Minneapolis, Minnesota: University of Minnesota Press.
Sardar, Z (1988). Islamic Futures, the Shape of Things to Come. Kuala Lumpur, Malaysia: Pelanduk Publications.
Sen, A (1985). The moral standing of the market. In Ethics & Economics, EF Paul, FD Miller Jr and J Paul (eds.). Oxford, England: Basil Blackwell.
Singer, H and JA Ansari (1988). The international financial system and the developing countries. In Rich and Poor Countries, Singer, H and JA Ansari (eds.), 269–285. London, England: Unwin Hyman.
Sklair, L (2002). Transnational corporations and capitalist globalization. In Globalization, Capitalism and Its Alternatives, Sklair, L (ed.), 59–83. Oxford, England: Oxford University Press.
Sztompka, P (1991). The model of social becoming. In Society in Action, the Theory of Social Becoming, Sztompka, P (ed.), 87–199. Chicago, Illinois: University of Chicago Press.
Biographical Notes

Prof. Masudul Alam Choudhury's areas of scholarly interest are diverse, focusing on the epistemological treatment of mathematical models in Islamic political economy and the world-system. They span from hard-core economic and finance areas to philosophical issues. He derives the foundational reasoning from the Tawhidi (divine unity of knowledge in the Qur'an) worldview in terms of the relationship of this precept with diverse issues and problems of the world-system. The approach is systems- and cybernetics-oriented, addressing general systems of complex and paired circular causation relations of explanatory and parametric variables. Professor Choudhury's publications are many and diverse. The most recent ones (2006–2008) are five volumes on Science and Epistemology in the Qur'an (The Edwin Mellen Press), each volume differently titled. The Universal Paradigm and the Islamic World-System was published by World Scientific Publishing in 2007. In 2008, he published Computing Reality, coauthored with M. Shahadat Hossain, with Aoishima Research Institute, Japan. There are many more. He has also written many articles for international refereed journals. Professor Choudhury is the International Chair of the Postgraduate Program in Islamic Economics and Finance at Trisakti University, Jakarta, Indonesia.
Chapter 11
Systematization Approach for Exploring Business Information Systems: Management Dimensions

ALBENA ANTONOVA
Faculty of Mathematics and Informatics, CIST, 125 Tzarigradsko Chaussee, bl. 2, fl. 3, P.O. Box 140, 1113 Sofia University, Bulgaria
[email protected]
Today, business information systems (BIS) have become an umbrella term that indicates much more than just a main business infrastructure. Information systems have to enhance the capacity of knowledge workers to enable business organizations to operate successfully in complex and highly competitive environments. Despite the rapid advancements in technology and IT solutions, the success rate of BIS implementation is still low, according to practitioners and academics. The effects of IT system failures and delays can be disastrous for many companies, possibly leading to bankruptcy, lost clients and market share, and diminished competitive advantage and company brand, among other things. The study of system science gained impetus after World War II, suggesting a new way of studying complex organisms and their behavior. Investigating parts of the whole is not enough if one is to understand the complex functions and relationships of a system. Business organizations are often examined through a number of their elements and sub-systems: leadership and governance, marketing and sales systems, operating systems, IT systems, financial and accounting systems, and many other sub-systems. However, behind every sub-system stand human beings — the employees who personalize every business process in order to express their unique approach to delivering value. This intrinsic element of the business organization — its human capital — is often underestimated when "hard" issues like information systems are introduced. Systematization proposes an approach to the study of BIS within its complex environment, considering it as an integral element of organizational survival. Planning BIS is a substantial part of a company's strategy to succeed while capturing, analyzing, and reacting to information acquired from the environment, combining it with knowledge of internal processes, and exploiting it to give customers better value.

Keywords: System theory; business information systems; business organisations.
1. Introduction

Nowadays, business information systems (BIS) have transformed into powerful and sophisticated technology solutions, vastly different from the standardized off-the-shelf software products designed to fulfill some operational business functions.
In this ever-changing and complex global environment, information technologies have become increasingly important for organizational survival, exceeding simple program applications. The way organizations produce, sell, innovate, and interact with a global and complex environment has changed. The business paradigm has shifted to more complex global organizational structures and interlinked systems. The Internet and information technologies have linked all businesses — there are no longer small or big businesses, only connected and disconnected businesses. Companies are divided into businesses that are in the global economy and businesses that still survive outside it. Today, the world is more connected and complex than before. While value creation and competitive advantage are still at the center of any business strategy, global competition is becoming increasingly severe. Technology has contributed to the intensification of global commerce and global trade. It has allowed for the downsizing and flattening of organizations through the outsourcing of production departments and back offices to low-wage countries. BIS have made it possible for diversified teams to collaborate and work remotely, bringing together experts from all around the world. BIS allow for the design of complex inter- and intra-organizational networks and business systems, such as various e-business suites and portals. As Laudon and Laudon (2006) point out, the emergent combination of information technology innovations and a changing domestic and global business environment makes IT in business even more important for managers than just a few years ago. Laudon and Laudon (2006) further enumerate the following factors concerning the growing impact of information technologies on the business organizations of today: Internet growth and technology convergence, transformation of the business enterprise and the emergence of the digital firm, and the growth of globally connected, knowledge- and information-intensive economies.
1.1. Internet Growth and Technology Convergence

When considering Internet growth and technology convergence, we should think about how Web 2.0 technologies and the increasing dimension of social networks are changing the role and place of information technologies in our organizations and society as a whole. According to Alexa Internet global traffic rankings (Labrogere, 2008), 6 out of 10 of the most visited websites in 2007 were not in the top 10 in 2005, and all of them are Web 2.0 services (YouTube, Windows Live, Facebook, Orkut, Wikipedia, hi5). This gives an idea of the sound trend that Web 2.0 represents. The author further points out the emerging Com 2.0 concept, which applies Web 2.0 paradigms to the communication sphere and communication services, allowing users to move from a fixed to a mobile environment. This comes as an example of the dynamics of the IT sector, which is still transforming our lives, becoming smarter and more invisible, integrated into the
various products and applications around us. Information technologies are developing fundamentally, constantly improving and enhancing their capacities, and this should be taken into account when designing and implementing the next generation of BIS.

1.2. Transformation of the Business Enterprise and Emergence of the Digital Firm

The transformation of business organizations and the emergence of the digital firm suggest that BIS should aim not only to deliver separate solutions or workflow automation. As described in Lufa and Boroacă (2008), the modern enterprise offers a significant variety of services; it is adaptable to internal and external factors, and the microeconomic decision depends on the alternative possibilities of the market and the uncertain aspects of demand that are more and more difficult to predict. BIS have to be designed to deliver real intrinsic value to changing business organizations and to limit the risks. Some of the main goals of a "smart" management system, according to Lufa and Boroacă (2008), are to reduce risk, to stimulate creativity, and to make people more responsible in the decision process. Decisions in modern organizations can be considered a process of risk reduction, because they are based on information, experience, and ideas that come from many different sources and that can be accessed and used through BIS in an efficient, anticipatory way (Lufa and Boroacă, 2008). There exist a large number of successful business models manifesting how BIS can be transformed into a company profit unit (selling services to other departments and clients), outsourced (in whole or in part) to other companies (like server farms), or turned into a strategic instrument for sustainable development (as in e-commerce and e-business). Many examples of digital business models depict how IT influences innovation and company profitability. Nowadays, the impact of BIS depends increasingly on the way the company formulates its sustainable strategy.

1.3. Growth of a Globally-connected, Knowledge- and Information-intensive Economy

The recent banking and financial crisis (September–October 2008) has highlighted the extent to which world economies are more integrated and interrelated than ever thought. Within hours, the effects spread to stock markets all around the world, and stock prices fell globally, leading to a sharp decrease in all stock exchange indexes. The world economy is becoming smaller, more complex, and very dynamic. So, this imposes a new paradigm for BIS — to guide businesses in dangerous and hostile environments and to map the road for further development and success. On the other hand, during the last years, the global economy has shifted towards a service- and knowledge-oriented economy. The GDP in most developed countries is generated increasingly by the knowledge-intensive service sector. There are
assumptions that the service sector accounts for more than 80% of national income in most developed countries (Uden and Naaranoja, 2008). Services and their specific features are at the center of our society today. Services are intangible goods that are much more complicated to deliver, sustain, and develop than mass products. Services are usually knowledge-intensive and demand better knowledge and information management and its integration into organizational business processes.

1.4. Emergent Characteristics of BIS

From the trends discussed above, the following conclusions have emerged: the constant evolution of the Internet and IT technologies, global and interlinked complex business organizations, and an increasingly knowledge- and service-oriented global economy. Summarizing these ideas, we should consider BIS at the level of the services they should provide to the organization and business. All BIS users, whatever their roles, are not interested in specific technical tools — users want to get information and knowledge from the system or, better still, to adapt and personalize the system according to their specific and momentary needs. Users want to perform better while working, communicating, and entertaining themselves. BIS should enable companies to respond to this challenge and provide customized, personalized, and customer-oriented services. The main focus of BIS design is not on some specific functions and features of BIS technologies, but on the "integrated services and intrinsic value" that they could provide their users. In most cases, in order to perform a task well, IS should deliver not only information but rather meaningful knowledge, transforming bytes and data into appropriate answers to complex and detailed business problems. BIS have to change from reliable and open networks to large data warehouses and content management solutions. As described in a number of sources, Web 2.0, via various Internet applications, has evolved to link and provide a communication platform for user-friendly services and for internal and external sources of knowledge. Web 2.0 and social networks provide a unique platform and technological tools to allow everybody to freely express his or her individuality. As Buckley (2008) states, Web 2.0 sets users free from a closed set of navigational and functional options, and thus from "normal" tasks and ways of interacting. Users exercise great freedom to label, group, and tag things, and therefore to carve up and contemplate the world their way. Plenty of meaningful and important messages are hidden in personal blogs, image galleries, video-sharing programs, e-newspaper forums, and comments sections. But is it possible for all these rough and out-of-the-box ideas, messages, and pieces of information to be processed adequately and in a timely manner to produce meaningful knowledge, indicating the trends and directions for development? BIS have to respond to this challenge, enabling and facilitating the company's access to meaningful knowledge inside and outside the organization — past and current databases of best practices, business models,
business processes, and projects. BIS should enhance knowledge workers' and knowledge-intensive companies' ability to survive by capturing and adapting to still-invisible signals from the environment in order to deliver better services and products to their clients.

2. Research Methodology of Complex Systems Exploration

To gain a better understanding of the challenges facing BIS, a thorough analysis will be made of the application of system thinking to BIS. A large number of manuscripts and authors explore various aspects of complex systems, but our research methodology will focus on a review of the social dimensions of system theory, referring to the main system components, elements, and characteristics. The aim of the research is to identify the grounding theoretical aspects of system theory and to present the main challenges for BIS development and its changing role within dynamic business organizations. The first few sections provide a short review of both research methods — the analytical and systematization approaches. Basic definitions are then presented, deepening our understanding of complexity. Section 3 compares Gharajedaghi's (1999) views of organizational systems with the evolution of BIS. General system theory and the main characteristics of systemic thinking according to some of the most popular authors are further discussed and presented. A table with the main system characteristics is developed, and a proposal for a systematization process is presented at the end of the chapter.

2.1. Analytical Method

A traditional analytical approach describes the cognitive process as consisting of analysis and synthesis. The analytical viewpoint is employed when the emphasis is put on the constituting elements or components (Schoderbek et al., 1990). The analytical approach is defined as a process of segmenting the whole into smaller parts to better understand the functioning of the whole. Due to the limited capacities of the finite human mind, this is an appropriate scientific method to exhaust any subject by breaking it down into smaller parts. By examining all these parts thoroughly, one is believed to be able to attain a better understanding of the individual aspects of the subject. The process is completed by a final summary, or synthesis, putting together all the parts of the whole. Although this technique had been applied in the study of mathematics and logic since before Aristotle, analysis as a formal concept was formulated only much later. The method is thoroughly described in "Discours de la Méthode" by René Descartes (1596–1650), and has since been identified with the scientific method. There are many areas of knowledge where this approach yields good results and observations. Indeed, many of the laws of nature have been discovered with this method (Schoderbek et al., 1990).
One of the main characteristics of the analytical approach is that the elements are independent. Analyzing a simple mechanical system in this way is both possible and appropriate. This approach suggests that the whole can be divided into elements or parts, which are then thoroughly studied and described, and then summarized again. The environment is passive, and the created system is merely closed, rigid, and stable. Any mechanical system (transaction-processing systems (TPS), for example) can be thoroughly researched via the analytical method, as such systems are closed and limited, with little interdependence between their constituting elements. However, the new technologies have evolved and become smarter and harder to capture by a simple summary of business applications, as stated in Laudon and Laudon (2006). The further evolution of IS, and the impossibility of predicting and adapting to the next technology shift, make us very careful when employing this approach to study BIS.

2.2. Systematization Approach

The system approach contrasts with the analytical method. It aims to overcome the limitations of the analytical approach, identifying many complex social, economic, and living organisms that cannot be explored and understood via the analytical framework. The notion of "system" gives us a holistic approach to study and research the object as a whole, because the mutual interactions of its parts and elements and their interdependencies create new, important, and distinctive properties possibly absent in any of its parts or elements (Schoderbek et al., 1990). Today our world represents an organized complexity, defined by its elements, their attributes, the interactions among the elements, and the degree of organization inherent in the system (Schoderbek et al., 1990). Whatever economic, natural, political, or social organization we envisage, we always talk about systems. As Dixon (2006) further points out, the universe is itself a system made up of many subsystems — a hierarchy of nested systems: "In any context, it is infeasible to attempt to reach a comprehensive understanding of all things. Even the global system contains too many variables to achieve good understanding of all the possible actors and interactions. It is necessary, therefore, to focus the study of a system and its context to that which is feasible: to frame the system in such a way that an understanding is possible. Once a system is framed, it is possible to explore the interactions within that portion of the greater system, keeping in mind that the system frame is an artificial construct . . ." (Dixon, 2006).
Companies and business organizations need to gain a timely and deep understanding of what is happening outside and inside the organization. The volume of information produced daily keeps increasing and accelerating, as the global environment transfers much more information than before. Nowadays, technologies give billions of people the opportunity to become content creators: writing blogs, taking part in professional forums, or sharing images and video with their friends. The information load has been increasing tremendously in recent years, and this is expected to
continue. Often companies miss opportunities or cannot identify trends in time due to the increased complexity of the messages coming from the environment. Considering BIS as service-oriented technologies, we should focus on the BIS’s functions and capacities to move beyond a mechanistic system view and transform into a simultaneously evolving, adapting, and expanding flexible organizational “nervous system.” This means that BIS should transform into intelligent networks of references and alerts, allowing users to get fast access to needed information as well as to further reading and details, links to experts, and other actors within and outside the organization.

2.3. Some Definitions

The generally accepted definition of “system” states: “A system is a set of objects together with relationships between the objects and between their attributes, related to each other and to their environment so as to form a whole” (Schoderbek et al., 1990). Laudon and Laudon (2006) provide a technical definition of the “information system” as “. . . a set of interrelated components that collects (or retrieves), processes, stores, and distributes information to support decision-making and control in an organization. . . .” They further note that an IS supports decision making, coordination, and control, enabling managers and workers to analyze problems, visualize complex subjects, and create new products. A “service system,” as defined by IBM (2007), is the “. . . dynamic, value co-creating configuration of resources, including people, technologies, organizations and shared information (language, laws, measures, and methods), all connected internally and externally by value propositions with the aim to consistently and profitably meet the customer’s needs better than competing alternatives. . . .” Another definition, proposed by Coakes et al. (2004), states that a system is more than a simple collection of components, as properties “emerge” when the components of systems are combined. The authors describe systems as determined by their boundaries and as displaying properties such as emergence and holism, interdependent hierarchical structures, transformation and communication, and control. When considering the nature and properties of any system, care should be taken when looking at the components of the system in isolation. These parts, or subsystems, interact, or are “interdependent,” and so need to be considered as a whole or “holistically.” In addition, there is likely to be a discernible structure in the way subsystems are arranged — in a hierarchy. Finally, there needs to be established communication and control within the system, and it has to perform some transformation process (Coakes et al., 2004). BIS should be designed and integrated as whole organizational systems, not considered merely as a sum of processing functions, databases, and networks. The system approach lets us think about BIS as an ever-evolving system, representing one of the major and most important (together with human resource
systems) subsystems of the organization. Information systems should not be considered a subsidiary, cost-based function of the organization that merely accounts for and records past data. BIS technologies can provide the company with the tools, understanding, and vision to improve the service and value that it delivers to its customers. Systems theory focuses on complexity and interdependence. By doing so, it tries to capture and explain phenomena and principles of different complex systems using consistent definitions and tools. It has a strong philosophical dimension, because it yields unusual perspectives when applied to the human mind and society. It is important to mention that systems theory aims at illustrating and explaining interrelations and connections between different aspects of reality, not at the realization of systems.

2.4. Complexity

Complexity is all around us. A system is said to be complex if the variables within the system are interdependent (Dixon, 2006). The links (relationships) between variables ensure that a change (an action or event) in one part of the system has an effect on another part of that system. This effect, in turn, may be the causal event or action that affects some other variable (or multiple variables) within the system. Durlauf (2005) defines complex systems as “comprised of a set of heterogeneous agents whose behavior is interdependent and may be described as a stochastic process.” Complexity involves an increasing number of interacting elements affecting and influencing the system. In a system where new entities enter daily and other entities disappear, gaining a clear understanding of the structure of the system is extraordinarily difficult (Dixon, 2006). Complexity, however, does not simply concern the system’s structure. The focus is not on the quantity of actors and components within the system but on the quality of its relationships (Dixon, 2006). Systems possessing a great number of agents with different components and attributes are complicated, but not necessarily complex. Today we face a complex environment full of interdependent variables, constantly changing subsystems, and newly appearing and disappearing elements and attributes. Understanding and managing systems has emerged as a key challenge for organizations because of the increased interaction between various agents within the system and with the complex environment. BIS should provide us with sufficient capacity and the instruments to understand, obtain knowledge from, and cope successfully with increasing uncertainty and risks in an ever-changing global environment. Metcalfe (2005) points out that from complex environments emerge changes that are, by definition, impossible to fully appreciate. An organizational structure is only reasonable if it is appropriate to its environment. If the environment changes, then the organization needs to change. Re-organization can be seen as reconnecting people, resources, and technologies in line with the new environment. If the
changes are rapid or the organizational structure is complex, a central administration often has neither the communications capacity nor the technical expertise to provide hands-on coordination of all these reconnections. Attempts to do so are likely to cause bottlenecks in information flows and prevent an appropriate response. Therefore, Metcalfe (2005) proposes a vision of knowledge-sharing networks that anticipate environmental change. BIS should provide the information and communication that identify those changes, and then assist with the process of reorganization.

3. Evolution of Social Systems and Organizations

To better understand the challenges and complexity of evolving business organizations, we will briefly review the evolution of business organizational systems as presented by Gharajedaghi (1999). In his book Systems Thinking: Managing Chaos and Complexity, the author describes three system models of an organization. He develops these models according to the idea that one should think about something “similar, simpler, and familiar” in order to understand complex systems. In examining organizations as evolving systems, we will use the analogy to better comprehend BIS.

3.1. Mechanistic View — Organization as Machine

The first model describes the idea of the mindless system: the “mechanistic view” that became a widely accepted concept after the Renaissance in France. The popular vision that the universe is a machine working according to its structure and the causal laws of nature gave birth to the Industrial Revolution. The mechanistic view was transposed to the birth of organizations — structures of people organized around the principle that everyone performs only a simple task, as a mechanism within a complex machine. The machine model supposes that the organization is a simple system: a tool with a function defined by the user and no purpose of its own. The organization is regarded mainly as an instrument for the owner to make a profit. The only important attribute of this system is its “reliability,” evoking other characteristics such as control, tidiness, efficiency, and predictability. The structure of the mechanistic system is designed to be stable and unchanging. This type of organization can operate effectively in a stable environment, or if it has little interaction with the environment (Gharajedaghi, 1999). The mechanistic view is fully applicable to current IT applications and BIS. IT systems are still defined as tools that perform pre-determined functions, designed to operate in a closed, stable, and finite environment. BIS have to be designed as reliable, controllable, efficient, and predictable information systems. But nowadays, changes in business organizations and their environments have accelerated, and new technologies keep evolving. One of the main challenges within organizations today is the successful integration and interoperability of
different existing information systems. But is it possible to predefine various mechanistic systems to merge and provide sophisticated solutions to complex queries and questions? The present shows that businesses need integrated BIS that span the whole organization and communicate with a complex environment, while performing non-linear functions.

3.2. Biological View — Organization as Living System

Gharajedaghi’s second model is the uniminded system, or a biological view of the organization. The uniminded system considers the organization as a living system that has the clear purpose of survival. In the biological analogy, living systems can survive if they grow and become stronger, exploiting their environment to achieve a positive metabolism. In this type of system, growth is the measure of success, and profit is only a means of achieving it. It is important to stress that, in contrast to the mechanistic organization, the biological organization views profit only as a means toward success. This is the model of multinational conglomerate organizations that tend to expand and explore new market opportunities while achieving economies of scale. Although the uniminded system has choice and can react freely, its parts do not — they operate on cybernetic principles as a homeostatic system, reacting to information as thermostats do. The parts of the whole react in a predefined manner and do not have a choice. In living organisms, the parts of the body work in coherence, without consciousness or conflict. The operation of the uniminded system is completely controlled by a single brain, and execution malfunctions only when there are problems with communication or information channels. The main idea is that the elements of the system have no choice, so no conflicts appear in the system. As long as “paternalism” was the dominant culture and imperatives like “father knows best” were appropriate ways to resolve conflicts, uniminded organizations functioned successfully (Gharajedaghi, 1999). Can we imagine a similarity between BIS and uniminded living systems? In principle, BIS tend not to survive on their own, as they are static and do not expand into the environment. Most importantly, they do not possess an autonomous purpose. People are still horrified by the independent robots of science fiction — humanoid machines processing and acting like living organisms. Employing a common metaphor, we may compare BIS to the nervous system within a living organism. The nervous system is a very complex and important subsystem, allowing the fast gathering and understanding of signals and messages, observing the environment, transmitting information around various subsystems, and coordinating actions in different parts of the organism. The nervous system controls the activities of the brain and the whole body. In the same way, BIS enable the flow of information within the organization and support information and knowledge management and subsystem coordination. However, uniminded systems tend not to evolve, as they are limited to their predefined purpose.
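The thermostat analogy above can be made concrete in a few lines of code. The following is a minimal illustrative sketch, not drawn from Gharajedaghi: the setpoint, heater power, and cooling rate are invented parameters, and the bang-bang rule stands in for any homeostatic part that reacts to information in a predefined manner, without choice.

```python
# Minimal sketch of a homeostatic (cybernetic) part: a thermostat reacting to
# information in a predefined manner, with no choice of its own.
# All parameters are invented for illustration.

def thermostat(temperature, setpoint, heater_on, hysteresis=0.5):
    """Bang-bang rule: switch on below the band, off above it."""
    if temperature < setpoint - hysteresis:
        return True
    if temperature > setpoint + hysteresis:
        return False
    return heater_on  # inside the band: keep the current state

def simulate(setpoint=20.0, outside=5.0, steps=40):
    temperature, heater_on, trace = 12.0, False, []
    for _ in range(steps):
        heater_on = thermostat(temperature, setpoint, heater_on)
        # Newton-style drift toward the environment, plus heat when on.
        temperature += 0.1 * (outside - temperature) + (2.0 if heater_on else 0.0)
        trace.append(round(temperature, 1))
    return trace

print(simulate())  # the temperature oscillates in a narrow band around the setpoint
```

Each part of a uniminded system behaves like this controller: its reaction to any input is fixed in advance, so coherence requires only working communication channels, exactly as described above.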
3.3. Multiminded View — Organization as Sociocultural System

An example of the third-level, multiminded system is the social organization. The sociocultural view considers the organization as a voluntary collective of purposeful members, who may choose their ends and means. The behavior of this system is much more complicated and unpredictable than in the previous two models. The purposeful organization is a much deeper concept than the goal-oriented organization. In a social system, the elements are information-bonded. The elements are connected via a common culture, and compromise is one of the main methods for managing it. As Gharajedaghi further defines them, business organizations are complex sociocultural systems, representing voluntary associations of purposeful members who have come together to serve themselves by serving a need in the environment. Sociocultural systems are held together by information. Communication flows maintain the bonds among individuals and between the organization and its members. Nowadays, organizations and systems are becoming increasingly interdependent. At the same time, their elements and parts tend to be more autonomous, exercising choice and behaving independently, less predictably, and less programmably. BIS should support these processes, connecting and facilitating information- and knowledge-sharing inside and outside the organization and its systems. On the other hand, BIS have to enable a coherent culture and purposefulness that will bring together all the independent and self-directed elements. The recent evolution of Internet facilities and technologies enabling social communication, such as wikis, personal blogs, social networks, groupware, and forums, has still not been thoroughly researched in an organizational context. One important development is the transformation of passive information users into active content providers. By expressing themselves, people become members of specific Internet communities, which in turn are self-directed and self-organized sociocultural systems. Many organizations have adopted and encouraged the emergence of virtual communities of practice (CoPs), representing virtual meetings of experts working on the same problems within different parts of the organization. These CoPs can include employees and managers from different parts of the organization, as well as customers, suppliers, or external experts, to achieve better results on an identified problem. Social networking tools within organizations can become a powerful instrument for forming a common culture and understanding among self-directed, purposeful individuals.

3.4. Summary

In his book, Gharajedaghi (1999) presents three views of organizational systems. In examining the evolution of corporate organizations, the author provides considerations that we have applied to BIS. After reviewing the mechanistic approach, we may conclude that BIS still belong to the class of mechanistic or machine systems.
They are simply designed to act as tools (information tools) and can be characterized as controllable, stable, reliable, and efficient organizational instruments. Another aspect of BIS is that they can be compared to an organizational nervous system, providing signals and information and coordinating all the organizational processes around it. If an organization is comparable to a living organism, then BIS should enable its vision and hearing, its movements and reactions. Lastly, organizations tend to be sociocultural systems, composed of independent, purposeful members. BIS should evolve further along these dimensions, enabling companies to respond to the increasing interest of users and employees in connecting and interacting virtually.

4. Models for System Exploration

4.1. General System Theory

General system theory appeared as an attempt to explain the common principles of all systems in all fields of science. The term derives from von Bertalanffy’s book titled General System Theory (GST). His intent was to use the word “system” to describe the principles that are common to all systems. He writes: . . . there exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or “forces” between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general. . . .
The GST formulates the “systems thinking” approach through 10 tenets (von Bertalanffy, 1974) (Fig. 1):

1. Interrelationship and interdependence of objects and their attributes — independent elements can never constitute a system.
2. Holism — the system is studied as a whole, not divided or analyzed further.
3. Goal seeking — systemic interaction must result in some goal or final stable state.
4. Inputs and outputs — in a closed system, inputs are determined once and constant; in an open system, additional inputs are admitted from the environment.
5. Transformation of inputs into outputs — the process by which the goals are obtained.
6. Entropy — the amount of disorder or randomness present in any system (a quantitative illustration follows this list).
7. Regulation — a feedback mechanism is necessary for the system to operate predictably.
8. Hierarchy — complex wholes are made up of smaller subsystems.
9. Differentiation — specialized units perform specialized functions.
10. Equifinality — alternative ways of attaining the same objectives (convergence).
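Most of these tenets are qualitative, but tenet 6 has a standard quantitative form. The sketch below is our own illustration, not part of von Bertalanffy’s text: it uses Shannon’s information-theoretic measure, with invented probability distributions, to show how the disorder of a simple four-state system can be computed.

```python
# Illustrative computation of Shannon entropy, one standard way to quantify
# tenet 6 (disorder or randomness). The distributions are invented examples.
from math import log2

def entropy(probabilities):
    """H = -sum(p * log2(p)) in bits; zero-probability states contribute 0."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

ordered = [1.0, 0.0, 0.0, 0.0]       # one state is certain: no disorder
uniform = [0.25, 0.25, 0.25, 0.25]   # maximal disorder for four states

print(entropy(ordered))  # 0.0 bits
print(entropy(uniform))  # 2.0 bits
```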
Figure 1. Characteristics of GST and BIS.
4.2. Characteristics of Systematic Thinking

To gain a better understanding of the system approach, we will present and compare some of the main characteristics of systematic thinking. The integrative systems approach is the basis for much research on BIS and management support systems (Clark et al., 2007). Yourdon (1989) discussed the application of the following four general systems theory principles to the field of information systems:

• Principle 1: The more specialized or complex a system, the less adaptable it is to changing environments.
• Principle 2: The larger the system, the larger the amount of resources required to support that system, with the increase being nonlinear.
• Principle 3: Systems often contain other systems, and are in themselves components of larger systems.
• Principle 4: Systems grow, with obvious implications for Principle 2.

The following subsections present in detail the five basic considerations concerning systematic thinking proposed by Churchman (1968), which can together be read as a template for describing any system (a minimal sketch follows the list):

1. Objectives of the whole system
2. The system’s environment
3. The resources of the system
4. The components of the system
5. The management of the system
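Before examining each consideration in turn, it may help to see how the five together could frame a concrete BIS description. The record below is our own illustration, not Churchman’s: the field names mirror his five considerations, and the entries for a hypothetical order-management BIS are invented.

```python
# Illustrative record mirroring Churchman's five considerations, filled in
# for a hypothetical order-management BIS. All entries are invented.
from dataclasses import dataclass

@dataclass
class SystemDescription:
    objectives: list[str]   # measurable ends toward which the system tends
    environment: list[str]  # what lies outside the system's boundaries
    resources: list[str]    # means available inside or outside the system
    components: list[str]   # activities that realize the objectives
    management: list[str]   # planning and control of all the above

order_bis = SystemDescription(
    objectives=["confirm 99% of orders within one minute"],
    environment=["customers", "suppliers", "regulators", "the Internet"],
    resources=["sales staff", "ERP licences", "partner know-how"],
    components=["capture order", "check stock", "invoice", "report"],
    management=["quarterly plan review", "exception dashboards"],
)
print(order_bis.objectives)
```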
4.2.1. Objectives

The objectives represent the ultimate goals or ends toward which the system tends. Objectives should be measurable and operationalized: they have to be defined by measures of identifiable and repeatable operations. Objectives should determine the system’s performance and effectiveness. For mechanical systems, objectives can be determined easily, while for human systems this is not always true. All systems have some main objectives, and these can be described as the main direction for further development. BIS are strategic tools, as new technologies change the way people think, work, and live. Technologies are now expected to be everywhere: embedded in various applications, invisible but reliable and stable, and providing a complex and integrated service. Regardless of the type of organization, information technologies are one of the main strategic tools for any further organizational development. Information systems assist companies in all internal and external business processes, facilitating the main business functions and providing additional value to the business model. There are a number of ways and business models by which an organization sells, buys, produces, or delivers a service, assisted or facilitated by information technologies. BIS are designed to provide a service — to store information, to perform calculations, to model a simulation, to deliver a message. Even more, BIS can sell your product, place an automatic order with your supplier, deliver timely information about the production phase in a remote office, deliver a payment, and find a specific record from the past using a single keyword. BIS are mobile, integrated, complex, and evolving, securing access to specific information and resources. In order to properly design and assess BIS, one should decide what their main objectives are — or, more specifically, what services the BIS shall provide to its users, to the organization, and to the environment.

4.2.2. Environment

The environment represents everything outside the boundaries of the system. Churchman (1968) further identifies two main features of the environment — control and determination. Control examines how much influence the system has on its environment. In BIS language, this can be interpreted as how much information and valuable knowledge is produced and transmitted to the environment. The second feature, determination, evaluates how the environment affects the system’s performance (Fig. 2). The environment is becoming the main source of instability and of competition for value creation. Nowadays, the environment is a rich source of information and knowledge, as well as of threats and competition coming from an increasing number of active agents. Active systems have to attain an appropriate level of understanding of the various factors affecting future business development and complexity.
Figure 2. Environment and Internet emerging as stand-alone and increasing external factor.
BIS should enable companies to identify, receive, and process messages coming from the environment. With new technologies, the access to and availability of information have changed, and the complexity of information sources is tremendous. All information users are at the same time information providers. Blogs, wikis, social networks, Web 2.0 technologies: all media encourage personal involvement, comments, links, and active feedback. An increasing amount of text, video, sound, and images is created every day and published on the Internet. Is our organization missing something important? How do we cope with and process information from one very distinct, complex, special, and ever-growing environment — the Internet? We live in an information-rich but knowledge-poor environment. Organizations usually lack the time, the methodology, and the systematic approach to cope with information coming from the environment and to transform it into valuable knowledge, embedding it afterwards in innovations, products, and services. BIS should enable organizations to systematize and better understand the complexity of the environment.

4.2.3. Resources

Resources are the instruments and means available to the system for executing its goal or objective. Resources can be either inside the system (employees) or outside it (external collaborators), and they represent all the complex material and intangible assets (know-how, strategic alliances, etc.) that the system can process further. One important feature that has to be examined within the system is non-utilized resources: lost opportunities, unrecognized and unexploited challenges, and the lack of appropriate management of the resources. It should be emphasized that in the knowledge-driven economy, people become the main organizational resource.
Nowadays, people have become self-directed agents, recognized for their unique combination of theoretical background and acquired experience. Empowering people to perform better (and not wasting their effort, time, and knowledge) can enable the company to perform better and to improve the value of the services offered to clients. BIS should enhance people’s performance in creating, processing, storing, searching, and transmitting information and knowledge. Information technologies now have to be constructed in ways that improve human performance. As stated in Kikuchi (2008), in the future we should expect the age of “prosumers” (playing the roles of producer and consumer at the same time). This will be true of both B2B and B2C businesses. As companies work to shape business processes and workflows around this new concept, BIS will have a major influence.

4.2.4. Components

Components are all those activities that contribute to the realization of the system’s objectives. By components, Churchman means the tasks that the system must perform to realize its objectives. When exploring BIS, the components of the system (or system activities) should be clearly defined in terms of services (what service the system should provide) and knowledge-processing activities (how knowledge is created, stored, processed, and distributed within the system). BIS should deliver meaningful, value-adding services, adding context, personalization, and knowledge to serve their users better. The service approach has various important implications. Service values are determined by capricious and unstable customers and markets, and they change constantly. It is more crucial than ever that a company’s activities and processes enable a wide variety of outside business partners and collaborators, as well as markets and customers, to actively capture market and customer changes (Uden, 2008).

4.2.5. Management

Under system management, Churchman (1968) identifies two basic functions — planning and control. In systems architecture, the system design phase is one of the most important for the system’s overall functioning and success (Dixon, 2006). Management consists of systematized activities to design the system, and to plan and control the determined constraints. Planning the system involves all the system’s aspects — the determination of its goals, environment, resources, and components. Controlling the system includes the examination and review of plans and the making of any necessary changes. Centralization usually means standardized, hierarchical knowledge-sharing procedures, which slow down, average out, and often over-simplify information to the extent that it can be misleading.
The complexity within a system obliges system designers/planners to take into account many features simultaneously, making it difficult or impossible to commit to a single action or to expect a single outcome. BIS can help model this complexity, reducing the risks for specific elements of the system and mitigating their influence. Realizing such an organization requires management to induce the creation of new ideas and intellectual collaborations beyond organizational boundaries. This leads to the argument that open and flat virtual corporations will be required to create new service power (Uden, 2008).

4.3. Five System Principles (Gharajedaghi, 1999)

Five important system principles proposed by Gharajedaghi (1999) are presented below. They affect systems thinking about BIS as they extend the vision of the organization as a unique complex system. The following characteristics give us further insight into the systematic approach.

4.3.1. Openness

Openness refers to the fact that living systems constantly interact with their environment. The behavior of living systems can only be understood in terms of their context or containing environment. In terms of energy, a system requires interaction outside its structure to maintain itself. Energy is continually used to maintain the relationships of the parts and keep them from collapsing into decay. This is a dynamic state, not a dead and inert one. Only through an understanding of the system’s interaction (i.e., the transmission of energy) with the environment can the behavior of the system be understood. The environment grows more and more difficult to predict, despite efforts to do so. Despite the difficulties of controlling the environment, organizations can influence it, creating a transactional environment. Leadership is about influencing what cannot be controlled and appreciating what cannot be influenced. Culture is the default value in any social system that allows it to reproduce the same order over and over again.

4.3.2. Purposefulness

Choice, which depends on rationality, emotion, and culture, is the primary idea behind this principle. Attempts to understand purpose are attempts to understand why systems behave the way they do. Purposeful systems can change their ends — they can prefer one future over another regardless of whether their environment changes or not. (This idea foreshadows a basic working principle of design: when we design, we assume that the old system is destroyed but the environment remains unchanged, and that the new system represents the explicit desires or choices of the designers.) Human systems also have purpose. To truly understand a system, it is necessary to understand why it behaves the way it does. People typically do
not act without reason, and regardless of how irrational a behavior appears to the observer, the rationale of the actor must be explored and understood if the observer is to comprehend the system.

4.3.3. Multidimensionality

The principle of multidimensionality suggests that variables in a system have multiple characteristics, some of which may appear to be contradictory. Multidimensionality is the “ability to see complementary relations in opposing tendencies and to create feasible wholes with unfeasible parts.” “Multidimensionality maintains that opposing tendencies not only coexist and interact, but also form a complementary relationship.”

4.3.4. Emergent property

Emergent properties are formed by relationships within the system. As these properties are not a sum of parts but a product of interactions, they do not lend themselves to analysis, do not yield to causal explanation, and are often unpredictable. This is the characteristic that gives systems the ability to be greater than the sum of their parts. When interacting parts are compatible, the interactivity between them is reinforcing, and the resulting energy produced is significantly greater than either part could produce on its own. Conversely, when interacting parts are incompatible, the product of their interaction is less than either part could produce independently.

4.3.5. Counterintuitiveness

“Counterintuitiveness means that actions intended to produce a desired outcome may, in fact, generate opposite results.” This characteristic is present in nearly every human system, but seems to manifest itself with greater magnitude during crisis situations. Delays and multiple effects are two primary reasons counterintuitiveness is so prevalent. Delays occur when time and space separate cause and effect. An action taken at a given time and place may have a non-immediate impact, and the delay gives the actor or observer the impression that either nothing happened, or that the effects were singular. In complex systems, effects are rarely singular, and second- or greater-order effects may initially go unnoticed if they occur at a different time or place. We can all think of a time when a plan or strategy backfired. Classical concepts of cause and effect come into question because of delayed effects, circular dependencies, multiple effects of a single event, and the durability or resistance of effects to change. Predicting outcomes means modeling the complex and dynamic delays, dependencies, multiple effects, and durability of the set of interacting factors. Prediction is, at best, an uncertain science (Gharajedaghi, 1999).
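The role of delays can be shown with a toy simulation. In the sketch below (our own illustration, with invented quantities), a manager reorders stock toward a target while deliveries arrive two periods late. Because the correction ignores what is already in the pipeline, the stock overshoots and oscillates instead of settling: a small instance of actions producing results opposite to those intended.

```python
# Minimal sketch of counterintuitive behavior caused by a cause-effect delay:
# orders placed now arrive two periods later, so naive corrections overshoot.
# All quantities are invented for illustration.
from collections import deque

def simulate(target=100, delay=2, periods=20):
    stock, pipeline, trace = 60, deque([0] * delay), []
    for _ in range(periods):
        stock += pipeline.popleft()      # a delayed delivery arrives
        stock -= 10                      # steady demand each period
        order = max(0, target - stock)   # naive rule: ignore the pipeline
        pipeline.append(order)
        trace.append(stock)
    return trace

print(simulate())  # the stock swings around the target instead of settling
```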
4.4. Principles of Systems Thinking According to Bennet and Bennet (2004)

Bennet and Bennet (2004) define the following general system principles, or system rules, which contribute to an understanding of systems thinking.

4.4.1. Structure is a key to system behavior

Useful insights into, and understanding of, how an organization (system) behaves can be derived from the system’s structure. Systems thinking suggests that understanding the system’s structure allows us to understand and predict the behavior of individual elements and their relationships. BIS structures should complement and enhance the sociocultural structures in the company, while facilitating cooperation and knowledge flows.

4.4.2. Systems that survive tend to be more complex

This principle derives from the environment becoming more complex. The system with the most options, variety, and flexibility is the one most likely to dominate and survive. This principle is important for BIS conceptualization.

4.4.3. Boundaries can become barriers

It usually takes more time and energy to send information or communicate through a boundary than within a system. This principle is increasingly important when planning BIS and any organizational divisions. Another important aspect is boundary protection, which should be carefully managed as a natural phenomenon of the system.

4.4.4. Systems can have many structures

Systems often exist within other systems, and the various levels usually have different purposes or objectives. Recognizing complex structures is vital for management, information supply, and control activities within BIS.

4.4.5. Intervene in systems very carefully

Systems are complex structures, and any changes have to be carefully planned and undertaken. Sometimes small changes create big results, but more often than not, big changes have very little impact. Usually, work is done through the informal network, giving it a vital role in the organization’s performance. Informal systems should always be considered when making changes in organizations. This aspect is increasingly important in the implementation of integrated organizational information systems.
5. System Framework for BIS and Summary of the Systematization Approach

The theoretical study of systems enables us to look beyond the limits of current technologies and to envision the next generation of BIS. To summarize our understanding of systems in general, an overview of the main system characteristics is presented in Table 1. It is interesting to note that almost all the authors outline the same aspects of system properties. Although some of the terms in the table differ slightly, we can state that an overall summary of the systematization approach is provided. Information systems cannot be examined via the analytical method, owing to its limitations in studying their complex nature. Hence we propose a framework for the BIS systematization process (Fig. 3), which emphasizes the system services that BIS are expected to provide for the organization. The overview of system characteristics offers a sound understanding of the real processes one should take into account when conceptualizing a new BIS. The management role is to determine the strategic level of complexity and the context of BIS related to specific business organizational needs. Considering the management approach, BIS have to be examined from the point of view of evolving value-adding services designed for users within and outside the organization (Fig. 3). Systems theory enables decision makers to think about BIS beyond their mechanical aspects and to extend them to the level of sociocultural systems.

6. Conclusions and Next Steps

As Senge (2006) states, systems thinking is a “discipline for seeing wholes.” He continues that it is a framework for seeing interrelationships rather than things, for seeing patterns of change rather than static “snapshots.” Systems thinking becomes increasingly important as the world becomes more and more complex, organizations are overwhelmed with information, and interdependency is far more complicated than anyone can manage. Nowadays, complex business organizations require even more sophisticated and complex BIS. Technologies enter our offices and homes very quickly, irreversibly changing our habits, our behavior, and our culture. However, BIS is not about technologies. BIS concerns the future of our organizations. Systematization and systems thinking give us directions for the design and development of “whole” encompassing patterns of future business models and new business paradigms, providing new knowledge-intensive channels. BIS will expand further, preparing organizations for the Web 2.0 era and even anticipating the Com 2.0 era (Labrogere, 2008). The Web 2.0 philosophy is the “Internet of Services,” where all people, machines, and goods will have access to Web 2.0 by leveraging better network infrastructure. BIS should equip business organizations with new instruments and tools to better conceptualize the information and knowledge perceived from the environment, to process it faster and more efficiently,
Table 1. Summary of Systems Characteristics.

Churchman (1968): objectives of the whole system; the system’s environment; resources of the system; components of the system; management of the system.
von Bertalanffy (1974): interrelationship and interdependence of objects and their attributes; holism; goal-seeking; inputs and outputs; transformation of inputs into outputs; regulation; entropy; hierarchy; differentiation; equifinality.
Gharajedaghi (1999): openness; purposefulness; multidimensionality; emergent property; counterintuitiveness.
Coakes et al. (2004): holism; interdependence and emergence; transformation; communication and control; hierarchy; boundary; environment.
Bennet and Bennet (2004): purpose; structure; system boundary; feedback and regulation.
Figure 3. Systematization process.
and to react to it in a timely way. The Com 2.0 concept goes further than the Web 2.0 principles, putting the accent on mobility. BIS have to enable the further transformation and adaptation of the services demanded from IT, exploring the next level of delivering new training and encouraging knowledge sharing. The image of the systematization process (Fig. 3) depicts the complex role of BIS and its place as a knowledge gatherer, processor, distributor, and important strategic mediator within the new business organization. The present research focuses on the general system-theoretical background, attempting to summarize the conceptual visions for system development. The limitations of this approach concern the specific aspects of information systems, which combine both technological and social-organizational characteristics. Nor does the chapter present systematic thinking from an epistemological point of view; rather, it takes a more pragmatic, organizational and IT-centered approach, looking to gain a better understanding of how to cope with evolving and complex information systems. Information systems are unique technological assets that, along with organizational knowledge and human resources, can strategically influence a company’s development. Further research should focus on system theory and the Internet (can the Internet be explored from the point of view of the system approach?). The Internet and Web 2.0
will soon offer services for all areas of life and business, entertainment, and social contacts. Those services will require a complex service infrastructure, including service delivery platforms that bring together demand and supply, requiring new business models and approaches to systematic and community-based innovation. BIS should enable companies to prepare for this coming major shift to Web 2.0 business. Coherently matching these technological and managerial perspectives in an ever more complex environment is the next challenge for system theory.

References

Bennet, A and D Bennet (2004). Organizational Survival in the New World: The Intelligent Complex Adaptive Systems. Burlington: Elsevier.
von Bertalanffy, L (1974). Perspectives on General System Theory. New York: George Braziller.
Buckley, N (2008). Web 2.0 and the “naming of parts.” International Journal of Market Research, 50 (5: Web 2.0 Special Issue).
Churchman, CW (1968). The System Approach. New York: Delacorte Press.
Clark, T, M Jones and C Armstrong (2007). The dynamic structure of management support systems: Theory development, research focus, and direction. MIS Quarterly, 31(3), 579–615.
Coakes, E, B Lehaney, S Clarke and G Jack (2004). Beyond Knowledge Management. IGP.
Dixon, R (2006). Systems Thinking for Integrated Operations: Introducing a Systemic Approach to Operational Art for Disaster Relief. Fort Leavenworth, KS: US Army Command and General Staff College.
Durlauf, S (2005). Complexity and empirical economics. The Economic Journal, 115(504), 225–243.
Gharajedaghi, J (1999). Systems Thinking: Managing Chaos and Complexity: A Platform for Designing Business Architecture. Elsevier.
IBM (2007). Succeeding through service innovation: Developing a service perspective on economic growth and prosperity. Cambridge Service Science, Management and Engineering Symposium, 14–15 July 2007, Moller Centre, Churchill College, Cambridge, UK.
Labrogere, P (2008). Com 2.0: A path towards web communicating applications. Bell Labs Technical Journal, 13(2), 19–24.
Laudon, K and J Laudon (2006). Management Information Systems, 9th edn. New Jersey: Pearson.
Lufa, M and L Boroacă (2008). The balance problem for a deterministic model. International Journal of Computers, Communications & Control, III (Suppl. issue), Proceedings of ICCCC 2008, 381–386.
Metcalfe, M (2005). Knowledge sharing, complex environments and small-worlds. Human Systems Management, 24, 185–195.
Schoderbek, P, C Schoderbek and A Kefalas (1990). Management Systems: Conceptual Considerations, 4th edn. Boston: BPI.
Senge, P (2006). The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday.
Uden, L and M Naaranoja (2008). Service innovation by SME. In Proceedings of KMO Conference 2008, Vaasa.
Yourdon, E (1989). Modern Structured Analysis. Englewood Cliffs: Prentice Hall.
Biographical Note

Mrs. Albena Antonova is a PhD student and junior researcher at the Center of IST, Faculty of Mathematics and Informatics, Sofia University, Bulgaria. The main topic of her PhD thesis is knowledge management systems. She received her master’s degree in Business Administration from the University of Nantes, France. Currently, she is involved in several international RTD projects concerning knowledge management, e-business, e-learning, and other topics. Mrs. Antonova is an assistant lecturer for classes on knowledge management, MIS, and project management at Sofia University.
Chapter 12
A Structure for Knowledge Management Systems Assessment and Audit JOAO PEDRO ALBINO∗ , NICOLAU REINHARD† and SILVINA SANTANA‡ ∗ Department of Computer Science, School of Science, Sao Paulo State University-UNESP, Av. Luiz Edmundo Coube, 14-01-17033-360 - Bauru – SP, Brazil
[email protected] † Department of Management, University of Sao Paulo-USP, Av. Luciano Gualberto, 908, Room G-12-05508-900 – São Paulo – SP, Brazil
[email protected] ‡ Department of Economy, Management and Industrial Engineering, University of Aveiro, Campo Universitario de Santiago-3810-193 – Aveiro, Portugal
[email protected]
Knowledge Management Systems (KMS) seek to offer a framework to stimulate the sharing of the intellectual capital of an organization so that the resources invested in time and technology can be effectively utilized. Recent research has shown that some businesses invest thousands of dollars to establish knowledge management (KM) processes in their organizations. Others are still in the initial phase of introduction, and many of them would like to embark on such projects. It can be observed, however, that the great majority of such initiatives have not delivered the returns hoped for, since the greatest emphasis is given to questions of technology and to the methodologies of KM projects. In this study, we call attention to an emerging problem which recent studies of the phenomenon of knowledge sharing have not sufficiently addressed: the difficulties and efforts of organizations in identifying their centers of knowledge, in developing and implementing KM projects, and in utilizing them effectively. Thus, the objective of this chapter is to propose a framework to evaluate the present state of an organization’s processes and activities and identify which information and communication technologies (ICT) are supporting these initiatives, with the intention of diagnosing its real need for KM. Another objective of this instrument is to create a base of knowledge, with all the evaluations undertaken in organizations in different sectors and areas of specialization available to all participants in the process, as a way of sharing knowledge for continual improvement
and dissemination of the best practices. About 30 companies took part in the first phase of investigation in 2008, and the knowledge base is under construction.
Keywords: Knowledge management audit; knowledge management benchmark; collaborative benchmarking; knowledge management audit tool.
1. Introduction

Much research and discussion has taken place regarding the important role of the knowledge of organizations. Confronting a very complex setting in the corporate world, and in society in general, we see that economic and social phenomena of worldwide reach are responsible for the restructuring of the business environment. The globalization of the economy and, above all, the dynamics afforded by information and communication technologies (ICT) present a reality which modern organizations cannot fight. It is in this context that knowledge management (KM) is transformed into a valuable strategic resource. The creation and implementation of processes which generate, store, manage, and disseminate knowledge represent the newest challenge to be faced by companies. Knowledge Management Systems (KMS) seek to offer a framework to stimulate the sharing of the intellectual capital of an organization so that the resources invested in time and technology can be effectively utilized. A survey of a sample of 200 Brazilian executives from large organizations revealed that there have been advances in this area, since the companies possess reasonable perceptions of the importance of KM; however, there remain gaps to be overcome (E-Consulting Corps, 2004). Other recent studies show that certain organizations remain in the initial stage of the process of developing and implementing KM projects, and that many would like to get started with such projects (Serrano Filho and Fialho, 2006). Many companies have dedicated effort and invested considerable financial resources and time to implement KM and to motivate the educational evolution of their personnel without, however, obtaining the results hoped for, or even obtaining adequate returns on the resources invested (Albino and Reinhard, 2005). With all of the current technological apparatus, we live in an organizational environment where the transfer of knowledge and the exchange of information are efficient, and where numerous organizations already work with processes to collect and transfer best practices. However, there is a great difference between what companies know and what they effectively put into practice (action). There exist gaps between what companies know and what they do, and the causes of this gap are still not totally understood (O’Dell and Grayson, 1998). According to Keyes (2006), it becomes necessary to use tools which can guide the centers of knowledge to the areas which effectively demand greater attention,
and also to identify which management practices are already in use by the organization, so that this knowledge can be stored, nurtured, and disseminated in an equitable manner. Due to their importance, initiatives in KM must be continually verified in order to evaluate whether they are effectively moving toward attaining their objectives for success. Measurement procedures must include not only how the organization quantifies its knowledge capital, but also how its resources are allocated in order to nourish its growth. However, knowledge is very difficult to measure, due to its intangibility, according to Chua and Goh (2007). There is recognition of the necessity of understanding and measuring the activity of KM in such a way that organizations and organizational systems can achieve what they do best, and also so that governments can develop policies to promote the benefits obtained with such practices, according to OECD (2003) — the Organisation for Economic Co-operation and Development. Among the various categories of investments related to knowledge (education, training, software, research and development, among others), the management of knowledge is the least known, as much from the qualitative point of view as from the quantitative, as well as in terms of financial costs and returns (OECD, 2003). With all the questions raised in the above paragraphs, this chapter seeks to call attention to an emerging problem which recent studies of the phenomenon of knowledge sharing have not sufficiently addressed: the difficulty and the efforts of organizations in identifying their knowledge centers, in developing and implementing KM projects, and in utilizing them effectively. Thus, the objective of this chapter is to propose a framework to audit the present state of an organization’s processes and activities and to identify which ICTs are supporting these initiatives, with the intention of diagnosing its real need for KM.

2. Defining Knowledge

Knowledge, according to Aurum et al. (2008), can be defined as a “justified personal belief” that increases an individual’s capacity to take effective action. Knowledge, according to the authors, is awareness, or familiarity, acquired through study, investigation, observation, or experience over time. Knowledge, in contrast to information, is related to action. It is always knowledge with an end. Knowledge, like information, relates to meaning. It is specific to the context and it is relational, according to Albino and Reinhard (2005). Davenport et al. (2003) define KM as a collection of processes which govern the creation, dissemination, and utilization of knowledge to fully reach the objectives of an organization.
According to these authors, data are a set of distinct and objective facts (attributes or symbols) relating to events. In a business context, data can be described as structured records of transactions. Information is data endowed with meaning within a context. In the business context, information can be described as data that permits decision making and the execution of an action, owing to the meaning it has for that company. Knowledge is derived from information, in the same way that information is derived from data. Simply stated, Davenport et al. (2003) consider that an individual generates knowledge through the interaction of information obtained externally with the knowledge and information already in his or her mind. The construction of knowledge is a multifaceted effort, assert Albino and Reinhard (2005); simply stated, it requires a combination of social and technological actions. A model of KM can be seen in Fig. 1. For a company to build capability for strategic knowledge, it is proposed that at least four components must be employed: knowledge systems, computer networks, knowledge workers, and learning organizations. Nonaka and Takeuchi (1997) identified two basic types of knowledge, as shown in Fig. 2 and summarized as:

• Tacit or implicit knowledge. Personal knowledge, incorporated into the actions and experiences of individuals, and specific to the context. Since it involves intangible values, ideas, assumptions, beliefs, and expectations, as well as a
Figure 1. A KM model.
Figure 2. KM cycle. Source: Nonaka and Takeuchi (1997).
particular form of executing activities, it is a type of knowledge difficult to formulate and communicate.

• Explicit knowledge. Knowledge articulated in formal language, built into products, processes, services, and tools — or recorded in books and documents, systematized and easily transmitted — including grammatical statements, mathematical expressions, specifications, manuals, periodicals, and so on.

As defined by Stewart (2003): “Tacit knowledge is not found in manuals, books, data banks or archives. It is manifested preferably in oral form.” Thus, tacit knowledge, also according to Stewart (2003), is disseminated when people meet and tell stories, or when a systematic effort is made to uncover it and make it explicit.
Thus, tacit knowledge, also according to Stewart (2003), is disseminated when people meet and tell stories, or if they make a systematic effort to uncover it and make it explicit. 2.1. KM: Principal Factors Knowledge Management promotes an integrated approach to identifying, capturing, retrieving, sharing, and evaluating an enterprise’s information assets, asserts Akhavan et al. (2005). These information assets may include databases, documents, policies, procedures, as well as the uncaptured tacit expertise and experience stored in individual’s heads. According to Keyes (2006), KM, implemented by and at the organizational level and supporting empowerment and responsibility at the individual level, focuses on understanding the knowledge needs of an organization and the sharing and creation of knowledge. The main purpose of KM, says Keyes (2006), is to connect people.
Figure 3. Juxtaposition of KM factors: knowledge at the intersection of people (work force), organizational processes, and technology (IT infrastructure) (source: Awad and Ghaziri, 2004).
The basic tripod of KM is composed, according to Awad and Ghaziri (2004), of the juxtaposition of three basic factors: people, information technology (IT), and organizational processes (Fig. 3). One of these three legs, IT, has brought great benefits to organizations. New high-bandwidth communication technologies, cooperative and remote work, objects, and multimedia have enlarged the informational environment, and innumerable tools now exist to facilitate or support KM projects, according to Albino and Reinhard (2005). Technologies such as corporate knowledge portals (CKP), knowledge bases and maps, discussion and electronic chat software, mapping of tacit and explicit knowledge, data mining, and document management, among others, are already available and are offered by various vendors.

A basic taxonomy for KM tools can be seen in Fig. 4. According to Serrano Filho and Fialho (2006), the conceptual structure in Fig. 4 includes the following phases of the KM life cycle: creation, collection or capture, organization, refinement, and diffusion of knowledge. The outermost layer of Fig. 4 represents the organizational environment: technology, culture, consumer and customer intelligence, metrics, competition, and leadership. In this way, the environment greatly influences how the organization develops and implements its KM life cycle, which, Awad and Ghaziri (2004) assert, can also be defined as a KM process. Also according to these authors, the final step is the maintenance phase, which guarantees that the disseminated knowledge is accurate, reliable, and based on company standards defined a priori.

Figure 4. Taxonomy of KM tools: a create/acquire, organize/store, distribute/share, and apply cycle around the knowledge organization, surrounded by the facilitators leadership, metrics, culture, technology, and processes (source: adapted from Nogeste and Walker, 2006, p. 9).

In summary, the principal topics of this structure are:
• Acquire: The act of prospecting, visualizing, evaluating, qualifying, triaging, selecting, filtering, collecting, and identifying.
• Organize/Store: The act of making explicit, analyzing, customizing, contextualizing, and documenting.
• Distribute/Share: The acts of disseminating, sharing, and distributing.
• Apply: The act of producing and using.
• Create: The act of evolving and innovating.

2.2. KMS Life Cycle

The construction of KM can be seen, according to Serrano Filho and Fialho (2006), as a life cycle that begins with a master plan and a justification and ends with a structured system for meeting the KM requirements of the whole organization. A knowledge team representing the ideas of the company, together with a knowledge developer experienced in the capture, design, and implementation of knowledge, helps guarantee a successful system. Before constructing a KMS, however, it is necessary, according to Tiwana (2002), to define the principal sources from which the knowledge that forms the system will flow. Three basic steps are involved in the process of knowledge and learning:
• Acquisition of knowledge. The process of developing and creating ideas, skills, and relationships.
• Sharing of knowledge. This stage comprises disseminating and making available that which is already known. The focus on collaboration and collaborative support is the principal factor that differentiates a KMS from other information systems.
• Utilization of knowledge. The utilization of knowledge gains prominence when learning is integrated into an organization. Any knowledge that is available and systematized in the organization can be generalized and applied, at least in part, to a new situation, and any available computational infrastructure that supports these functions can be utilized.

Using this technological approach, the three stages and the IT functionalities that support them are represented in Fig. 5; the figure is a simplification of Fig. 4. According to Tiwana (2002), these three stages need not occur in sequence; in some situations, they can occur in parallel.

Figure 5. Stages of knowledge utilization and their IT functionalities: acquisition (databases and capture tools), sharing (sharing and collaboration tools, communication links, networks, intranets), and utilization (browsers, web pages, document distribution systems, collaboration tools) (source: Tiwana, 2002, p. 72).
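The stage-to-tooling mapping of Fig. 5 lends itself to a simple tabular representation. The following minimal Python sketch (our illustration, not part of any published instrument) encodes the three Tiwana stages and the tool categories the figure associates with each; the dictionary and function names are assumptions made for this example:

```python
# A minimal tabular encoding of the stage-to-IT mapping in Fig. 5. The
# stage names follow Tiwana (2002); the tool categories are those listed
# in the figure. The names used here are illustrative only.

KM_STAGE_TOOLING = {
    "acquisition": ["databases", "capture tools"],
    "sharing": ["sharing tools", "collaboration tools",
                "communication links", "networks", "intranets"],
    "utilization": ["browsers", "web pages",
                    "document distribution systems", "collaboration tools"],
}

def tools_supporting(stage: str) -> list[str]:
    """Return the IT tool categories that support a given KM stage."""
    return KM_STAGE_TOOLING.get(stage.lower(), [])

# Example: tools_supporting("Sharing") returns the five sharing-stage
# tool categories; an unknown stage returns an empty list.
```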
3. Auditing KM

Knowledge management, according to Keyes (2006), offers a methodology for the creation and modification of processes to promote the creation and sharing of knowledge.
However, an understanding of what constitutes organizational or corporate knowledge is still developing slowly, even in large Brazilian organizations (E-Consulting Corps, 2004). According to Akhavan et al. (2005), even among leading companies in the market that adopted and implemented KM, a large proportion either failed in the process or did not reach the expected success. According to Delgado et al. (2007), many KM initiatives fail and most projects are abandoned because an inappropriate methodology was used. This led, according to Akhavan et al. (2005), to the perception that KM initiatives represent a high-risk venture.

Taking these issues into consideration, the KM proponent or professional should always attempt to evaluate the current state of the organization before initiating a KM program. Proceeding in this manner, the strategy of KM projects will be based on solid evidence of the current state of KM activities and processes, and from that point on the most efficient manner of implementing KM can be defined. There will thus be, according to Delgado et al. (2007), a solid base for determining exactly why, how, and where beneficial results can be obtained.

According to Handzic et al. (2008), before practitioners embark on the development of a KM initiative, they need to understand the key elements associated with KM and their inter-relationships. They also need to analyze the ways in which knowledge is managed by the organization and the degree to which current practices address the goals of the organization. The deficiencies or gaps detected by such an audit can then lead to the development of a KM initiative aimed at supporting work more effectively, thus ensuring that the goals are well met.

An audit should therefore be the first phase of a KM initiative and should provide a complete investigation of information and knowledge policies and their structure. A complete evaluation should analyze the organization's knowledge environment, its manner of sharing, and its use. This process, assert Chua and Goh (2007), also permits a diagnosis of the behavioral and social culture of the people in the organization through an investigation of their perceptions regarding the effectiveness of KM. Therefore, any instrument for the evaluation and auditing of KM should include the following areas of investigation (Handzic et al., 2008):
• Evaluation of intellectual assets;
• Knowledge as a strategic asset;
• The collaborative environment;
• Culture of internal learning;
• Culture of information sharing;
• Importance of the process;
• Structure of communication;
• Motivation and rewards initiatives.
In conclusion, Keyes (2006) asserts that the most important characteristic to consider when defining an audit and evaluation instrument is whether the measuring process shows whether knowledge is being shared and utilized. To this end, the evaluation must be linked to the maturity of the KM initiative, whose life cycle progresses through a series of phases.

4. Framework for Assessment and Audit

The KMAuditBr instrument is a tool designed to help organizations perform an initial high-level evaluation of the state of a KM process within the organization. The objective of this instrument is to provide, like the KMAT tool, a qualitative approach to the evaluation of KM activities and processes internal to the organization (Chua and Goh, 2007; Jin et al., 2007; Nogeste and Walker, 2006). Upon completing all the items in the instrument, the organization obtains a panoramic view of the areas or topics that require greater attention and can identify the KM practices in which it already executes with excellence.

4.1. Structure of the Instrument

The structural model of KMAuditBr is based on the KM model shown in Fig. 4, which proposes four facilitators (leadership, culture, IT, and metrics) that can be utilized to nourish the development of organizational knowledge throughout the KM life cycle. This model places the principal KM activities and facilitators within a single dynamic system, according to Nogeste and Walker (2006). Each part of the instrument represents a grouping of questions that permits not only evaluation of the state of KM practices in relation to the model, but also collection of data for evaluating performance and thereby establishing benchmarking with other organizations. The basic architecture of the auditing and evaluation process can be seen in Fig. 6.

Based on the concepts discussed by Keyes (2006) and by Nogeste and Walker (2006), and on the concepts presented in the KMAT, this instrument has the following objectives:
• Permit an initial high-level evaluation of KM in organizations;
• Evaluate the state of the KM process within the organization;
• Provide a qualitative approach to internal KM activities and processes through proportional measurement;
• Obtain a panoramic view of the areas or topics requiring greater attention;
• Identify the KM practices in which the company already presents excellence in execution;
• Permit external benchmarking with companies in the same line of business.
Figure 6. Architecture of the KMAuditBr auditing and evaluation process: an internal evaluation (opportunities and strengths; current activities and processes; KM components; organizational strategy; communication and collaboration infrastructure; other questions) and an external evaluation (benchmarking with the sector) feed a shared knowledge base.

Figure 7. State of KM processes and activities: a 24-120 point scale divided into the bands stagnant, initial, prioritize and select, and refine and continue.
At the end of the process, the instrument permits positioning of the true state of the organization in relation to the KM processes and activities in operation (Fig. 7). Four states were initially defined:
(i) Stagnant (an insignificant or basic number of processes, or none);
(ii) Initial (KM activities and processes are few);
(iii) Prioritize and select (various procedures are in operation, including some with IT support, but without coordination or a coherent plan); and
(iv) Refine and continue (an excellent relationship between KM processes, project coordination, and the use of IT, with quality control initiated for continuous refinement).

Finally, after evaluations have been carried out in organizations across various sectors and areas of activity, the intention is to create a knowledge base of the evaluations performed and to make it freely available over the network to all participants in the process. With this initial information serving as a knowledge base, collaborative benchmarking can then be carried out between participating organizations, creating a structure in which a group of companies shares knowledge about a certain area of activity with the intention of improving through mutual learning.

4.1.1. Initial data

In the initial part of the instrument, information is collected about the organization (Organizational Characteristics) and about the respondent to the questionnaire (Individual Characteristics), including questions on his or her perception of KM (the need for KM) and on whether, to the respondent's knowledge, the organization invests in KM (specific sectors or areas, level of investment, etc.). This initial information, along with other information gathered throughout the questionnaire, is used to generate a database that permits the development of collaborative benchmarking among organizations. In this type of evaluation and comparison, what is sought is the creation of a structure in which a group of companies shares knowledge about a certain activity, with the objective of improving through mutual learning.

4.1.2. Evaluation of strengths and opportunities

The second stage of the questionnaire concentrates on concepts presented by Nogeste and Walker (2006) and permits a deep analysis of KM activities in organizations. This stage is based on five sections, each comprising a wide spectrum of KM activities. In summary, this stage of the questionnaire permits examination of the following items (a small structural sketch follows the list):
• Process. The KM process includes the steps of action that an organization utilizes to identify the information it needs and the manner in which it collects, adapts, and transfers this information throughout the organization (verification of the flow of information).
• Leadership. Leadership practices include questions of strategy and the way the organization defines its business and utilizes its knowledge assets to reinforce its principal competencies. Knowledge management should be directly related to the manner in which the organization is managed.
• Culture. Culture reflects how an organization facilitates learning and innovation, including the way it encourages its workers to build a base of organizational knowledge that adds value to the customer. In some organizations, knowledge is not shared because rewards, recognition, and promotions go to those who possess knowledge rather than to those who share it. In this situation, workers do not develop the habit of sharing: they do not realize that what they have learned could be of value to others, and consequently do not know how or with whom to share knowledge.
• Technology. Practices with respect to technology concentrate on how an organization equips its members to facilitate communication among themselves, such as whether systems exist to collect, store, and disseminate information. The great danger lies in overestimating or underestimating investment in technology (Jin et al., 2007).
• Measurement. Measurement practices include not only how an organization quantifies its knowledge capital, but also how resources are allocated to foster its growth within the organization. Because it is intangible, organizational knowledge is very difficult to measure, and traditional accounting principles do not recognize knowledge as an asset. One problem is that organizations see knowledge as one of their most important assets, yet on the balance sheet it is still carried as an expense and not as capital (Keyes, 2006).
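To make the structure of this second stage concrete, the sketch below models the five sections as a small data structure holding Likert-scaled items. The section names follow the text; the example items and helper names are hypothetical paraphrases for illustration, not the published questionnaire:

```python
# A hypothetical sketch of the second-stage structure: five sections,
# each grouping five-point Likert items. Section names come from the
# text; the example items are illustrative, not the real instrument.

SECTIONS = {
    "Process": [
        "Information needs are systematically identified",
        "Information flows are verified throughout the organization",
    ],
    "Leadership": [
        "Knowledge assets are used to reinforce principal competencies",
    ],
    "Culture": [
        "Rewards and recognition go to those who share knowledge",
    ],
    "Technology": [
        "Systems exist to collect, store, and disseminate information",
    ],
    "Measurement": [
        "Knowledge capital is quantified and resources are allocated to it",
    ],
}

def section_score(responses: list[int]) -> float:
    """Mean of Likert responses (1 = totally disagree, 5 = totally agree)."""
    if not responses or any(r < 1 or r > 5 for r in responses):
        raise ValueError("responses must be integers from 1 to 5")
    return sum(responses) / len(responses)
```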
As presented, the objective of this second stage is to help organizations evaluate themselves and verify where their strengths, weaknesses, and opportunities are located within the KMS model of Fig. 4.

4.1.3. Evaluation of activities and processes in operation

The third and final stage of the questionnaire permits an evaluation of the activities and processes in operation in the organization, contemplating the stages of knowledge utilization shown in Fig. 5 (acquisition, sharing, distribution, and utilization). The items analyzed in this stage permit observation of which types of IT functionality are supporting, or should support, KM efforts, as well as an evaluation of the IT structure in existence. According to Deng and Hu (2007), KM should be supported by a complex of technologies for electronic publishing, indexing, classification, storage, contextualization, and information recovery, as well as for collaboration and the application of knowledge. To account for the different needs of KM applications, this study took as its base the KMS architecture delineated by Lawton (2001) and detailed in Fig. 8.
Figure 8. Model of KM architecture, bottom-up: information and knowledge sources (word processor, database, electronic document management, electronic mail, web, people); low-level IT infrastructure (e-mail, file servers, internet/intranet services); document and content management (knowledge repository); corporate taxonomy (experts network, knowledge map); KM services (discovery of data and knowledge, collaboration services); and an application tier (competitive intelligence, best-practice systems, product development, CRM, knowledge portal), with the interface as the point of entry and exit of knowledge (source: adapted from Lawton, 2001, p. 13).
According to Lawton (2001), knowledge management "is not simply a single technology, but rather a complex composed of tools for indexing and classification, and mechanisms for information recovery, in conjunction with methodologies designed to obtain the desired results for the user" (p. 13).
The principal available technologies permit, according to Deng and Hu (2007), knowledge workers to manage content and workflow, to categorize knowledge, and to direct it to the people who can benefit from it; they also offer numerous options for improving customer relationship management (CRM), searching for and discovering knowledge, and streamlining business processes, as well as tools for collaboration and group work, among other objectives. An architecture such as that in Fig. 8, according to Deng and Hu (2007), synthesizes the IT needs of each stage of KM. In summary, the components of this KMS architecture are the following (a compact encoding of these tiers appears after the list):
• Sources of explicit knowledge. The bottom tier of the architecture contains the sources of explicit knowledge. Explicit knowledge is located in repositories such as documents or other types of knowledge items (for example, e-mail
messages or records in data banks). Standard editing tools (such as text editors) and database management systems (DBMS) support this first tier.
• Low-level IT infrastructure. File servers and e-mail programs, together with intranet and internet services, support the low-level IT infrastructure tier.
• Document and content management. Document and content management systems are the applications that maintain the knowledge repository tier.
• Corporate taxonomy. Knowledge needs to be organized according to the context of each organization, based on an organizational taxonomy that creates a knowledge map supported by classification and indexing tools.
• KM services (KMS). The tools that support this tier are knowledge discovery systems and those offering collaboration services.
• Interface and application tier. Distribution of knowledge in the organization can be effected through portals for different users and through applications such as distance learning (e-learning), competency management, intellectual property management, and CRM.
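The tier structure just enumerated can be captured in a compact, illustrative encoding. The tier names and example technologies below follow the text and Fig. 8, while the data structure and helper function are assumptions made for this sketch:

```python
# An illustrative encoding of the Lawton (2001) KMS tiers summarized
# above, ordered bottom-up. Tier names and example technologies follow
# the text; the structure and helper function are assumptions.

KMS_ARCHITECTURE = [
    ("information and knowledge sources",
     ["documents", "e-mail messages", "data bank records"]),
    ("low-level IT infrastructure",
     ["file servers", "e-mail programs", "intranet/internet services"]),
    ("document and content management",
     ["document management systems", "knowledge repository"]),
    ("corporate taxonomy",
     ["classification tools", "indexing tools", "knowledge map"]),
    ("KM services",
     ["knowledge discovery systems", "collaboration services"]),
    ("interface and application tier",
     ["knowledge portal", "e-learning", "competency management", "CRM"]),
]

def tier_of(technology: str) -> str | None:
    """Locate the architecture tier in which a technology appears."""
    for tier, technologies in KMS_ARCHITECTURE:
        if technology in technologies:
            return tier
    return None
```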
In conclusion, the objective of this third stage of the questionnaire is to help organizations evaluate themselves and verify which KM activities and processes are in operation and how IT is supporting them, within the KMS model shown in Fig. 5 and using the KMS architecture of Fig. 8 as a benchmark.

5. Research Methodology

Exploratory research was undertaken to verify the feasibility of the instrument developed, generating a set of indicators regarding KM and positioning the organizations in a quadrant with four possible situations: initial, stagnant, select and prioritize, or refine and continue. An initial version of the instrument was submitted to a pilot test between December 2006 and March 2007 and applied to organizations in Portugal, Spain, and Brazil. This first application generated important information for the development of new versions, leading to the current structure of the instrument. The data presented here belong to the third version of the instrument, collected between September 2007 and May 2008. The questionnaire was answered mostly by MBA students and by a group of invited companies established in the interior of São Paulo state, operating in diverse sectors.

5.1. Results and Analysis

Given the complexity of the instrument used, as well as the difficulty of convincing organizations to participate in the research, a sample of 81 participants was obtained, selected from 120 applied questionnaires. These data were compiled to generate a knowledge base on the use of KM by Brazilian organizations. After a period of one year, the data will become available to all participating
companies, creating a collaborative benchmarking and thus representing the second stage of this research project. In the following sections, a summary of the results obtained by applying the research instrument is presented.

5.1.1. Characterization of the organizations and the individuals

In this item of the questionnaire, information was collected about the organizations and the questionnaire respondents, together with the needs pointed out by the respondents concerning KM. The majority of the respondents (69%) belong to the industrial sector, and most of the companies operate internationally (52%), followed by companies operating domestically (33%). With regard to the number of employees, 44% of the surveyed companies have more than 500 employees. Concerning revenue, 45% of the sample report figures over R$50 million, which, according to the BNDES (2008), classifies them as large companies. Middle-sized companies represented 39% of the sample (revenues of up to R$50 million/US$29 million), and small companies with revenues of up to R$2 million (US$1,100,000) accounted for 16%.

Most of the respondents were male (65 participants) and hold specialization degrees or MBAs (50%). Seventeen percent of the respondents hold a position as manager or administrator, 16% are general managers of the organization, and 14% work in leadership or supervision. Several other job titles (analysts, engineers, editors, etc.) account for the large share of "others" (49%). With regard to time in the job, 45% of the respondents have been working in their function for 1 to 5 years, 28% for between 5 and 10 years, and 16% for more than 10 years; 11% have been in their function for less than 1 year. Of all the participants, 54% work at the operational level, 27% at the managerial or tactical level, and 19% at the strategic level.

5.1.2. The need for KM

The results presented in this section show the landscape, the use, and the reality of KM among the research participants. The respondents were first asked their opinion about KM. They stated that KM is vital to their business (43%) and that KM can help the company better organize its information (29%); for 18% of the sample, KM is an element that modifies the manner in which the organization conducts its business.
It is important to highlight that, despite the KM boom in Brazil from the year 2000 onward, 7% of the respondents are still unaware of what KM is, and 2% have never heard of it. Another interesting aspect of this stage was the data obtained on the effective use of KM in the organizations and on whether they are investing in initiatives of this kind. In 38% of the sample companies there are already KM initiatives, and in 21% certain initiatives are already planned. Comparing this result with a large survey of 200 companies carried out by HSM Management magazine in 2004, an improvement in awareness among Brazilian organizations of the role of KM was verified, reflected in the number of companies that have already adopted some of its practices (E-Consulting Corps, 2004); of those that have not yet done so, a significant number intend to. Nevertheless, when the respondents were asked whether their companies have a specific sector for managing KM initiatives, 53% answered that they did not know. In 43% of the cases studied here there is already a specific KM area, reinforcing the finding that KM initiatives have been increasing in Brazil.

The amount invested in KM is shown in Fig. 9. In 59% of the cases, the respondents did not know or were not willing to disclose the amount. In 18% of the cases the volume reaches US$120,000, and in 14% the investment exceeds US$600,000. According to the other respondents, investments in KM reach US$300,000 (5%) or lie between US$300,000 and US$600,000 (4%).

Some questions were designed to obtain more information about the KM implementation process in the surveyed companies. The respondents were asked to describe the main difficulties in applying KM initiatives in their organizations. The answers, shown in Fig. 10, indicate the main factors: first (47%),
Figure 9. KM budget.
Figure 10. Difficulties in having KM.

Figure 11. Meaning of KM.
the organizational culture does not stimulate knowledge sharing; second (25%), people are largely unaware of the subject; and third (13%), there is a lack of adequate technology.

Regarding the meaning of KM, the research demonstrated, as shown in Fig. 11, that 49% of the respondents see KM as the modeling of corporate processes from the knowledge generated; that is, KM is perceived as the structuring of organizational activities, composing, in effect, a corporate management system. The respondents see KM as an information management philosophy in 17% of the cases, or only as a technology that enables KM in 12% of the cases. That is, it
is noted in most of the answers that KM is still perceived only as either a technology or a corporate management system. A reasonable percentage nevertheless indicates KM as a strategy or a means by which companies can obtain competitive power (16%), and 6% of the answers effectively understand that KM initiatives need an organizational policy composed of systems, a cultural policy of corporate proportions, and other initiatives that enable KM. In comparison with the survey performed in 2004 by HSM Management magazine, the global view of KM inside Brazilian companies has improved in the last few years, despite uneven perceptions.

When asked about the impact of KM on organizations, 61% of the respondents indicated that it will bring more consistent and optimized development of the collaborators. Other positive impacts cited are that adopting KM practices will dictate the companies' survival capacity, that is, their longevity (19% of the answers), and that such companies will be the winners (17%).

When the respondents were asked to indicate the possible benefits obtained from implementing KM initiatives, 26% pointed out that the most significant would be better use of knowledge, as shown in Fig. 12. Next, 21% of the respondents indicated benefits connected with better time-to-market and, as a consequence, an improved capacity to make decisions efficiently. The respondents also cited optimization of processes (20%) as an evident benefit, as well as cost reduction (12%). As shown in Fig. 12, aspects such as differentiation from other companies (10%) and income increase (8%) were considered less important.

The instrument also made it possible to identify the critical success factors in KM initiatives. As shown in Fig. 13, the first is clear, objective
Figure 12. Benefits in adopting KM.
Figure 13. Actions to facilitate KM implantation.
communication, with a significant 35% of the answers. This demonstrates that structuring an organizational communication plan focused on transmitting information about the KM initiative to the collaborators is an essential process; clear, objective communication and adequate publicity for KM initiatives are important determinants of a successful project.

The respondents then indicated another noteworthy aspect: training and cultural awareness of collaborators, in 24% of the answers (see Fig. 13). This may indicate that resistance to adopting KM procedures results from an organizational culture that is inadequate, or inapplicable to a competitive environment (collaborative and competitive at the same time). It can then be inferred that education in KM, or cultural awareness, must come before training, in order to establish the absorption process for all the KM steps: creation, collection or capture, organization, refinement, and diffusion of knowledge.

The third aspect concerns senior management support: KM initiatives succeed when they are based on the involvement and commitment of senior management. The answers show that it is essential to present KM activities from the point of view of the value added and the probable results for the entire organization; senior management support is a strongly motivating factor.

5.1.3. Use of ICT

This section of the questionnaire verified which ICT instruments are most employed on a daily basis by the research participants to disseminate knowledge. E-mail (26%) was highlighted as the most employed, as shown in Fig. 14, probably because of its ease and simplicity.
Figure 14. Most employed ICT instruments on a daily basis.

Figure 15. KM functionality.
Other noteworthy instruments are the internet and intranets (25%), corporate portals (14%), and brainstorming (10% of the answers). Electronic document management (EDM) is also important in the dissemination of explicit, documented knowledge (9%). Concluding the descriptive part, the respondents were asked to point out the main functionality of ICT instruments. The answers showed that ICT is used, first, to improve the decision-making process (32%); second, to stimulate innovation (25%); and third, to increase individual production (23%) (see Fig. 15).
6. The Audit and Diagnosis Processes

To evaluate and diagnose the KM activities and processes in progress, data were intentionally collected from 14 organizations. This intentional choice was made because the researchers had easy access to a limited number of organizations; because these organizations readily allowed the diagnosis, with subsequent debate of the results obtained with their managers; and because in loco confirmation could be obtained of whether KM initiatives were being developed by the companies. To preserve secrecy and prevent the companies from being identified, they are referred to hereafter as "Organization 1" through "Organization 14".

The KMAT instrument, developed by APQC and Andersen Consulting, was partly utilized for the section relating to the diagnosis (Nogeste and Walker, 2006). This evaluation and diagnosis instrument was constructed on the basis of the APQC organizational knowledge management model, whose structure comprises four facilitators that favor the KM process. In this model, the questionnaire is divided into five parts; for each part, a subset of items graded on a five-point Likert scale is defined. A general sum of the scores is generated, with a value ranging from 24 to 120 possible points. Finally, the answers are classified by establishing the relation between the total score and the maximum number of possible points. The respondents themselves can evaluate their KMAT scores and make "comments for future actions", suggesting alternatives or outlining steps that could improve the score obtained.

In this chapter's "extended" version, named KMAuditBr, the use of ICT as an auxiliary tool for KM dissemination activities is analyzed, based on the discussion by Deng and Hu (2007). In addition, KMAuditBr performs a high-level audit of the processes and activities in progress at the company, guiding future organizational decisions concerning KM projects and initiatives. The audit is also carried out by means of questions for each item (processes and activities) on a five-point Likert scale, again from a minimum of 24 to a maximum of 120 possible points. The scale ranges from totally disagree to totally agree, and the respondents mark the option closest to their organizational reality regarding the activities and processes currently in progress or in progress during the last five years.
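As a minimal sketch of the scoring mechanics just described, and assuming 24 items graded from 1 to 5 (the counts consistent with the stated 24-120 range), the score sum and its classification against the maximum could be computed as follows; the function names are illustrative, not part of the published instrument:

```python
# A minimal sketch of KMAT-style scoring, assuming 24 five-point Likert
# items. The 24-120 range and the ratio to the maximum follow the text;
# the function names are illustrative.

MAX_SCORE = 24 * 5  # 120 possible points
MIN_SCORE = 24 * 1  # 24 possible points

def kmat_score(responses: list[int]) -> int:
    """General sum of the scores over the 24 graded items."""
    if len(responses) != 24 or any(r < 1 or r > 5 for r in responses):
        raise ValueError("expected 24 responses, each from 1 to 5")
    return sum(responses)

def kmat_classification(score: int) -> float:
    """Relation between the total score and the maximum number of points."""
    return round(score / MAX_SCORE, 2)

# Example: a total score of 65 yields a classification of 0.54, which
# matches Organization 1 in Table 1 below.
assert kmat_classification(65) == 0.54
```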
6.1. Results Obtained from the Diagnosis and the Audit

The results obtained from the companies selected in the intentional sample are shown in Table 1. The variable KMATS represents the mean score sum for each organization, and KMATC its corresponding classification calculated using the KMAT principles. The variable AUDS corresponds to the audit part of the instrument, its values representing each organization's mean score for the processes and activities in progress; AUDC is the corresponding classification.
Table 1. Score and classification calculated.

Organization       KMATS   KMATC   AUDS   AUDC
Organization 1       65     0.54     74   0.61
Organization 2       64     0.54     85   0.70
Organization 3       51     0.42     63   0.53
Organization 4       45     0.37     56   0.47
Organization 5       98     0.82     98   0.81
Organization 6       67     0.56     90   0.75
Organization 7       85     0.71    100   0.84
Organization 8       56     0.47     64   0.54
Organization 9       61     0.51     92   0.77
Organization 10      45     0.38     87   0.73
Organization 11      66     0.55    103   0.85
Organization 12      45     0.37     60   0.50
Organization 13      39     0.33     58   0.49
Organization 14      50     0.42     68   0.57
At the end of the process, the instrument enabled the positioning of the actual organizational status with respect to KM processes and activities (see Fig. 7). The following ranges were used for this purpose: (a) stagnant, from 24 to 47 points; (b) initial, from 48 to 71 points; (c) select and prioritize, from 72 to 95 points; and (d) refine and continue, from 96 to 120 points. Using these four possible statuses, the following diagnoses were obtained for the sample companies (a classification sketch follows the list):
• Organizations 4, 10, 12, and 13 are in the stagnant status.
• Organizations 1, 2, 3, 6, 8, 9, 11, and 14 are in the initial status.
• Organization 7 is in the select and prioritize status.
• Organization 5 is in the refine and continue status.
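A short sketch of the banding applied above: the point ranges are those given in the text, the scores are the KMATS column of Table 1, and the function name is illustrative. Running it reproduces the four diagnoses just listed:

```python
# Status banding for KMATS scores, using the 24-47 / 48-71 / 72-95 /
# 96-120 ranges stated in the text. Scores are from Table 1.

def km_status(score: int) -> str:
    if not 24 <= score <= 120:
        raise ValueError("score must lie between 24 and 120 points")
    if score <= 47:
        return "stagnant"
    if score <= 71:
        return "initial"
    if score <= 95:
        return "select and prioritize"
    return "refine and continue"

KMATS = {1: 65, 2: 64, 3: 51, 4: 45, 5: 98, 6: 67, 7: 85,
         8: 56, 9: 61, 10: 45, 11: 66, 12: 45, 13: 39, 14: 50}

# Example: Organization 5 comes out "refine and continue" and
# Organization 7 "select and prioritize", as diagnosed above.
for org, score in KMATS.items():
    print(f"Organization {org}: {km_status(score)}")
```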
In general, most of the sample companies (57%) are in the initial status, in which KM activities and processes are scarce, and 28% of the companies are in the stagnant status. Regarding Organization 7, it can be inferred that several procedures are in progress and that IT practices are helping in their accomplishment, but with little coordination and without a unified process; this organization fits the select and prioritize band. Organization 5, in turn, was the only one to fit the refine and continue status, in which there is an optimal relation among KM processes, coordinated
projects, and the use of IT; at this status, the company has initiated, or is establishing, quality control standards and is pursuing continuous improvement.

6.2. Analysis of the Diagnosis and Audit Results

The audit results confirmed the diagnoses of Organizations 5 and 7, since both scored high in AUDS (98 and 100, respectively). The in loco audit showed that both have a specific area for implementing KM initiatives, in addition to a specific budget for this purpose. Conversely, in the four companies diagnosed as stagnant, there is neither investment in KM nor a specific area for its development. As shown in Table 1, the audit scores of these companies are also low, indicating few or no initiatives to stimulate better ways of performing tasks, methods to improve knowledge about customers, the use of instruments, or initiatives to discover and register the skills and competences of collaborators, among other things.

Some of the eight companies classified in the initial status have begun, or are beginning, performance and motivation processes for collaborators and the creation of incentives to diffuse better ways of carrying out tasks, in addition to applying methods to gather and retain both practice and know-how, which are important aspects of KM initiatives. In assessing the processes and activities in progress at the companies classified as initial, it was verified that some of them (Organization 11, for example), despite low KMAT scores, obtained high audit scores. The in loco evaluation showed that this company, a multinational, currently has a quality control area and KM projects underway; this is a demand from the company headquarters in Germany, and there is a small department in charge with a defined budget. Other organizations classified as initial, despite allegedly having KM initiatives and projects, actually present initiatives for information management, area integration, or other processes along these lines. Such companies are still developing a KM culture and have a long way to go.

7. Concluding Remarks

According to Akhavan et al. (2005), organizations that succeed in KM share common characteristics, ranging from the technology framework to a strong belief in knowledge sharing and collaboration. Recent research has demonstrated that organizations seeking KM initiatives must evaluate their current status and establish a set of indicators to determine which types of KM efforts, originating from strategic planning, are likely to succeed. The KM audit results can therefore be utilized to plan the inevitable
changes in technology, processes, and organizational culture that follow investments in KM. According to Keyes (2006), organizations that do not assess their own KM status may implement technologies and concepts in unfruitful or untargeted areas.

The KMAuditBr instrument presented in this study was created to help organizations make an initial, high-level self-evaluation of how knowledge is being managed. Its purpose is to direct organizations to the areas demanding the most consideration and to support them in identifying the KM activities and processes underway, thereby stimulating the development of organizational knowledge. The support and use of ICT functionalities are also evaluated by the instrument.

7.1. Research Limitations

The study presented in this chapter has some limitations. The first is that most of the respondents belong to companies from the same sector, industry, with few participants from areas such as commerce, services, and banking. The sample also did not include public sector companies or organizations. A further limitation is that, of all the participating companies, we were able to conduct an evaluation providing diagnosis and audit data in only 14; for this evaluation, only a minimum number of questionnaires were applied in each organization and selected for obtaining the initial data.

7.2. Future Research Directions

We ascertained that a more in-depth analysis of the organizations and sectors will require applying a larger number of questionnaires and the participation of a larger number of companies from the most diverse areas of operation. However, since KM and its initiatives are still considered strategic, this will be a difficulty to overcome, as we detected a certain restriction in access to information.

Following the structure shown in Fig. 6, the model projects the creation of a knowledge base that stores all audits and diagnoses conducted and makes them available for consultation over the Internet to all organizations participating in the process. This base and its access structure are still under development; an infrastructure is being built in the Knowledge Management Technology Laboratory (LTGC), in the Computer Science Department at UNESP, Bauru Campus, to support this environment. The collaborative benchmarking process will thus be the next phase of the research, with the intent
to create an infrastructure in which organizations can share their best practices and knowledge with companies from the same sector.

KMAuditBr is not supposed to be flawless. Nevertheless, like the KMAT tool from which it originated, it is an initial step toward generating deeper, measurable studies on KM. This chapter is intended to demonstrate the possibilities of the instrument as a continuing evaluation tool, thus enabling its constant improvement.
References

Akhavan, P, M Jafari and M Fathian (2005). Exploring failure factors of implementing knowledge management systems in organizations. Journal of Knowledge Management Practice, 6(1). The Leadership Alliance Inc. Available at http://www.tlainc.com/articl85.html.
Albino, JP and N Reinhard (2005). A Questão da Assimetria Entre o Custo e o Benefício em Projetos de Gestão de Conhecimento. In XI Seminario de Gestión Tecnologica, ALTEC Conference Proceedings, Salvador, Brazil.
Albino, JP and N Reinhard (2005). Avaliação de Sistemas de Gestão do Conhecimento: Uma Metodologia Sugerida. In XIII Simpósio de Engenharia de Produção (XIII SIMPEP Conference Proceedings), São Paulo, Brazil. Available at http://www.simpep.feb.unesp.br/upload/463.pdf.
Aurum, A, F Daneshgar and J Ward (2008). Information and Software Technology, 50, 511–533.
Awad, EM and HM Ghaziri (2004). Knowledge Management. USA: Prentice Hall.
BNDES (2008). Banco de Desenvolvimento Econômico e Social, Porte de Empresa. Available at http://www.bndes.gov.br/clientes/porte/porte.asp. [Accessed on 09/05/08].
Chua, AYK and D Goh (2007). Measuring knowledge management projects: Fitting the mosaic pieces together. In Proc. of the 40th Hawaii International Conference on System Sciences (3–6 January 2007). HICSS, IEEE Computer Society, Washington, DC, 1926.
Davenport, TH, L Prusak and HJ Wilson (2003). What's the Big Idea? Creating and Capitalizing on the Best New Management Thinking. USA: Harvard Business School Press.
Delgado, RA, LS Bárcena and AML Palma (2007). The knowledge management helps to implement an information system. In Proc. of ECKM 2007: 8th European Conference on Knowledge Management, 28–34.
Deng, Z and X Hu (2007). Discussion on models and services of knowledge management system. In Proc. of ISITAE '07, First International Symposium on Information Technologies and Applications in Education (23–25 November 2007). IEEE Computer Society, Kunming, China, 114–118.
E-Consulting Corps (2004). HSM Management, 42(1), 53–600.
Handzic, M, A Lagumdzija and A Celjo (2008). Auditing knowledge management practices: Model and application. Knowledge Management Research & Practice, 6, 90–99.
Jin, F, P Liu and X Zhang (2007). The evaluation study of knowledge management performance based on grey-AHP method. In Proc. of the 8th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, 444–449.
Keyes, J (2006). Knowledge Management, Business Intelligence, and Content Management: The IT Practitioner's Guide. New York: Auerbach Publications, Taylor & Francis Group.
Lawton, G (2001). Knowledge management: Ready for prime time? IEEE Computer, 34(2), 12–14.
Nogeste, K and DHT Walker (2006). Using knowledge management to revise software-testing processes. Journal of Workplace Learning, 18(1), 6–27.
Nonaka, I and H Takeuchi (1997). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. USA: Oxford University Press.
O'Dell, C and CJ Grayson (1998). If Only We Knew What We Know: The Transfer of Internal Knowledge and Best Practice. New York: Free Press.
OECD (2003). Measuring Knowledge Management in the Business Sector: First Steps. Paris: OECD Publishing.
Serrano Filho, A and C Fialho (2006). Gestão de Conhecimento: O Novo Paradigma das Organizações. Portugal: FCA.
Stewart, TA (2003). The Wealth of Knowledge: Intellectual Capital and the Twenty-First Century. UK: Nicholas Brealey Publishing.
Tiwana, A (2002). The Knowledge Management Toolkit. USA: Prentice Hall.
Biographical Notes

João Pedro Albino is a Technologist in Data Processing and holds a Bachelor's degree in Computer Science, a Master's in Computer Science, and a PhD in Management. He also completed post-doctoral work in Innovation and Technological Management and in Knowledge Management at the Department of Industrial Engineering and Management of the University of Aveiro, Portugal. He is a Professor of Information Systems at the College of Sciences of UNESP. His research interests and publications include computer science with an emphasis on information technology and knowledge management.

Nicolau Reinhard is a Professor of Management at the School of Economics, Administration and Accounting of the University of São Paulo (USP), Brazil. His research interests and publications include management of the IT function, the use of IT in public administration, and information systems implementation and impacts. Prof. Reinhard has a degree in Engineering and a PhD in Management and, besides his academic career, has held executive and consulting positions in IT management in private and public organizations.

Silvina Santana is an Assistant Professor at the Department of Economics, Management and Industrial Engineering of the University of Aveiro, Portugal, and Guest Professor at Carnegie Mellon University. She is a researcher at IEETA, the Institute of Electronics Engineering and Telematics of Aveiro. She holds a PhD in Knowledge and Information Management and degrees in Electronics and Telecommunications Engineering and in Industrial Engineering and Management. She is currently involved
in three European projects (EU FP7), one of them as project leader and coordinator in Portugal. Her main research interests centre on business integration, business models and business processes, organizational learning, knowledge management, eHealth and telemedicine, integrated care, public health, intersectoral partnerships, healthcare organization management, and entrepreneurship.
Chapter 13

Risk Management in Enterprise Resource Planning Systems Introduction

DAVIDE ALOINI,∗ RICCARDO DULMIN† and VALERIA MININNO‡
Department of Electric Systems and Automation, Pisa University,
P.O. Box 56100, Via Diotisalvi, 2 Pisa, Italy
∗[email protected]
†[email protected]
‡[email protected]
Enterprise Resource Planning (ERP) systems are extremely complex information systems whose implementation is often unsuccessful. We suggest a risk management (RM) methodology supporting the formulation of risk treatment strategies and actions during ERP introduction projects, to improve the success rate. In this chapter, the research context is presented first; the framework and methodology are then illustrated and the main phases of the proposed RM approach introduced; finally, the results are discussed.

Keywords: Enterprise resource planning; risk management; framework; methodology; information systems.
1. Introduction

In recent years, Enterprise Resource Planning (ERP) systems have received great attention. Nevertheless, ERP projects have often been found risky to implement in business enterprises, mainly because they are complex and affected by uncertainty. A PMP study (2001) found that the average implementation time of an ERP project is between 6 months and 2 years and the average cost is about 1 million dollars. According to the estimates of the Standish Group International, 90% of SAP R/3 projects run late, 34% are late or over budget, 31% are abandoned, scaled back, or modified, and only 24% are completed on time and within budget. A possible explanation for such a high ERP project failure rate is that managers do not take appropriate measures to assess and manage the risks involved in these projects (Wei et al., 2005).

Dealing with risk management (RM) in ERP introduction projects is, however, an ambitious task. ERP projects are highly interdisciplinary, as they affect interdependencies between business processes, software, and their reengineering (Xu et al., 2002). Critical factors include technological and
management aspects, both psychological and sociological; moreover, these factors are often deeply interconnected and have indirect effects on the project. This makes the RM process, and in particular the risk assessment phases, very difficult and uncertain. The main purpose of this chapter is to provide an RM methodology that supports managers in making decisions during the life cycle of an ERP introduction project, so as to improve the success rate.

2. Background

Risk is present in every aspect of our lives; thus RM is considered a very important task, even if it is often treated in an unstructured way, based only on relevant knowledge, experience, and instinct. All projects involve risk, because unexpected events and deviations from the project plan often occur. IT projects, like other kinds of complex projects, may be considered an area suitable for action and for the potential development of RM practice.

2.1. Project Risk in the IT Field

Many factors affect IT implementation projects, and they can be grouped into different classes; for example (DeSanctis, 1984; Leonard-Barton, 1988; Lucas, 1975; Schultz et al., 1984):
(i) Individual factors: needs, cognitive style, personality, decision style, and expectancy contributions.
(ii) Organizational factors: differentiation/integration, extent of centralization, autonomy of units, culture, group norms, reward systems, and power distributions.
(iii) Situational factors: user involvement, nature of analyst-user communication, organizational validity, and existence of a critical mass.
(iv) Technological factors: the types and characteristics of the technology, such as transferability, implementation complexity, divisibility, and cultural content.

Other researchers have identified similar factors impacting the successful implementation of IT, grouping them into different classes, but all have described issues of organizational fit, skill mix, management structure and strategy, software system design, user involvement and training, technology planning, project management, and social commitment. At the project level, IT projects have long been recognized as high-risk ventures prone to failure; some software project risks are easy to identify and manage, while others are less obvious or more difficult to handle. Given the potential costs and losses from failed software projects, researchers and practitioners must continue learning from each other to reduce project failures and develop practices that consistently generate better project outcomes.
Enterprise-wide ERP projects are among the most critical IT projects and pose new opportunities and significant challenges in the field of RM. ERP systems, as company-wide information systems, impact a firm's business processes, organizational structure, and existing (legacy) IT systems, so RM approaches for ERP systems must embrace the software, business process, and project management dimensions. Nevertheless, the adoption of an ERP system can bring many potential benefits:

• Business benefits: operating efficiency through automation and the reduction of paperwork and manual data entry; lower inventory levels; and global integration through the management of currency exchange rates, languages, and so on.
• Organizational benefits: improvement of internal IT skills and attitudes toward change; process standardization; and reduction of staff positions.
• IT benefits: consistent data in a shared database; an open architecture; integration of people and data; and reduced update and repair needs for separate computer systems.

As ERP packages touch on many aspects of a company's internal and external operations, the related investments include not only the software, but also related services such as consulting, training, and system integration. Consequently, the successful deployment and use of ERP systems are critical to organizational performance and survival (Chen, 2001; Markus et al., 2000b).
Evidence from the literature shows that nearly all the analyzed contributions present merely qualitative, non-integrated approaches to RM. Authors frequently suggest general frameworks or global approaches to risk management in the ERP field, often derived from the IT and RM literature, but rarely propose methodologies and tools specific to the ERP case that could support decision makers in managing risk at the different stages of the project life cycle. RM phases are usually approached as stand-alone activities, without considering their relevant interconnections; moreover, transversal phases such as context analysis, risk monitoring, and communication are scarcely supported, and contributions often refer back to more general RM approaches. Finally, they do not deal with the problem of risk factor interdependence or with the relationships between risk factors and effects.

As for practitioner initiatives, SAP and Baan, along with other ERP vendors with their own implementation methodologies, have designed proprietary applications for the needs of their own ERP systems. These RM applications are not generic and thus cannot be used in the implementation of an arbitrary ERP system. Moreover, they adopt a more technical perspective than an organizational one. These approaches consider some risk factors to support resource planning, allocation, and time forecasting, especially for the ERP configuration activities. However, they fail to address other important dimensions of project success related to risk mitigation, such as the organizational impact or the process reengineering needs. For all the reasons mentioned above, the aim of this chapter is to propose an effective RM methodology for ERP introduction projects.

3. ERP RM Framework

We first propose our definitions of risk, risk assessment, and RM, which clarify the approach we suggest for dealing with risk during the introduction of ERP systems.

Definition 1: Risk is an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives (PMI, 2001).

Definition 2: Risk assessment is the process of quantitatively or qualitatively assessing risks. It involves estimating both the uncertainty of the risk and its impact.

Definition 3: RM is the practice of using risk assessment to devise management strategies and deal with risk in an effective and efficient way.

According to this perspective, a general risk management framework can be drawn for ERP projects. It consists of several activities, as Fig. 1 shows.

(i) Context Analysis: This aims to define the boundaries of the RM process (the processes to be analyzed, the desired outputs, performance targets, etc.) to support the definition of the correct risk model approach.
Figure 1. A schematic illustration of a general RM process.
(ii) Risk Assessment: This is a core step of the RM process and includes:
(a) Risk identification — which allows the organization to determine early the potential threats (internal and external risk factors) and their impact (effects) on the project's success.
(b) Risk quantification — which aims to prioritize risk factors according to their risk levels and consists of two principal phases:
• Risk analysis (or estimation) — which provides inputs to the risk evaluation phase for the final quantification. The typical inputs are the occurrence probability of a risk factor, its links (weights) with potential effects, the severity of these effects, and possibly the detection difficulty.
• Risk evaluation — which defines risk classes. It selects an appropriate and effective risk aggregation algorithm and synthesizes the risk level for each identified risk factor.
(iii) Risk Treatment: This targets the selection of an effective strategy to manage the risks in the different risk classes identified. RM strategies comprise four classic approaches: the first aims to reduce risky circumstances, the second deals with risk treatment after a risk factor materializes, while the third and fourth deal with risk externalization (transfer) and acceptance.
(iv) Risk Control: The final aim of RM is managing project risk so as to maintain better control over the project and increase its probability of success. The principal issues of the risk control phase are:
(a) Monitoring and Review — each step of the RM process is a convenient milestone for reporting, reviewing, and taking action.
(b) Communication and Consulting — which aims to effectively communicate hazards to the project managers and the people involved in the project, to support the managerial actions.

Details about these phases in an ERP project are explained in the following sections.

3.1. Research Methodology

The work we present originated from an extended research project including the conceptual development of the methodology and a preliminary empirical test of its validity and suitability in a real context. The research aim was to develop an innovative RM methodology suitable for ERP introduction projects. According to the general framework presented above, the research was divided into three parts: state-of-the-art analysis, methodology deployment, and validation.

State-of-the-art analysis assesses the current state of the art on ERP RM and supports the definition of the risk model approach. In particular, a literature review investigating existing RM approaches and techniques was performed, and the main contributions were analyzed, discussing their differences, advantages, and disadvantages. Attention then moved to the ERP literature, reviewing the most relevant contributions on this topic (130 peer-reviewed articles were collected and 75 selected for analysis) and identifying the areas needing further development. After that, we explicitly focused on contributions dealing with RM in ERP projects, investigating existing RM approaches and techniques from both the academic and the practitioner worlds.

Methodology deployment aims to develop a suitable model for ERP RM. A specific RM methodology for ERP introduction projects was developed, suggesting innovative methodologies and techniques for the different RM phases or adapting existing ones to the new ERP context. In particular, an extended literature review responded to the need for risk identification, focusing on the classification and taxonomy of the principal risk factors. Then an overall framework, enumerating risk factors, effects, and macro-effects, was drawn. As for risk quantification, several techniques for the risk analysis and evaluation stages were analyzed. After that, an interpretive structural modeling (ISM)-based technique was proposed to model dependencies and interconnections among risk factors and between risk factors and effects, and to draw a risk event tree (Sage, 1977). A probabilistic network approach was suggested for risk evaluation. Finally, for the risk treatment and control phases, potential risk treatment strategies for each risk factor were identified and analyzed using literature analysis, interviews with practitioners, and case studies, in relation to the project life cycle, the risk factor profile, and the impacts on the project. A general roadmap was finally drawn.
Validation aims to preliminarily test and discuss the conceptual validity and applicability of the proposed RM framework in a real context through a number of case studies. The methodology was tested on five in-depth case studies of multinational firms in different sectors that had recently been involved in an ERP introduction project. The analysis was based on in-depth interviews, ex post evaluations of project performance, and an ex post simulation of the methodology. In the following sections we illustrate in detail the main phases and techniques of the methodology.
3.2. Context Analysis

Context analysis is an important activity that should be carried out at the start of the RM process. Establishing the context means setting the boundaries within which risks must be managed and defining the scope, as well as the budget, of the subsequent RM activities. The context analysis is essential for the next steps and is functional both to the assessment and to the treatment phase, as it can enable a more complete identification, a better assessment, and a more careful selection of a suitable response strategy. In our opinion, the relevant attributes for the analysis are:

• Detectability of risk factors: how easily the occurrence of a risk factor can be detected.
• Responsibility of actions: internal or external players.
• Project life cycle phases: the phase of the project in which the risk factor can occur.
• Controllability: the possibility of influencing the probability of occurrence of a risk factor.
• Dependence: the degree to which a risk factor depends on the others.

Project modeling tools and techniques are generally useful for this activity, such as project network diagrams, the precedence diagramming method (PDM), IDEF3 process modeling, and IDEF0 functional modeling (Ahmed et al., 2007).
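To make these profiling attributes concrete, the following minimal sketch (in Python) shows one possible data structure for recording a risk factor profile during context analysis. It is an illustration only: the field names, scales, and example values are our own assumptions rather than part of the methodology.

from dataclasses import dataclass
from enum import Enum


class Responsibility(Enum):
    INTERNAL = "internal"   # the project organization itself
    EXTERNAL = "external"   # vendors, consultants, system integrators


@dataclass
class RiskFactorProfile:
    """One risk factor described along the five context-analysis attributes.

    Scales are illustrative assumptions: detectability and controllability
    are scored from 0 (low) to 1 (high).
    """
    factor_id: str                  # e.g., "R2" in the taxonomy of Table 1 below
    description: str
    detectability: float            # how easily its occurrence is detected
    responsibility: Responsibility  # internal or external players
    life_cycle_phases: list         # phases in which the factor can occur
    controllability: float          # ability to influence its probability
    depends_on: list                # IDs of factors this one depends on


r2 = RiskFactorProfile(
    factor_id="R2",
    description="Poor project team skills",
    detectability=0.7,
    responsibility=Responsibility.INTERNAL,
    life_cycle_phases=["Project Preparation"],
    controllability=0.9,
    depends_on=["R3"],  # hypothetical link to low top-management commitment
)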
3.3. Risk Assessment

Risk assessment is the central phase of the RM process. This phase serves to understand the nature of risk (which factors could affect project success, their interactions, their probability of occurrence, their detection difficulty, and their potential impact on the project) in order to quantify their riskiness and prioritize them. It consists of two principal steps (Fig. 2), risk identification and risk quantification, which we discuss separately.
Figure 2. Risk assessment phase.
3.3.1. Risk identification

Common RM approaches emphasize the need to identify "risks" early in the process. Chapman and Ward (2003) assert that a key deliverable of effective RM is a clear, common understanding of the sources of uncertainty facing the project and of what can be done about them. The real sources of risk are the unidentified ones, so the identification phase can itself be considered an initial risk response action. The identification of sources, effects, and responses can be carried out in a variety of ways, individually or by involving other people: interviewing individuals, interviewing groups, or group processes such as brainstorming and decision conferencing, which stimulate imaginative thinking and draw on the experiences of different individuals (Chapman and Ward, 2003).

As for factor identification, checklist approaches, as well as influence diagrams, cause-effect diagrams, and event or fault trees, can be very effective in focusing managers' attention. The construction of such checklists and trees can be managed both top-down (from macro project risk classes to single risk factors) and bottom-up (from the effects on the project to the related causes, i.e., risk factors). The first approach can be assisted by guidelines that categorize risks along different project dimensions, such as the project life cycle (plan and timetable), project players (both internal and external parties), project objectives and motives, resources, and changes in processes and organization structure, which can stimulate and guide managers during the process. The second approach instead needs to start from an overall definition of what project success means in complex projects like an ERP introduction. In this latter view, the current state of the art in the ERP field is discussed by Aloini et al. (2007),
presenting an extended literature review responding to the need for risk identification and focusing on the classification and taxonomy of the principal risk factors and effects. Adopting Lyytinen and Hirschheim's (1987) definition of "failure" and "success" of IT projects, the authors suggest a first classification of IT failure:

(i) Process failure, when an IT project is not completed within time and budget.
(ii) Expectation failure, when IT systems do not match user expectations.
(iii) Interaction failure, when user attitudes toward IT are negative.
(iv) Correspondence failure, when there is no match between IT systems and the specific planned objectives.
This classification can be useful to identify the project effects and the causes of failure (risk factors). In the cited articles, the authors suggest 10 risk effects and 19 risk factors commonly occurring in ERP projects, as shown in Fig. 3. The effects mainly concern budget overruns, time overruns, project stoppage, poor business performance, inadequate system reliability and stability, poor fit with organizational processes, low user friendliness, low degree of integration and flexibility, poor fit with strategic goals, and bad financial/economic performance.
Figure 3. Risk factors, effects and effects macro-classes. (From Aloini et al., 2007.)
Table 1. Risk Factors and Their Frequency Rate in Literature.

ID    Risk factor                                                Frequency
R1    Inadequate selection of the ERP package                    √√√
R2    Poor project team skills                                   √√
R3    Low commitment of the top management                       √√
R4    Ineffective communication system                           √√
R5    Low involvement of the key users (KU)                      √√
R6    Inadequate training                                        √√
R7    Complex ERP system architecture                            √
R8    Inadequate business process reengineering                  √√
R9    Bad managerial conduction                                  √√
R10   Ineffective project management techniques                  √√
R11   Inadequate change management                               √√
R12   Inadequate legacy system management                        √
R13   Ineffective consulting services                            √
R14   Poor leadership                                            √
R15   Inadequate performance of the IT system                    √√
R16   Inadequate IT system maintainability                       √
R17   Inadequate stability and performances of the ERP vendor    √
R18   Ineffective strategic thinking and planning                √√√
R19   Inadequate financial management                            √

Source: Aloini et al. (2007). Note: The frequency rate is related to the scientific literature.
As for the risk factors, Table 1 reports a list of potential elements according to the interest revealed in the literature.

3.3.2. Risk quantification

Risk quantification aims to evaluate the risk level of the identified factors and to synthesize a ranking that can drive and prioritize the selection of treatment strategies. In this approach, risk quantification entails two essential components: uncertainty (i.e., the probability of occurrence of a risk factor, U) and exposure (i.e., the impact or effect of the occurrence of a risk factor on the project, E). For each risk factor i,

Ri = Ui · Ei    (1)
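A minimal numerical sketch of Eq. (1) in Python follows. The uncertainty and exposure scores are invented for illustration and do not come from the case studies.

# Risk level per Eq. (1): R_i = U_i * E_i, where U_i is the occurrence
# probability of factor i and E_i its impact (here scored on 1..10).
factors = {
    "R1":  {"U": 0.30, "E": 8.0},  # inadequate selection of the ERP package
    "R2":  {"U": 0.45, "E": 6.0},  # poor project team skills
    "R18": {"U": 0.25, "E": 9.0},  # ineffective strategic thinking and planning
}

risk_levels = {fid: f["U"] * f["E"] for fid, f in factors.items()}

# Rank factors from highest to lowest risk level to prioritize treatment.
for fid, level in sorted(risk_levels.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{fid}: R = {level:.2f}")
# -> R2: R = 2.70, R1: R = 2.40, R18: R = 2.25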
The Australian Standard (AS/NZS 4360, 1999) distinguishes the approaches to risk assessment as follows:

(a) Qualitative, which uses words to describe the magnitude of potential consequences and the likelihood that those consequences will occur.
(b) Semi-quantitative, where qualitative scales are given numerical values. The objective is to produce a more expanded ranking scale than is usually achieved in qualitative analysis, but not to suggest realistic values for risk such as is attempted in quantitative analysis.
(c) Quantitative, which uses numerical values (rather than the descriptive scales used in qualitative and semi-quantitative analysis) for both consequences and likelihood, using data from a variety of sources.

As mentioned before, the risk assessment process consists of two subphases: risk analysis and risk evaluation. In the risk analysis stage, risk factors are analyzed and classified according to the decisional attributes defined in the context analysis phase, such as controllability, detectability, project life cycle, responsibility, and dependence. The output is functional both to the evaluation and to the treatment phase, as it provides a pre-analysis of risk factor profiles and enables a more accurate selection of suitable response actions. Dependence among the risk factors, in particular, is critical in risk assessment, as snowball effects can occur. The interpretive structural modeling (ISM) technique (Sage, 1977), as well as analytic network process (ANP) approaches, can be useful for modeling dependencies and connections between risk factors and effects, and for drawing a risk event tree. An example of these dependencies is reported in Fig. 4. Risk factors there are plotted according to their degree of dependence (how many factors they depend on) and their driving power (how many factors they lead to); from left to right, the dependence degree increases. Three macroclasses of factors are visible: factors linked to project governance, factors in the project and change management group, and BPR-related factors. Two isolated classes are also present: legacy system management and financial management (Aloini et al., 2008).

In the risk evaluation phase, a ranking has to be elaborated to assess the priority and severity of each risk factor. Consequences and likelihood, and hence the level of risk, should be estimated. The levels of risk should then be compared against the pre-established criteria, and potential benefits should be balanced against adverse outcomes. This process enables one to make decisions about the extent and nature of the treatments required and about priorities. A wide dispute still exists in the RM literature between those who emphasize a formal quantitative assessment of the probable consequences of the recommended actions, compared with the probable consequences of the alternatives, and those who emphasize the perceived urgency (qualitative expert judgment) or severity of the situation motivating the recommended interventions.
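As an illustration of the ISM-style dependence analysis described above, the following sketch derives each factor's driving power (row sum of a binary reachability matrix) and dependence (column sum). The toy matrix is our own assumption, not the one estimated in the cited studies.

# Toy reachability matrix: reach[i][j] = 1 if factor i drives factor j
# (by ISM convention, each factor reaches itself). Values are invented.
factor_ids = ["R3", "R9", "R10", "R11"]
reach = [
    [1, 1, 1, 1],  # R3 (low top-management commitment) drives all the others
    [0, 1, 1, 0],  # R9 (bad managerial conduction) drives R10
    [0, 0, 1, 0],  # R10 (ineffective project management techniques)
    [0, 0, 1, 1],  # R11 (inadequate change management) drives R10
]

n = len(factor_ids)
for i, fid in enumerate(factor_ids):
    driving_power = sum(reach[i])                     # factors it leads to
    dependence = sum(reach[j][i] for j in range(n))   # factors it depends on
    print(f"{fid}: driving power = {driving_power}, dependence = {dependence}")
# Factors with high driving power and low dependence (here R3) act as root
# causes and are natural targets for early treatment.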
Figure 4. Risk factors, effects, and macroclasses of effects.
Figure 5. Risk matrix.
A variety of techniques supports the evaluation phase, such as statistical sums, simulations, decision trees, expert judgment, multicriteria decision methods, portfolio approaches, probabilistic networks, etc. The risk matrix (Fig. 5), for example, is one of the most common tools used in the assessment phase. Once the likelihood and impact of each risk factor are qualitatively or quantitatively estimated, it classifies the factors according to their likelihood and impact levels.

Probabilistic networks are more sophisticated approaches to the risk evaluation phase. They can be used when the purpose is to take into account simultaneously the risk factor dependencies, the probability of occurrence, and the impact on the project effects. Given estimates of the unconditioned probabilities of occurrence of each risk factor, a matrix that models the dependencies among the risk factors and between risk factors and effects, and estimates of the importance (weight) of their potential effect on the project's success, the probabilistic network can be used to compute a global risk index for those risk factors. This kind of approach is more complex and expensive than the risk matrix in terms of parameter estimation, modeling of dependencies, and evaluation of effects. Without a doubt, risk assessment may be undertaken to varying degrees of detail depending upon the risk, the purpose of the analysis, and the information, data, and resources available, but the costs and benefits of the available techniques should be carefully analyzed before applying them widely to the project.
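The following minimal sketch implements a risk-matrix classifier in the spirit of Fig. 5. The two-level thresholds and the suggested response labels are illustrative assumptions; real matrices often use 3 x 3 or 5 x 5 scales.

def risk_class(likelihood: float, impact: float) -> str:
    """Map a (likelihood, impact) pair, each scored on 0..1, to a
    qualitative risk class with an indicative response strategy."""
    high_l, high_i = likelihood >= 0.5, impact >= 0.5
    if high_l and high_i:
        return "critical: avoid or mitigate"
    if high_i:
        return "severe but rare: transfer or prepare a contingency plan"
    if high_l:
        return "frequent but mild: mitigate"
    return "low: accept and monitor"


print(risk_class(0.7, 0.8))  # -> critical: avoid or mitigate
print(risk_class(0.2, 0.9))  # -> severe but rare: transfer or prepare a contingency plan
print(risk_class(0.1, 0.2))  # -> low: accept and monitor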
Figure 6. Risk treatment and control.
3.4. Risk Treatment

After context analysis and risk assessment, in the risk treatment and control phases (Fig. 6) an effective strategy has to be adopted and implemented to manage risks. As introduced in Section 3, the goal of this phase is to plan a set of consistent and feasible actions, with the related organizational responsibilities, to avoid the risk, reduce its likelihood, reduce its impact, transfer it, or retain it (AS/NZS 4360, 1999). The tangible output is a formal risk management plan (RMP), an additional project management tool for executing both feed-forward control (support to project design; actions based on the chance of a risk event) and feedback control (risk mitigation actions following risk manifestation). Such a plan is not a static document, because of unexpected problems and the progressive realization of the planned actions, and its design must reflect the trade-off between planned outcomes and their cost of execution, constraints, and available skills.

In ERP projects, risk treatment involves several interrelated and context-specific aspects: technological, managerial, and organizational risk factors; the phase of the project life cycle in which they occur and have to be managed; the most appropriate strategies; and the related specific responses.

3.4.1. ERP project life cycle: Activities, risk factors, and key actors

To design an effective RMP, it is useful to describe the organization's experience with an enterprise system as moving through several phases, characterized by key players, typical activities, characteristic problems, appropriate performance metrics, and a range of possible outcomes. For better control, we must consider these factors by breaking up the typical IT investment phases (system development,
implementation, ongoing operation) into a more detailed framework and considering that (Markus and Tanis, 2000a; Soh and Markus, 1995):

• The necessary conditions for a successful outcome (i.e., high-quality information technology "assets") are not always sufficient for success. An IT investment on track for success can be derailed by an external event or by changing external conditions.
• The outcomes of one phase are the starting conditions for the next. Decisions and actions in one phase can increase or decrease the potential for success subsequently.
• Outcome variables are both the success of the implementation project (no time or cost overruns) and, chiefly, the business results (did the company succeed in realizing its business goals for the ERP project?).

In this section, to model the project life cycle, we use the well-known five-phase implementation roadmap of SAP ERP (Monk and Wagner, 2006), as shown in Fig. 7.

In the Project Preparation phase, the main decisions regard project approval and funding. The goals, objectives, and scope (what the ERP is to accomplish) of the project must be carefully defined. Typical tasks include organizing the project team; selecting the package and the hardware and database vendors; identifying and prioritizing the business processes to support; communicating to the personnel the objectives and impacts of the new system; financially evaluating the investment; and fixing the budget. Common risk factors include low top-management commitment, unrealistic objectives, an inadequate budget, poor skills of the project team in ERP introduction, inadequate choice of (and contracting with) vendors, consultants, or system integrators, lack of comprehension of the opportunities to improve the process, and underestimating the difficulty of change management. Key actors are IT specialists, line-of-business managers (cross-functional competences are required), the ERP system vendor and integrators, and consultants.

The main objective of the Business Blueprint phase is to develop detailed documentation of how the business processes are to be managed and supported by the enterprise system. This documentation, sometimes called the "Business Transformation Master Plan," defines the specifications used to configure and eventually customize the system in the next phase. Typical tasks include the development of a detailed project plan, the definition of the key users (KUs), KU/project team education and the acquisition of support skills, detailed process mapping (AS IS) and the definition of process reengineering needs according to the procedures incorporated in the system, gap analysis and an action plan to resolve variances, and the identification of a legacy systems treatment (elimination, integration, method of data retrieval, clean-up, and transfer to the ERP database).
Figure 7. Project life cycle (SAP implementation roadmap).
The main risk factors are a lack of cross-functional representation and skills, poor-quality software documentation and training material, and absent or insufficient attention paid to gap analysis. Key actors are cross-functional members of the organization, the project team, and vendor/consultant/system integrator business analysts.

The Realization phase covers the core activities to get the system up and running: its configuration, the hardware and network connections, the actual reengineering of processes, and the execution of a change management plan (if any). Typical tasks, besides configuration, are system integration; data clean-up and conversion; education of KUs, IT staff, and executives; and the development of a standard prototype without detailed interfaces and reports. The main risk factors are inadequate knowledge on the part of consultants and vendor personnel, too many customizations, poor attention to data clean-up, difficulty in acquiring knowledge of the software configuration, and the rescheduling (shortening) of this and the following phases because of the "scope creep" effect (Monk and Wagner, 2006). Scope creep refers to the unplanned growth of project goals and objectives, which leads to project time and cost overruns and is discovered only in this phase. Management often chooses to omit or shorten the Final Preparation phase, thereby reducing or eliminating end-user training and software testing; the cost savings gained are then overshadowed by productivity losses, consulting fees, and the prolonging of the period from "Go Live" until "normal operation." The key actors are the same as in the previous phase, plus the end users, who begin their training.

The Final Preparation phase encompasses critical tasks such as testing the ERP on critical processes, prototype completion with reports and interfaces, user enablement and final training, help desk implementation, bug fixing, final tuning/optimization of data and parameters, completing the data migration from legacy systems, and setting the go-live date. The main risk factor which can occur is the above-cited scope creep effect. Another typical error to prevent is assuming that end-user training should be funded from the operations budget. Often, too many complex customizations do not work and lead to long rework activities. Key actors are the same as in the previous phase.

The final phase of Go Live & Support starts from Go Live and ends when "normal operations," with processes fully supported and no external support, have been achieved. Most end-user problems arise during the first few weeks, so monitoring the system is critical in order to quickly arrange changes if performance is not satisfactory. Typical tasks are supporting the help desk with the project team's reworking skills, final bug fixing and reworking, monitoring the operating performance of the new system, and adding capacity. In this phase, we may observe the effects of, and problems related to, bad management of the cited risk factors: underuse or non-use of
the system, data input errors, excessive dependence on KUs and external parties, retraining, difficulty in diagnosing and solving software problems, and excessive length of this phase. Key actors are the IT specialists and project team members who staff the help desk, operations managers, and external technical support personnel.

3.4.2. Treatment strategies and risk factor profiles

The literature describes generic options for responding to project risks (DeMarco and Lister, 2003; Kerzner, 2003; Schwalbe, 2008). Within these high-level options, specific responses can be formulated according to the circumstances of the project, the threat, the cost of the response, and the resources required. Here, we report four common risk response strategies.

Avoidance strategies aim to prevent a negative effect from occurring or impacting a project. This can involve, for example, changing the project design so that the circumstances under which a particular risk event might occur cannot arise, or so that the event will have little impact on the project if it does. For example, planned functionality might be "de-scoped" by moving a highly uncertain feature to a separate phase or project in which more agile development methods might be applied to determine the requirement (Boehm and Turner, 2003).

A transference strategy involves shifting the responsibility for a risk to a third party. This action does not eliminate the threat to the project; it just passes the responsibility for its management to someone else. Theoretically, this implies a principal-agent relationship wherein the agent is better able to manage the risk, resulting in a better overall outcome for the project. This can be a high-risk strategy because the threat to the project remains, which the principal must ultimately bear, but direct control is surrendered to the agent. Common risk transfer strategies include insurance, contracts, warranties, and outsourcing. In most cases, a risk premium of some kind is paid to the agent for taking ownership of the risk, and sometimes penalties are included in the contracts. The agent must then develop its own response strategy for the risk.

A risk mitigation strategy consists of one or more reinforcing actions designed to reduce a threat to a project by reducing its likelihood and/or potential impact before the risk is realized. Ultimately, the aim is to manage the project in such a manner that the risk event does not occur or, if it does, the impact can be contained at a low level (i.e., to "manage the threat to zero"). For example, using independent testers and test scripts to verify and validate software progressively throughout the development and integration stages of a project may reduce the likelihood of defects being found post-delivery and minimize project delays due to software quality problems.

A risk acceptance strategy can include a range of passive and active responses. One is to passively accept that the risk exists but choose to do nothing about it other than, perhaps, monitoring its status. This can be an appropriate response
when the threat is low and the source of the risk is external to the project's control (Schmidt et al., 2001). Alternatively, the threat may be real but there is little that can be done about it until it materializes. In this case, contingencies can be established to handle the condition when and if it occurs (as a planned reaction). The contingency can take the form of extra funds or other reserves, or of a detailed action plan (contingency plan) that can be quickly enacted when the problem arises.

Considering such strategies is useful for a better understanding of the risk factors and for designing the RMP. Furthermore, we suggest using a context diagram (Fig. 8), as proposed in the SAFE methodology (Meli, 1998), to add information about the risk factor profiles in terms of control, consideration, and influence. Here we allocate factors and subjects: respectively, passive elements with no decision-making power (events, norms, specifications, etc.) and entities able to decide and influence project success (IT managers, consultants, top management, project management tools, etc.). Factors are allocated to classes (the "cloves" of the diagram), such as technology, norms, management, organization, strategy, and so on. The circle's crowns relate to the capacity of the project (project manager and team) to influence, consider, or control each element along a continuum:

• Control: with total control, we may reduce to zero the probability of manifestation of a risk factor.
• Consideration: we cannot modify the element and must adopt slack resources and adaptive strategies of the avoidance type.
• Influence: through the project, we may try to reduce the probability of occurrence and/or the impact, with an uncertain outcome.

Figure 8. Context diagram (elements: factors and subjects; classes; capacity: control, consideration, influence; project life cycle).

The frame in Fig. 9 shows how potential actions can be planned during the different ERP project life-cycle stages, according to the
Figure 9. Information to design the RMP (for each risk factor R1-R19: the project phase, 1:n; the risk factor profile in terms of detectability, controllability, responsibility, dependence, likelihood, and effects; and the RM strategy adopted, i.e., mitigation, avoidance, or transference, with the associated actions).
suitable RM strategies and to the risk factor profiles. This can be a good reference for the design of the RMP. Obviously, we must correlate this information with the external/internal responsibility. For example, consider the risk factor "Poor project team skills" (R2). This factor causes negative effects all along the life cycle and has to be managed by the end of the Project Preparation phase, with the right selection of, and roles assigned to, project team members. It is a typical element/subject belonging to a class like "competences," to be dealt with through an avoidance strategy. Top management and the steering committee can ultimately fully control the necessary skills by recruiting, by outsourcing, and through the vocational training of internal resources.

3.5. Risk Control

The risk control phase aims to increase the effectiveness of the RMP over time. It consists of:

• Communication and consulting, the process of exchanging/sharing information about risk between the decision maker and the other stakeholders inside and outside an organization (e.g., departments and outsourcers, respectively). Information can relate to the existence, nature, form, probability, severity, acceptability, and treatment of risk (ISO/IEC Guide, 2002).
• Monitoring and review, the process of measuring the efficiency and effectiveness of the organization's RMPs and establishing an ongoing monitoring and review process. This process makes sure that the specified management action plans remain relevant and up to date. It also implements the control activities, including the re-evaluation of the scope and compliance with decisions (ENISA Study, 2007).

Obviously, risk communication is a necessary condition to enact the RMP, while monitoring and review encompasses the typical feedback control depicted in Fig. 10. The outcome of the effectiveness analysis can be formalized in a Risk Management Report (RMR) containing information about the events that occurred, the strategies and actions executed, and their success. The feedback can lead to changes in the RMP and even to redefining the risk factors. Risk treatment itself sometimes introduces new risks that need to be identified, assessed, treated, and monitored. If a residual risk still remains after treatment, a decision should be taken on whether to retain this risk or repeat the risk treatment process.

The effectiveness analysis step theoretically requires metrics based on a valuation of risk before the RMP implementation (the unconditioned risk, if we suppose the risk factors are free to occur) and after the RMP actions, along the life cycle. Defining such indicators and controlling and evaluating the expected risk reduction are complex and challenging tasks. Just a few contributions exist in the literature and, in our view, the technique of the RMP Effectiveness Index is the most interesting, as reported in the SAFE methodology (Meli, 1998). In our context (see the control cycle in Fig. 10), we may state that an indirect evaluation based on a set of balanced success metrics (technical, financial, organizational) at different points in time (Markus and Tanis, 2000a) is more useful:
Figure 10. Control cycle.
• Project metrics. Typical project management performance measures related to the planned schedule, budget, and functional scope, typically based on Earned Value Analysis (a minimal sketch follows this list).
• Early operational metrics. Metrics related to the Go Live & Support phase. Although this is a transitional phase, the period from Go Live until normal operation is critical: the organization can lose sales or need additional investments, and exceedingly poor performance can lead to pressure to uninstall the system. Typical metrics (for a manufacturing firm) include short-term changes in labor costs, time to fill an order, inventory levels, reliability of due dates based on the ERP "Available to Promise" capability, and orders shipped with errors, but also system downtime and response time, employee job quality/stress levels, and so on. Such metrics support monitoring the system in this phase to quickly identify and solve problems.
• Long-term business results. Some typical relevant metrics (others will be context-specific and related to goals and objectives) include business process performance, end users' skills, ease of migration/upgrading, competence of IT specialists, cost savings, and competence availability for IT investments subsequent to the ERP (such as a data warehouse or business intelligence solution, which takes advantage of the ERP database, data clean-up, and so on).
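For the project metrics in the first bullet, the following small Earned Value Analysis sketch may be helpful. The monetary figures are invented, while the planned value (PV), earned value (EV), and actual cost (AC) definitions and the CPI/SPI indices are the standard ones.

# Earned Value Analysis for an ERP project snapshot (figures invented).
PV = 400_000  # planned value: budgeted cost of work scheduled to date
EV = 340_000  # earned value: budgeted cost of work actually performed
AC = 425_000  # actual cost incurred to date

cpi = EV / AC  # cost performance index (< 1 means over budget)
spi = EV / PV  # schedule performance index (< 1 means behind schedule)

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
# -> CPI = 0.80, SPI = 0.85: the project is over budget and behind schedule,
#    a signal to revisit the RMP before the go-live date.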
According to Markus and Tanis (2000a), disastrous project implementation and Go Live & Support metrics are sometimes coupled with high levels of subsequent business benefits from the ERP. Conversely, projects with acceptable implementation and Go Live & Support metrics sometimes fail to obtain business benefits from installing the system in the long term.

4. Conclusions and Managerial Implications

In this chapter, we focused on the importance of RM throughout the ERP implementation life cycle and suggested an RM approach to manage the project. The main result is to provide managers with a guideline supporting the RM process in each phase of the project life cycle; with this aim, we discussed techniques and tools which could support managers both during the early project assessment stages and during the implementation stages. The information provided gives managers and researchers advice and suggestions for all the phases of the RM process, customized to a typical ERP project. In particular, the chapter provides: (a) a classification of the risk factors and the effects in ERP projects; (b) suggestions for a systematic analysis of risk factor interdependencies and their causal links to potential effects, useful in the subsequent stages of assessment; (c) considerations on tools and techniques which could support managers in risk evaluation; and finally (d) a methodological guide for the selection of suitable risk response strategies and control procedures.

Regarding managerial implications, we give suggestions and evidence from the research for the right approach to RM in an ERP introduction project (Schwalbe, 2008). In the context analysis phase, top management's commitment is essential in order to define the objectives and constraints of the project, and to decide if and how to approach and plan the RM activities. This implies verifying whether the organizational skills, experience, and competencies, as well as the strength of the commitment, are consistent with the objectives and constraints. In this regard, the project team should review the project documents and understand the organization's and the sponsor's approach to risk. In the risk assessment phase, the project team can use several risk identification/evaluation tools and techniques, such as brainstorming, the Delphi technique, interviewing, SWOT analysis, the risk matrix, and probabilistic networks. What should be considered is that an ERP project incorporates technological, organizational, and financial risks. The imperative in this field is: focus on business needs first, not on technology! Any list of risk factors should include problems related to underestimating the importance of process analysis, requirements definition, and business process reengineering, and proper education and training for both employees and managers; moreover, attention should be paid to risk factor interdependencies. The suggested risk response planning approach (strategies of avoidance, acceptance, mitigation, and transference) is widely accepted and used, and the project team should involve external and internal IT, financial, and project management skills to
choose, for each strategy, the right actions with regard to technical, cost/financial, and schedule risks. In the risk control phase, risks should be monitored and decisions about mitigation strategies should be made based on pre-established milestones and performance indicators. Finally, a further risk is related to the change management activities; active top-management support is important for the successful acceptance and implementation of companywide changes.

References

Ahmed, A, B Kayis and S Amornsawadwatana (2007). A review of techniques for risk management in projects. Benchmarking: An International Journal, 14(1), 22–36.
Aloini, D, R Dulmin and V Mininno (2007). Risk management in ERP project introduction: Review of the literature. Information & Management, 44, 547–567.
Aloini, D, R Dulmin and V Mininno (2008). Risk assessment in ERP introduction projects: Dealing with risk factor interdependence. In 9th Global Information Technology Management Association (GITMA) World Conference, Atlanta, Georgia, USA, June 22–24.
Anderson, ES, KV Grude and T Haug (1995). Goal-directed Project Management: Effective Techniques and Strategies, 2nd Edn. Bournemouth: PricewaterhouseCoopers.
AS/NZS 4360 (1999). Risk Management. Strathfield: Standards Association of Australia. Available at www.standards.com.au.
Beard, JW and M Sumner (2004). Seeking strategic advantage in the post-net era: Viewing ERP systems from the resource-based perspective. The Journal of Strategic Information Systems, 13(2), 129–150.
Boehm, B and R Turner (2003). Balancing Agility and Discipline: A Guide for the Perplexed. Boston, MA: Addison-Wesley Longman Publishing Co., Inc.
Chapman, C and S Ward (2003). Project Risk Management: Processes, Techniques and Insights. John Wiley.
Chen, IJ (2001). Planning for ERP systems: Analysis and future trend. Business Process Management Journal, 7(5), 374–386.
Cleland, DI and LR Ireland (2000). The Project Manager's Portable Handbook. New York, NY: McGraw-Hill Professional.
DeMarco, T and T Lister (2003). Risk management during requirements. IEEE Software, 20(5), 99–101.
DeSanctis, G (1984). A micro perspective of implementation. Management Science Implementation, Applications of Management Science (Suppl. 1), 1–27.
ENISA Study (2007). Emerging-Risks-Related Information Collection and Dissemination, February, www.enisa.europa.eu/rmra.
ISO/IEC Guide 73:2002 (2002). Risk Management. Vocabulary. Guidelines for Use in Standards. ISBN 0 580 40178 2, 1–28.
Kerzner, H (2003). Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 9th Edn. CHIPS.
Leonard-Barton, D (1988). Implementation characteristics of organizational innovations. Communications Research, October, 603–631.
Lucas, HC (1975). Why Information Systems Fail. New York: Columbia University Press.
Lyytinen, K and R Hirschheim (1987). Information systems failures: A survey and classification of the empirical literature. Oxford Surveys in Information Technology, 4, 257–309.
Markus, ML and C Tanis (2000a). The enterprise systems experience — from adoption to success. In Framing the Domains of IT Research: Glimpsing the Future Through the Past, Zmud, RW (ed.). Cincinnati, OH: Pinnaflex Educational Resources, Inc.
Markus, ML, S Axline, D Petrie and C Tanis (2000b). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15, 245–265.
Meli, R (1998). SAFE: A method to understand, reduce, and accept project risk. ESCOM-ENCRESS 98, Project Control for 2000 and Beyond, Rome, Italy, May 27–29.
Monk, E and B Wagner (2006). Concepts in Enterprise Resource Planning, Mac Mendelsohn (ed.), 2nd Edn. Canada: Thomson Course Technology.
PMI (2001). A Guide to the Project Management Body of Knowledge (PMBOK Guide), 2000 Edn. Project Management Institute Publications.
PMP Research (2001). Industry Reports: Infrastructure Management Software. www.conferencepage.com/pmpebooks/pmp/index.asp.
Sage, AP (1977). Interpretive Structural Modeling: Methodology for Large-Scale Systems, pp. 91–164. New York: McGraw-Hill.
Schmidt, R, K Lyytinen, M Keil and P Cule (2001). Identifying software project risks: An international Delphi study. Journal of Management Information Systems, 17(4), 5–36.
Schultz, RL, MJ Ginzberg and HC Lucas (1984). A structural model of implementation. Applications of Management Science (Suppl. 1), 55–87.
Schwalbe, K (2008). Information Technology Project Management, 5th Edn. Cengage Learning. ISBN-13: 9781423901457.
Soh, C and ML Markus (1995). How IT creates business value: A process theory synthesis. In Proceedings of the 16th International Conference on Information Systems, Amsterdam, December.
Wei, CC, CF Chien and MJ Wang (2005). An AHP-based approach to ERP system selection. International Journal of Production Economics, 96, 47–62.
Xu, H, JH Nord, N Brown and GD Nord (2002). Data quality issues in implementing an ERP. Industrial Management & Data Systems, 102(1), 47–60.
Biographical Notes

Davide Aloini was born in Catania, Italy. He has been a PhD student in Economic-Management Engineering at Rome's Tor Vergata University since 2005. He received his BS and MS degrees in Management Engineering from the University of Pisa and his Master in Enterprise Engineering from the University of Rome "Tor Vergata." He is also working in the Department of Electrical Systems and Automation of the University of Pisa. His research interests include supply chain information management, ERP, risk management, e-procurement systems, business intelligence, and decision support systems.
Riccardo Dulmin was born in Piombino, Italy. He graduated in Electronic Engineering from the University of Pisa and has been working with the Economics and Logistics Section of the Department of Electrical Systems and Automation since 1996. From 1999 to 2006 he was a researcher and assistant professor in Economics and Managerial Engineering; since 2006 he has been an associate professor of Information Technologies for Enterprise Management. His research interests include supply chain management, the development and application of decision analysis and artificial intelligence tools and techniques in operations management, and information systems.

Valeria Mininno was born in La Spezia, Italy. She graduated in Mechanical Engineering from the University of Pisa in 1993. From 1994 to 2001 she was a researcher and assistant professor in Economics and Managerial Engineering; since then she has been an associate professor of Business Economics and Supply Chain Management. Her main research interests are in supply chain management, the development and application of decision analysis and artificial intelligence tools and techniques, and their use in operations management.
Part III Industrial Data and Management Systems
Chapter 14
Asset Integrity Management: Operationalizing Sustainability Concerns

R. M. CHANDIMA RATNAYAKE
Center for Industrial Asset Management (CIAM), Faculty of Science & Technology, University of Stavanger-UiS, N-4036, Stavanger, Norway
[email protected]
The complexity of integrating the concept of sustainable development with the reality of asset integrity management (AIM) practices has been widely debated. Such integration is important for establishing and completing an AIM system with practical application value across the whole integrity management system. Identifying and prioritizing asset performance through identified risks, and detecting and assessing data, resulting in cost savings in the areas of design, operation, and technology application, are addressed through a sustainability lens. The research study arose from a project initiated to develop governing documents for a major operator company for assessing asset integrity (AI), focusing particularly on design, operational, and technical integrity. The introduction of a conceptual framework for AIM knowledge, along with coupled tools and methodologies, is vital, as it relates to sustainable development regardless of whether the particular industry belongs to the public or private sector. The subsequent conceptual framework for sustainable asset performance reveals how sustainability aspects may be measured effectively as part of AIM practices. Emerging AIM practices that relate to sustainable development emphasize design, technology, and operational integrity issues, splitting the problem into manageable segments, and alternatively measure organizational alignment for sustainable performance. The model uses the analytic hierarchy process (AHP), a multicriteria analysis technique that provides an appropriate tool to accommodate the conflicting views of various stakeholder groups. The AHP allows users to assess the relative importance of multiple criteria (or multiple alternatives against a given criterion) in an intuitive manner. This holistic approach to managing AI provides improvement initiatives rather than seemingly ad hoc decision making. The information in this chapter will benefit plant personnel interested in implementing an integrated AIM program or advancing their current AIM program to the next level.

Keywords: Asset integrity management; sustainability; performance measurement; analytic hierarchy process.
1. Introduction

The 1970s were a watershed decade for international environmentalism. The first US Earth Day was held in 1970, the same year the US Environmental Protection Agency (EPA) was created. The first United Nations (UN) Conference on the Human Environment was held in Stockholm in 1972, which led to the formation of the United Nations Environment Programme (UNEP). The UN then set up the World Commission on Environment and Development, also called the Brundtland Commission, which in its 1987 report, "Our Common Future" (see Ratnayake and Liyanage, 2009), defined sustainable development as development that "meets the needs of the present generation without compromising the ability of future generations to meet their own needs." Since then the influence of the concept has increased, and it features increasingly as a core element in the policy documents of governments and international agencies (Mebratu, 1998). In the same decade, governments reacted to public concern about the environment by enacting a raft of legislation; for example, the US Congress enacted the seminal legislation for clean water, clean air, and the management of waste. The hard work of activists and writers such as Rachel Carson, with her 1962 book Silent Spring (Carson, 1962), had started to pay off.

The response by industry to the call for regulation and to public concern was to design and implement management systems for health, safety, and environment (HSE) to assure management, shareholders, customers, communities, and governments that industrial operations were in compliance with the letter and the spirit of the new laws and regulations. Environmental concerns were comprehensively incorporated into corporate policies and procedures during the late 1980s and early 1990s. These management systems grew during the 1990s, as there was growing recognition of the interrelationship between economic prosperity, environmental quality, and social justice. The phrase "sustainable development" became the catchword in government and corporate circles for these three pillars of human development.

A more recent definition of the concept of sustainability was presented by John Elkington in his book Cannibals with Forks. Elkington describes the triple bottom line ("TBL," "3BL," or "People, Planet, Profit") concept, which balances an expanded spectrum of values and criteria for measuring organizational success across economic, environmental, and societal conditions (Elkington, 1997; Ratnayake and Liyanage, 2007, 2008). For example, the development of innovative technologies has played an important role in increasing the global competitive advantage of high-tech companies (Ma and Wang, 2006). On the other hand, "[w]ho can resist the argument that all assets of business should contribute to preserving the quality of the societal and ecological environment for future generations"? The need to incorporate the concept of sustainable development into decision making, combined with the World Bank's three-pillar approach to sustainable development, resulted in the popular business term "triple-bottom-line decision making" (World Bank, 2008).
The World Summit on Sustainable Development (WSSD) in 2002 highlighted the growing recognition of the concept by governments as well as businesses at a global level (Labuschagne and Brent, 2005) and demonstrated very clearly that it is not practical to consider environmental issues separately from socioeconomic issues such as health and safety, poverty, etc. The term "sustainable development" is therefore used in the sense of sustaining human existence, including the natural world, in the midst of constant change. It would be a mistake to use the term to mean no change or to assume that we can freeze the status quo of the natural world. However, the rate of change is an important consideration. Perhaps the challenge to the modern business world should be more properly named "management of change" instead of "sustainable development."

2. Sustainability in Industrial Asset Performance

The term "sustainable development" in the context of asset integrity management (AIM) is not used to mean sustaining the exploitation of an asset indefinitely. Rather, it means meeting the needs of the global society for producing a product at a reasonable cost, safely, and with minimal impact on the environment. The traditional industrial model focused on labor productivity, a roadblock on the way to local and global industrial sustainability, while assuming nature would allow the available resources to be exploited indefinitely. Moreover, most industrial minds are reluctant to change their mindset to reap the benefits of resource productivity. Consequently, many companies have not paid enough attention to quantifying the link between sustainability actions, sustainability performance, and financial gain, or to making the "business case" for corporate social responsibility. Instead, they act in socially responsible ways because they believe it is "the right thing to do." The identification and measurement of societal and environmental strategies is particularly difficult, as they are usually linked to long time horizons, a high level of uncertainty, and impacts that are often difficult to quantify. This is clearly captured by the first EPA administrator, William Ruckelshaus: "Sustainability is as foreign a concept to managers in capitalist societies as profits are to managers in the former Soviet Union" (Hart and Milstein, 2003). That is, for some managers sustainability is a moral mandate and for others a legal requirement. Yet others view sustainability as a cost of doing business, a necessary evil to maintain legitimacy and the right to operate. A few firms, such as HP and Toyota, have begun to frame sustainability as a business opportunity, offering avenues for lowering cost and risk, or even growing revenues and market share through innovation (Holliday, 2001). The detection of enterprise sustainability remains difficult for most firms due to maturing assets, misalignments within the organization, the lack of a mechanism to recognize the present alignment of sustainability concerns and so realize the gaps, etc.; addressing these difficulties may in turn be reconciled with the objective of increasing value for the firm itself, as well as for its stakeholders. On these grounds, AIM was initially conceived to focus on industries with hazardous operations, such as oil and gas, nuclear power, etc. For
example, the offshore oil and gas industry on the UK Continental Shelf (UKCS) is a mature production area. Much of the offshore infrastructure is at, or has exceeded, its intended design life. This apparent general decline in the condition of installations led to Key Program 3 (KP3), focusing on AI, being run during the period 2004–2007. Figure 1 illustrates a holistic view of how the AIM backbone supports sustainability approaches. As per Lord Kelvin, “When you can measure what you are talking about and express it in numbers you know something about it, but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science” (Ratnayake and Liyanage, 2009). For managing industrial assets, there must be a way to measure the assets’ performance. The late Peter Drucker influenced generations of managers with his admonition: “If you can’t measure it, you can’t manage it.” That is, to manage an organization (be it a nuclear installation, an O&G plant, or a child-welfare agency), the managers have to be able to measure what they are doing (Behn, 2005). To achieve so-called sustainability, a commercial organization has to design and then adapt its asset management structure, policies, and procedures to guide and regulate its internal practices. Asset upkeep is seen as a cost center according to classical economic theories. Nevertheless, in some leading companies like Toyota, HP, Shell, etc., managers have begun to realize the importance of intangibles and to reexamine industrial operations through a value-added lens.
Figure 1. Industrial asset performance from a Triple Bottom Line (TBL) sustainability and AIM point of view: inputs (financial, human, information, and physical assets) pass through AIM (DI, OI, and TI) to outputs subjected to TBL impact assessment (Economy−Environment−Society), spanning an optimization domain (short-term focus), a systemic analyses domain, and a holistic sustainable industrial asset performance domain (long-term focus), with stakeholder needs-based conceptualization and trends for sustainability.
Hence, asset upkeep is now seen not only as a cost, but also as a process with significant potential to add value for long-term survival in a competitive business world. More recent publications that have brought this issue into open discussion include Liyanage (2003), Liyanage and Kumar (2003), Jawahir and Wanigaratne (2004), Liyanage (2007), and Ratnayake and Liyanage (2007). One of the critical elements of sustainability in the industrial world lies in understanding the role that industrial assets play in this process. Because industrial assets largely drive the way resources are consumed, waste is created, and society is structured, their role is not insignificant. In fact, some neoclassical economists believe that many pessimistic views of resource scarcity are driven by a misunderstanding of the powerful substitutability between industrial (technology-related) assets and natural resources (Stiglitz, 1979). It is recognized that the right priorities are a critical ingredient in operationalizing sustainability concerns in the AIM recipe. This chapter devises an “operational knowledge tool” that can help determine the present priorities of an entire industrial organization for managing assets. Consequently, the results can be used to align (manage) the entire organization with the needs of a sustainable industry.
3. What Is Asset Management (AM)?
As per the Xerox Corporation, “Asset management is the process of reusing an asset (machine, subassembly, piece part, and packing material) either by remanufacturing to its original state, converting to a different state, or dismantling to retrieve the original components” (Boateng et al., 1993). The British Standard PAS 55 endorses the need for primary, performance-accountable asset (or business) units, with secondary “horizontal” coordination and efficiency aids through asset-type specializations, common service providers, standards, etc. However, not many managers involved with AM can really claim to have such a structure in place yet. PAS 55 provides a holistic definition of AM: “Systematic & coordinated activities and practices through which an organization optimally manages its physical assets and their associated performance, risks and expenditures over their lifecycles for the purpose of achieving its organizational strategic plan.” Hence, AM can be considered as “the optimum way of managing assets to achieve a desired and sustainable outcome” (PAS 55-1, 2004). Consequently, it can also be concluded that AM is the art and science of making the right decisions and optimizing the related processes. The management of “physical assets” (for instance, their design, selection, maintenance, inspection, renewal, etc.) plays a key role in determining the operational performance and profitability of industries that operate assets as part of their core business. For AM to live up to these key roles, it has to meet a number of challenges. Some of the challenges are (Wenzler, 2005):
1. Alignment of strategy and operations with stakeholder values and objectives
2. Balancing of reliability, safety, and financial considerations
3. Benefiting from performance-based rates
4. Living with the output-based penalty regime, etc.
The fundamental asset management tasks range from technical issues, like maintenance planning or the definition of operational fundamentals, through more economic themes, like investment planning and budgeting, up to strategic planning issues. The general configuration of an asset-centered organization can be visualized as shown in Fig. 2. Investment depends on the availability of money, which is directly influenced by internal and external subcontractors, while capital expenditure (CAPEX) and operational expenditure (OPEX) determine the net performance for which the asset manager is accountable. Thus each company should not only compete efficiently, but also manage knowledge, strategic costs, and strategic advantages by tracking value creation through each asset (Sheble, 2005). The value added by each asset is based on its value to the supply chain, and it should be properly introduced and managed to provide ultimate stakeholder satisfaction. But one can ask oneself the following questions:
• How does a company get there?
• How do they know, and demonstrate, what is “optimal?”
• How do they coordinate component activities towards this goal?
Figure 2. Asset-centered organization: an asset manager and multidisciplined team, responsible for CAPEX and OPEX and accountable for net performance, operate within discrete asset system boundaries with measurable performance and clear contracts/service level agreements/alliances, supported by internal and external subcontractors.
• How can responsibility for integrated, sustainable performance be instilled?
• How do we develop the skills, tools, and processes to establish and sustain such an environment in the first place?
To handle the whole picture, the relationships between human, financial, information, infrastructure, and intangible assets and physical “assets” must be well understood. Figure 3 illustrates how physical assets coexist with financial, intangible, information, and human assets; the area within “PQRS” can be considered pure infrastructure assets, like buildings, machines, inventories, etc. (PAS 55-1 and 55-2, 2004). Table 1 illustrates how physical assets, surrounded by the industrial world, are interconnected with financial, intangible, information, and human assets through AI lenses. The concept of asset management is difficult to accept as a philosophy and to implement in practice because asset management means different things to people who work in dissimilar disciplines. For example, some disciplines in an organization may feel they already have an asset management system in place when they have only implemented an inventory-control system. Each stakeholder in a company may target changes in assets for different goals. Those in maintenance might view assets as machines that need to keep working. To those in finance, the assets
Figure 3. Physical assets through asset integrity lenses. (Adapted from PAS 55-2, 2004.)
Table 1. Interconnecting Relationships of Physical Assets Through AI Lenses.
• Financial and physical: life-cycle costs, capital investment criteria, operating costs, depreciation, taxes, etc.
• Intangible and physical: reputation, image, morale, health and safety constraints, social and environmental impacts, etc.
• Information and physical: condition monitoring, performance and maintenance activities, overheads and opportunities, etc.
• Human and physical: training, motivation, communication, roles and responsibilities, knowledge, experience, skills, competence, leadership, teamwork, etc.
represent bundles of capital and cash flow, and may be tempted to covet them without regard for their true purpose. Distribution looks at assets as a means of more effectively transporting goods. Field managers see them as ways of getting products into warehouses or transporting them. Manufacturing may see them as enablers of quality. IT may view them as enablers of information management. Many organizations say “people are our most important asset” but, compared with their understanding of physical assets, understand little about what it means to nurture and develop them. The CEO sees all of them as competitive differentiators (Woodhouse, 2001). Because of these obvious differences among organizations and within the same organization, the asset management implementation plan should have a series of overarching principles established at a high level. Hence, the asset management implementation plan should be (PAS 55-1 and 55-2, 2004):
• Holistic: looking at the big picture, i.e., integrating the management of all aspects of the assets (physical, human, financial, information, and intangible) rather than taking a compartmentalized approach.
• Systematic: a methodical approach, promoting consistent, repeatable decisions and actions, and providing a clear and justifiable audit trail for decisions and actions.
• Systemic: considering the assets as a system and optimizing the system rather than optimizing individual assets in isolation; as Goldratt said, “A system of local optimums is not an optimum system.” Local optimization can result in islands of productivity within a factory that, overall, is a mess.
• Risk-based: focusing resources and expenditure, and setting priorities, appropriate to the identified risks and the associated costs/benefits.
• Optimal: establishing an optimum compromise between competing factors associated with the assets over their life cycles, such as performance, cost, and risk.
• Sustainable: considering the potential long-term adverse impact on the organization of short-term decisions aimed at quick wins. This requires achieving
Figure 4. Role of the asset manager in industry.
an optimum compromise between performance, costs, and risks over the assets’ life cycle, or over a defined long term. This would be difficult to achieve with separate capital and operating expenditures and annual accounting cycles; performance accountability and investment/expenditure responsibility should therefore be more closely linked and lie within the asset management context. Figure 4 illustrates the role of the asset manager in an industrial organization. Understanding what is worth doing, why, when, and how, including the linkages between the asset management strategy and the overall objectives and plans for the entire organization, is of prime importance. As suggested by PAS 55, assets are not all the same: there is diversity in asset type, condition, performance, and business criticality. Mapping what is worth doing onto which assets and when is complex, dynamic, and uncertain, and involves a mix of outputs, constraints, and competing objectives. Dividing the whole problem into manageable components and understanding the asset management system boundaries is vital. Figure 5 illustrates how these complexities are interlinked.
4. The Origins of “Integrated, Optimized Asset Management”
The best asset leaders pride themselves on being able to deal with tough asset situations, solving problems proactively with their tools and planning ahead on how to optimize their assets for business performance, while most managers at present spend their time reacting to breakdowns and emergencies. There is certainly a big contrast between merely “managing the assets” (which many companies would feel they have been doing for decades) and the integrated, optimized whole-life management of physical, human, intellectual, reputation, financial, and
Figure 5. Relationship of “top to bottom and bottom to top” w.r.t. the asset management system: legal and stakeholder requirements and expectations (customers, shareholders, suppliers, regulators, employees, society, etc.) and business plans flow top-down through the asset management policy into an optimized AM strategy (objectives, plans, performance targets, etc.), while asset system individuality (performance, condition, criticalities, needs, etc.) and performance and condition monitoring flow bottom-up from the design, operational, and technical integrity of asset systems or business units, setting priorities for continuous improvement. (Adapted from PAS 55-1 and 55-2, 2004.)
other assets. The acquisition, use, maintenance, modification, and disposal of critical assets and properties are vital to most businesses’ performance and success. Globalization, shifting labor costs, maturing assets, and sustainability concerns all create pressure to further improve current asset performance. The word sustainability has become a priority, especially in natural resource-dependent industries (for instance at Shell, BP, Toyota, etc.). As the changing nature of legal requirements and stakeholder pressure upset the “profitability equation,” businesses started searching for areas where they could recalibrate quickly to stay competitive. On these grounds, the performance of physical assets and AM became key to altering the profitability equation. Figure 6 illustrates the evolution of AM and corporate thinking. Over the decades, AM has been transformed from a “necessary evil” to what it is today, where companies look at entire asset lifecycles and align AM with strategic and sustainable goals. In the near future, we can expect to see more technology integrated into the assets themselves to address sustainability issues. Technologies such as self-diagnostics, radio-frequency identification (RFID) chips, etc., along with AI
Figure 6. The evolution of AM and corporate thinking, by value/impact over time: “necessary evil” (paper systems, corrective maintenance, frequency-based PMs) around 1970; early automation (new technologies, PdM, software systems, changed PM systems) around 1980; systemized management (maturing organizations, looking at root causes, mature software systems, software adjusting to business) around 1990; lifecycle awareness (searching for lifecycles, upstream improvements, product design for manufacturing, wireless technologies) around 2000; and distributed intelligence (self-diagnostics, communicating assets, product and process improvements for sustainability) toward 2010, moving from the operational to the strategic and sustainable.
techniques will enable the communication of status, breakdowns, and performance metrics directly to management systems in real time. The rush of corporate and regulatory interest over the last quarter century in asset management that is better, optimized, and integrated (i.e., financially viable, environmentally friendly, and protective of health and safety within and around an industrial organization) has gathered considerable momentum (Ciaraldi, 2005). For example, the oil and gas sector in the European North Sea has had the longest exposure to the necessity of integrated, optimized AM, starting with the wake-up calls of the late 1980s and beyond: the Piper Alpha disaster, the Brent Spar incident, the oil price crash, Lord Cullen’s recommendations on risk/safety management, market globalization, etc. These examples carry several implications for the smart asset manager. First, the asset manager must understand that their function is constantly changing. This means that they will need to understand and apply new practices and new technologies to address new challenges. From a reactive perspective, these managers will have to adapt and evolve to stay competitive, keep up with customers, be compatible with supply and distribution partners, and stay on top of the constant top-down mandates to perform better and reduce costs. The second implication, a more offensive one, is the opportunity for excellence, where the leaders in AM do not just follow the evolution but help create it. The best and boldest asset managers will see the changes in practices, design, and technology as a new opportunity to serve their business better, drive competitive differentiation, and show leadership to customers (Alguindigue and Quist, 1997). Better use of the asset portfolio to meet the many demands of stakeholders should embrace financial performance, the design and production
process, operational effectiveness, etc. Those who see the evolution as an opportunity may find themselves in unique positions to add proactive value to the organization, and stand to contribute to the business both operationally and strategically. The evolution implies a need to sort and examine an increasingly complex, more tightly interlinked array of options. The asset manager is supposed to come up with ideas that are “best practices”; the question is where the best practices are to be found. As each business has its unique needs and limited funds to invest, the challenge for the asset manager will be to sort through the mess and glitz of the fashionable “best practices” buildup and find the practices that suit them. In other words, the best practices are the ones that align organizational strategy with sustainability concerns and drive more value to the “triple bottom line.” In this context, it is important to measure the organization’s present priorities and recognize the gaps with respect to what they should be. These pressures forced a fundamental reappraisal of business models, and brought the recognition that big companies, while holding a number of strategic advantages and economies of scale, were losing the “joined-up thinking” and operational efficiency that smaller organizations naturally enjoy (or need, in order to survive). Hence, asset-centered organizational units emerged, with the term “asset” taking various differing definitions. For instance, some used the oil/gas reservoir as their starting point, along with all the associated infrastructure to extract it, while others chose the physical infrastructure (platforms) as the units of business or profit centers.
5. Asset Integrity (AI): Definition
AI for a sustainable industrial organization requires a good knowledge of the business, asset condition, the operating environment, and the link between application data and decision-making quality, and it requires integrating distinctive management modules (e.g., resource, safety, risk, environmental, project, financial, and operations and maintenance management, etc.) for the delivery of results (Liyanage, 2003b). The crucial task in AM is to understand clearly what is important to the business and how to deliver it from the assets, to make the underlying objectives of an asset clear to everyone, and to ensure that they stay within the frame of the business objectives, minimizing inherent clashes (Hammond and Jones, 2000). The following definitions exemplify this explanation and help in understanding the essence of AI in a broad sense.
• “Sum of all those activities that result in appropriate infrastructure for the cost-efficient delivery of service, since it intends to match infrastructure resource planning and investment with delivery objectives.” (Rondeau, 1999).
• “AI is a continuous process of knowledge and experience applied throughout the lifecycle to manage the risk of failures and events in design, construction, and
during operation of facilities to ensure optimal production without compromising safety, health and environmental requirements.” (Pirie and Ostny, 2007).
• “The maintenance of fitness for purpose of offshore structures, process plant, connected pipelines and risers, wells and wellheads, drilling and well intervention facilities, and safety systems.” (International Regulators Forum; Richardson, 2007).
Primarily, AI is about making sure the assets function effectively and efficiently while safeguarding life and the environment. Assuring AI means that risks to personnel are controlled and minimized, which at the same time ensures stable industrial operations. “If you are going to achieve excellence in big things, you develop the habit in little matters. Excellence is not an exception, it’s a prevailing attitude.” (General Colin Powell; Notes, 2007). To understand the concept of AI broadly, it can be divided into three segments: design integrity (DI), operational integrity (OI), and technical integrity (TI). For the sake of clarity, these terms can be defined as follows:
• DI (“assure design for safe operations”): “Assurance that facilities are designed in accordance with governing standards and meet specified operating requirements.” (Pirie and Ostny, 2007).
• OI (“keep it running”): “Appropriate knowledge, experience, manning, competence and decision-making data to operate the plant as intended throughout its lifecycle.” (Pirie and Ostny, 2007).
• TI (“keep it in”): “Appropriate work processes for maintenance and inspection systems and data management to keep the operations available.” (Pirie and Ostny, 2007).
Table 2 provides the general issues related to each.
6. Case: Flaring System within a Petroleum Asset Through the Lens of AI
In the 1980s, the Norwegian government decided to introduce a CO2 tax, greatly focusing attention on excessive flaring within the Norwegian sector of the North Sea (ARGO, 2004; PUBS, 2005). Within the UK offshore operational area, for example, flaring accounts for some 20% of CO2 emissions from the offshore oil production industry (with 71% coming from power generation), leading to environmental, safety, and maintenance challenges in relation to the flare ignition/combustion system (UKOOA, 2000). However, due to the taxes, the expenditure of retrofitting existing offshore assets often calls into question the benefit of full zero-flare solutions. As a result, many operators have been influenced to pursue flare gas recovery or zero flaring as a requirement for new assets. For existing assets, on the other hand, it is important to re-examine the total benefits of complete installation of zero-flare systems, or to consider partial installation based on the benefits (ARGO, 2004) as seen through TBL lenses.
Table 2. General Issues Pertaining to DI, TI, and OI in Relation to Industrial Applications.
DI:
• Long tradition in the industry of designing safety barriers according to regulations and recognized international standards, followed by in-depth verification programs in design and construction.
• Struggle with the transfer of data and knowledge from construction to operation.
• Struggle with change management control.
OI:
• Overall work process for maintenance/inspection planning and execution well established.
• Inadequate integration of maintenance and safety work processes.
• Work processes for the analysis of experience data and continuous improvement not in place.
• Traditionally, operators and maintenance disciplines are technically competent, but lack the analytical skills required for the application of more systematic and advanced decision models.
• Struggle with knowledge management.
TI:
• Maintenance management systems (MMS) are in place.
• Varying quality of planning and prioritization; expert judgment, rather than decisions based on risk models and in-house experience data.
• Reporting of failure information generally poor for optimization purposes.
However, the experience of managing assets in the Norwegian petroleum industry clearly demonstrates that there is a financial reward, and that the technology developed is available today for the entire industry to use. Charlie Moore, Director of Engineering at Stargate Inc., stated: “we moved to tools that are light years ahead of what we were using. Our previous tools were slow and not rules-driven. We had many errors escaping into the field which hurt our reputation. Now with these new tools, we are delivering more robust products at less cost.” (Aberdeen Group, 2007). Through AI lenses, achieving a zero-flaring system is directly related to DI. One of the main areas of real benefit for operators in the petroleum industry using zero-flaring systems is the major reduction in the maintenance requirements of the flare system. Failure of components within the flare system will at best cause a safety concern, and at worst force an unexpected facility shutdown. If uncorrected, damage to the flare system can impact the OI of the process facility. Beyond that, component failure on the flare system will also lead to TI issues: for instance, during low flow-rate flaring the flame wafts about in the breeze, impinging directly on the flare tip itself, the pilots, and any other equipment on the flare deck. Any flare system, and particularly one located on a vertical tower, is susceptible to dropped objects. There is usually a relatively small
top flare deck, and if any part of the flare system were to fail, there would be a risk of dropped objects. Typical dropped objects may include sections of the outer wind fence on the flare itself, damaged by heat load and battered by wind (ARGO, 2004). Other items susceptible to breaking away from the flare include elements of the pilot and ignition assembly. There is plenty of experience within the industry as a whole of pilot nozzles failing, or parts of the ignition rods breaking free. In more extreme cases, parts of the flare can fall, causing health and safety issues. Finally, the flare deck will require periodic repair, especially if upset liquid was flared during the process, resulting in very high heat loads on the flare deck. In addition, the maintenance of AI is not just about the integrity of equipment but also about developing and maintaining integrated systems and work processes, and ensuring the competence of individuals and teams, so as to create and sustain a world-class operating culture supported by a few clear and well-understood values and behaviors (BP, 2004). AI is the sum of these features, and offers a way of assessing the net merit of any new activity (which improves some features at the expense of others) while assuring that the asset functions effectively and efficiently and that life and the environment are safeguarded. Figure 7 illustrates how AI declines with time. Table 3 illustrates varying scenarios of AI in different periods of an industrial asset (Ratnayake and Liyanage, 2009a). Figure 8 illustrates some general example elements for assessing and improving AI.
7. Roles of Integrity Management (IM)
Integrity is defined as “. . . unimpaired condition; soundness; strict adherence to a code . . . , state of being complete or undivided,” and management as the “judicious use of means to accomplish an end” (Webster, 2008). The definition of IM is field-specific. For instance, in oil and gas production operations it is defined
Figure 7. The behavior of AI under reactive, proactive, and continuous improvement conditions during the life-cycle of an asset (Ratnayake and Liyanage, 2009a). Points A–F on the integrity–time curve define the stretches referred to in Table 3.
Table 3. Varying Scenarios of AI in Different Time Periods.
• AB, inherent decline with “living” assets: “living” assets are subjected to variations which in turn affect the condition of equipment, leading to technical and operational problems.
• BC, improvement measures: design, technical, and operational improvements.
• BF, problems in AI management: unexpected problems can occur due to poorly defined AI practice, for instance relaxation of procedures, changes in operating conditions, lack of competence and training, etc.
• CD, AI management with continuous improvement: change maintenance practices proactively through monitoring; continuous revision of maintenance procedures; redirect capital expenditures more efficiently, based on planned replacement of equipment rather than replacement following a failure; better data management processes to capture and institutionalize expert system knowledge from more experienced personnel and elevate the level of all technical people.
• CE, AI management following “absolute minimum” solutions: strict conditions on budget and resource allocations; poorly managed/changed processes.
as a “continuous assessment process applied throughout design, maintenance, and operations to proactively assure facilities are managed safely” (Ratnayake and Liyanage, 2009; Ciaraldi, 2005). IM requires industrial asset operators to assess, evaluate, repair, and validate, through comprehensive analysis, the safety, reliability, and security of their facilities in high-consequence areas to better protect society and the environment (FERC, 2005). BP defines it as “the application of qualified standards, by competent people, using appropriate processes and procedures throughout the plants life cycle, from design through to decommissioning.” (Ratnayake and Liyanage, 2008; BP, 2004). Corrosion, metallurgy, inspections, and non-destructive evaluation (NDE) readily come to mind as fields where IM plays an important role. But health, safety, environmental, and quality (HSE&Q) issues, in parallel with financial and stakeholder expectations (that is, people, competency assurance, procedures, emergency response, incident management, etc.), also carry significant IM roles. The catastrophe history of Grangemouth (1987), Piper Alpha (1988), Longford (1998), P-36
Figure 8. Asset integrity enhancements. E1, implementation of technical (hardware) recommendations; E2, training and competence assessment of integrity of personnel; E3, implementation of operation, engineering, maintenance procedures and inspection programs; E4, development and implementation of integrity operating windows; E5, effective implementation of risk management and management of change. (Adapted from Shell, 2008.)
(2001), Skikda (2004), the Texas refinery incident (2005), and others has led many IM practitioners to conclude that a more holistic approach is needed for managing industrial assets. Effective IM combines many activities, skills, and processes in a systematic way, from the very top of the organization to the bottom and vice versa. Hence the managing of these activities, skill sets, and processes can be called “IM” (Ciaraldi, 2005), which has been part of mainly hazardous industries (for instance nuclear installations, oil and gas facilities, chemical processing plants, etc.) for at least the latter half of the 20th century and perhaps even before. This is because each aspect of IM can be considered a barrier against the consequences of a major incident, the prevention and mitigation of which are its main objectives (Saunders and O’Sullivan, 2007). Figure 9 shows how imperfect filters (imperfect decision bases) can result in major accidents. Historically, the barriers and escalation control layers were defined by standards, regulations, engineering codes (or practices), etc. Though effective IM uses all of these, with rocketing stakeholder pressures they must be refined from time to time, and the correct priorities must be selected based on ideas from the whole cross-section of an organization.
Figure 9. Visualization of major accident event in relation to operating assets and the prioritization process.
The following are some of the IM buzzwords appearing in the petroleum industry: Well Integrity Management (WIM), Structural Integrity Management (SIM), Pipeline Integrity Management (PIM), Pressure Equipment Integrity Management (PEI), Rotating Equipment Integrity Management (REIM), Lifting Equipment Integrity Management (LEIM), Civil Work Integrity Management (CWIM), etc. However, the industry has realized the need to organize all the fragmented IM systems into a single entity, namely, the AIM system (Alawai et al., 2006).
8. Asset Integrity Management
AIM is a complete and fully integrated company strategy directed toward optimizing efficiency, thereby maximizing the profit and sustainable return from operating assets (Montgomery and Serratella, 2002). This is one of many definitions used to describe AIM, and the definition varies somewhat depending on the industry. This definition constitutes the basis for the significant performance improvement opportunities available to almost every company in every industrial sector. If we broaden the scope to cover not just infrastructure assets but “any” core owned elements of significant value to the company (such as good reputation, licenses, workforce capabilities, experience and knowledge, data, intellectual property, etc.), then AIM represents the sustained best mix of:
• asset “care” (i.e., maintenance and risk management); and
• asset “exploitation” (i.e., “use” of the asset to meet some corporate objective and/or achieve some performance benefit).
Figure 10. The concept of AIM: the IM of physical assets is divided into DIM (design for production/manufacturing and design for maintenance), OIM, and TIM (plant operations, inspections, maintenance, etc.). (From Ratnayake and Liyanage, 2009.)
For the care and exploitation of physical assets, the whole problem is divided into manageable subsections. That is, managing AI consists of Design Integrity Management (DIM), Operational Integrity Management (OIM), and Technical Integrity Management (TIM), as suggested by Fig. 10. The financial services sector, for instance, uses the term to describe finding the right combination of asset “value retention” (capital value) and “exploitation” (yield) over the required time horizon. Physical assets can likewise be protected and well cared for, with high capital security (condition) but lower immediate returns (profit), quite like different bank accounts or investment options. On the other hand, they can be “sweated” for better short-term gains, sacrificing long-term gains and putting the condition, and hence the future usefulness/value, of the assets at risk. AIM for sustainability involves trying to juggle these conflicting objectives: milking the cow today, but also caring for it so that it can be milked and/or sold well in the future, by definition sustaining the “license-to-operate.” In this context, AIM for “sustainability” is the phrase for the resolution of trade-offs and compromise requirements, but few really understand what it means in practice. A sustainability focus involves “equality of impact, pressure or achievement” on the one hand, and on the other involves trying to find the most attractive “combination” (sum) of conflicting elements. This may involve several options, such as a lot of cost at very little risk, or vice versa, or any other combination, just so long as the net total impact is the best that can be achieved. Figure 11 illustrates how sustainability and capital value vary with respect to yield (based on the level of asset exploitation).
Figure 11. The concept of “sustainability” in the AIM context: as the level of asset exploitation increases, yield rises while sustainability and capital value fall; the resultant business impact (net present value) peaks at a balance point that is the optimum focus for AIM. (Adapted from Bower et al., 2001.)
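As a toy illustration of the balance shown in Fig. 11 (an invented model for exposition only, not one taken from the chapter), one can treat yield as increasing, and sustainability/capital value as decreasing, in the exploitation level, and locate the interior optimum of their sum:

```python
import numpy as np

# Toy model: yield grows with exploitation; sustainability/capital value decays.
x = np.linspace(0.0, 1.0, 101)        # level of asset exploitation (0..1)
yield_value = np.sqrt(x)              # diminishing returns from "sweating"
sustain_value = 1.0 - x**2            # accelerating loss of condition/value
impact = yield_value + sustain_value  # resultant business impact

best = x[np.argmax(impact)]
print(f"Optimum exploitation level ~ {best:.2f}")  # interior balance point
```

Any other monotone shapes would do; the point is only that the resultant impact curve peaks strictly between "no exploitation" and "maximum sweating."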
Managers in the context of AM should not organize themselves into groups of functional specialization, as the whole picture will then not surface. This is because uncertainties about asset behavior, future requirements, performance values, costs, and risks all contribute to making the boundaries “fuzzy” in nature. For instance, departments are set up to design/build the assets (“engineering”), to exploit them (“operations” or “production”), or to care for them (“maintenance”). Only the top level of an industrial organization has the responsibility for optimizing the combination, unless “asset-based management” has been adopted properly across a cross-section of the organization. Organizing by “activity type” may be administratively convenient for the managers, but it loses sight of the larger sustainability perspective. The slogan “Every one is optimizing. Don’t be left out!” (TEADPS, 2009) is visualized in relation to the AIM-for-sustainability formula, AIM = f(economic, environmental, and societal issues), as shown in Fig. 12. AIM enters the equation mainly because of the many improvement methodologies and techniques available to improve plant reliability and availability, such as Reliability-Centered Maintenance (RCM), Planned Maintenance Optimization (PMO), Risk-Based Inspection (RBI), Total Productive Maintenance (TPM), Total Quality Management (TQM), Six Sigma, etc. How can they be used in an optimum way to satisfy environmental, health, and safety (societal) concerns while making a profit? Further, all of these techniques have significant limitations when dealing with high-consequence, low-probability events. Those are the events that have a potentially catastrophic impact on industrial AI and, consequently, on the sustainability of an organization.
Figure 12. Optimization of AIM concerns through sustainability lenses: optimum AIM with sustainability concerns (RCM, PMO, RBI, TPM, TQM, Six Sigma, etc.) at the intersection of economy, environment, and society.
9. Present Grounds for Pushing Industry Toward Sustainability Focusing and Searching for AIM
“Brent Spar,” one of Shell’s installations in the UK sector of the North Sea, provides lessons on how, as time progresses, the measures surrounding an industrial asset may increase and, as they do, the room for action shrinks. Shell was compelled to abandon its plans to dispose of Brent Spar at sea (while continuing to stand by its claim that this was the safest option) due to public and political opposition in northern Europe (including some physical attacks and an arson attack on a service station in Germany), from both an environmental and an industrial health and safety perspective. The incident came about as a result of Greenpeace organizing a worldwide, high-profile media campaign against Shell’s plan for disposal in deep Atlantic waters at North Fenni Ridge (approximately 250 km from the west coast of Scotland, at a depth of around 2.5 km). Thousands of people stopped buying their petrol at Shell outlets, although Greenpeace never called for a boycott of Shell service stations (Anderson, 2005). The final cost of the Brent Spar operation to Shell was between £60 M and £100 M when the loss of sales (“Shell’s retail sales in Germany and other European countries had fallen by 30%”) was considered (Melchett, 1995). The incident further exemplifies how decisions made for short-term benefit have their economic, environmental, and societal repercussions. Balancing sustainability concerns with business performance can put some owners and operators in a precarious situation. The dilemma places them in a “gray zone,” juggling risks and reward. But there has been no better time in the industry to feel confident about managing AI than today. Figure 13 illustrates how AIM comes to the surface through sustainability concerns while the degree of
Figure 13. The reasons for pushing an industrial organization toward sustainability concerns and the role of AIM. (Adapted from Ratnayake and Liyanage, 2009.)
uncertainty is rocketing due to the increasing number of constraints and the shrinking room for action in managing an industrial asset. The “Piper Alpha” incident gives further evidence of how bad AIM affects the economic, health, safety, and environmental condition of an organization. Piper Alpha was a North Sea oil production platform operated by Occidental Petroleum (Caledonia) Ltd. The platform began production in 1976, first as an oil platform, and was later converted to gas production. An explosion and the resulting fire destroyed it on July 6, 1988, killing 167 men. The total insured loss was about £1.7 billion (US$3.4 billion). To date, it is the world’s worst offshore oil disaster in terms of both lives lost and impact on the industry. The inquiry was critical of Piper Alpha’s operator, Occidental, which was found guilty of having inadequate “maintenance” and “safety” procedures. After the Texas City Refinery incident (which killed 15 and injured over 170 people), the Baker panel report was released in January 2007. The principal finding was that BP management had not distinguished between “occupational safety” (i.e., slips, trips and falls, driving safety, etc.) and “process safety” (i.e., design for safety, hazard analysis, material verification, equipment maintenance, process upset reporting, etc.). The metrics, incentives, and management systems at BP focused on measuring and managing occupational safety while ignoring process safety. This incident highlights well how the number of measures is rocketing, and the significance of AIM through sustainability lenses. The Baker panel concluded that BP mistook improving trends in occupational safety statistics for a general improvement in all types of safety. Social justice values, concerning for instance child labor, exposure to dangerous chemicals, sexual harassment, verbal abuse, etc., also directly influence industrial asset performance. Consider the case of United Students Against Sweatshops, to quote Anderson:
“Nike’s failure to manage its supplier subcontractors led to a boycott based on social justice values” (Anderson, 2005); as a result, Nike’s stock price and revenues dropped. As noted by Eric Brakken, organizer for United Students Against Sweatshops: “What Nike did is important, it blows open the whole notion that other companies are putting forward that they can’t make such disclosures. Disclosure is important because it allows us to talk to people in these overseas communities, religious leaders, human rights leaders, who are able to go and examine and verify working conditions” (as seen in Anderson, 2005). This and the former examples illustrate well how the room for action is shrinking while the number of measures facing an industrial organization is growing. All these cases emphasize the need to push industry toward a sustainability focus and a search for AIM.
10. AIM: Who Needs It?
It is easy to see the impact of poor IM. Senior executives are very much aware of the HSE effects that result from major safety and environmental incidents, and of the consequential damage to corporate reputation and value. Past efforts at improving occupational HSE have measurably reduced injuries and losses at the personal safety level. On the other hand, the Baker Report after the Texas City accident states: “Leadership not setting the process safety ‘tone at the top,’ nor providing effective leadership or cascading expectations or core values to make effective process safety happen” (Baker, 2007). The industry is now turning its attention to reducing major incident risk through a properly implemented AIM system that enhances process safety. The oil majors have each developed powerful operational excellence and AIM philosophies that, if well implemented, will reduce the risk significantly. Here, the role is to implement and deliver AI throughout the lifecycle of the asset and measure the benefits gained.
10.1. The Need
Industry’s historical perception of AIM is not appropriate for today’s needs. Driven by the global hunger for energy, the industry is moving into ever more extreme environments, utilizing new technology, and extending the life of aging plant. Moreover, the effects of integrity failures and the publicity surrounding catastrophic events have engaged shareholders, senior managers, regulators, and the public in the debate over managing the integrity of assets. The primary reason for improving the way we manage AI is therefore risk assessment and risk reduction in all asset-related activities, from design to operations and maintenance. An AIM process with a sustainability focus is a proven life-cycle management system that delivers measurable risk reduction and cost benefits. Its fundamental principle is “to ensure that critical elements remain fit-for-purpose throughout the lifecycle of the asset, and at an optimum cost.”
The AIM process is a simple, logical, and holistic approach that brings an innovative methodology to both new-build and existing assets. The AIM process defines and ranks the components that are safety/environment/production/business critical using a multiple-criteria decision-making (MCDM) approach. The asset performance is compared with the applicable standard for each critical component, along with the requirements needed to assure and verify performance throughout the lifecycle of the asset. As previously mentioned, performance assurance and verification focus effort on managing the parts of the asset that matter, leading to informed decisions, a reduced risk/cost profile, and documented evidence of good asset management. This process helps AI by optimizing operability, maintainability, inspectability, and constructability as a “built-in” feature of the design. Similarly, it provides support for, for instance, risk-based inspection and reliability-centered maintenance techniques, by maximizing the effectiveness of maintenance and inspection on critical equipment during the operational life of the asset. The AIM process facilitates compliance with corporate and regulatory standards by demonstrating that critical items are identified and their performance is being managed in a documented manner.
11. Industrial Challenge
On these grounds, sustainability in industrial assets is an emerging requirement. The question is how core AIM processes should be verified to ensure the quality and compliance of performance in relation to sustainability concerns, and how business goals should be prioritized within sustainability requirements. Figure 14 illustrates the problem environment in current AIM jargon.
Figure 14. Verification of AIM performance with TSP toward sustainability. (Ratnayake and Liyanage, 2008.)
The above problem can be resolved by evaluating organizational processes simultaneously with the information from DI, TI, and OI, using a decision support system (DSS) with the involvement of stakeholder representatives. This chapter devises an “operational knowledge tool” that can help determine how AIM objectives can be prioritized in relation to sustainability concerns. The purpose of this chapter is to present and explore this knowledge tool to assist researchers and technology policymakers in structuring and making decisions in the light of sustainable performance goals.
12. Methodology for Addressing the Challenge
The data needed were obtained from a confidential report of a joint industry project focusing on AIM solutions for a major industrial facility. The joint industry project started in 2006 to ensure the AI of gas processing installations, concentrating on establishing, maintaining, and continually improving AIM. The study addressed a comprehensive verification process for how a company’s AIM process should be governed. A set of study data was drawn principally from this project, and also from thorough exploratory studies into different incidents that have occurred in the past. The required experimental data were gathered through interviews, discussions, and informal conversations with experts, along with various forms of AIM decision-related business documentation. To synthesize the whole set of data for a comprehensive verification process, and thereby to assess what priorities the company has given to different aspects of AIM with a focus on TBL sustainability, a model is proposed using concepts extracted from the analytic hierarchy process (AHP), developed by Thomas Saaty of the Wharton School of Business (Saaty, 2005). Ernest Forman recommends AHP as a useful method for synthesizing data, experience, insight, and intuition in a logical and thorough way (Forman and Selly, 2001). AHP provides an excellent backbone for gathering the expert knowledge circulating within the organization under a single umbrella, thereby providing greater democracy and leading to a reliable decision structure that offers a way out of most lagging AIM issues.
13. Framework Developed
Synthesizing the expertise coming from different layers, based on the goal objective, can be done with the help of AHP along the whole cross-section of an organization. With this analysis, it is expected to measure the present organizational awareness, or alignment, with respect to the desired output. Figure 15 illustrates how the core AIM processes (DI, TI, and OI) can be further decomposed into strategic, tactical, and ultimately operative targets, and the way that AHP can be incorporated. However, it is not enough to have a written system; senior executives must be confident that their management teams are implementing the system effectively.
Figure 15. AIM with AHP: the core AIM processes (design, technical, and operational integrity) map to a strategic AIM target (sustainability in industrial asset operations), a tactical target (maintaining the agreed AI level), and an operative target (maintaining TBL requirements), each corresponding to the goal, criteria, and alternatives levels of an AHP analysis. (From Ratnayake and Liyanage, 2008.)
Figure 16. The cycle for measuring asset performance with AHP for assessment and prioritizing the decisions: data collection; measuring asset performance with AHP; gap analysis; a prioritized list of improvements for managing the asset; and validation through preliminary review by relevant authorities within the business units.
The method suggested here combines observations made at the asset/facility, input from staff at different levels, and equipment documentation. Basically, the analysis relies on expert knowledge received from personnel involved at different levels. The expert knowledge is derived from data, experiences, intuitions, and intentions, and the process of synthesizing these in a logical and thorough way is done by AHP analysis (Saaty, 2005). Figure 16 shows the basic cycle for measuring asset performance for managing an industrial asset. In terms of existing assets, the AIM system includes an authoritative review process that assesses and reports on the current condition of critical elements and compares the current AI system with corporate and regulatory requirements as well as industry best practice. This can be achieved by the AHP process, recognizing the priorities along the whole cross-section of an organization while splitting the whole picture into the three subsets within AIM explained before: DI, TI, and OI. This generates a gap analysis, an initial work-scope, and a basis for budgeting future work.
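As a concrete illustration of the gap-analysis step, the sketch below compares two hypothetical AHP weight vectors over the TBL criteria: the priorities an organization has given in practice versus those it judges should be given. The criterion names and all numbers are illustrative assumptions, not data from the study.

```python
# Hypothetical AHP weight vectors over the TBL criteria: the priorities the
# organization *has given* in practice vs. those judged *to be given*.
current = {"economic": 0.70, "environmental": 0.20, "societal": 0.10}
desired = {"economic": 0.45, "environmental": 0.30, "societal": 0.25}

# Gap analysis: rank criteria by how far current practice lags the target;
# positive gaps mark under-weighted concerns that head the work-scope.
gaps = {c: desired[c] - current[c] for c in desired}
for criterion, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    status = "under-weighted" if gap > 0 else "over-weighted"
    print(f"{criterion:13s} gap = {gap:+.2f} ({status})")
```

The sorted gap list is exactly the “prioritized list of improvements” in the cycle of Fig. 16.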
Alternatively, the AIM system also facilitates the necessary communication among operations, maintenance, production, and engineering teams over the hierarchical structure explained in the AHP method. The information that is exchanged at regular meetings/interviews ensures that the integrity effects resulting from operational changes to the asset are properly assessed and fed back to management. This follows Ohno’s philosophy of “making a factory operate for the company just like the human body operates for an individual,” where the autonomous nervous system responds even when one is asleep (Ohno, 1988). With the AHP process, data, experiences, insights, and intuitions are synthesized in a thorough and logical way, leading to a feedback system with an improvement cycle. This continually updates AI status and trending, leading to improved operational excellence. The level of expensive unplanned maintenance is a good indicator of TI effectiveness. For instance, with the above process, one can expect measurable reductions in unplanned maintenance, better maintenance strategy selection (e.g., corrective, preventive, opportunistic, condition-based, predictive, etc.), and so on.
14. Measuring Desired OI
Figure 17 demonstrates how to operationalize the AIM framework shown in Fig. 15. This is based on a case study done with a gas operator company to assess desired OI, and hence sustainability concerns, through prioritization: “have given” versus “to be given.”
Figure 17. Illustrative example of a hierarchical structure for obtaining weights for OI (Ratnayake and Liyanage, 2009): Level 1, the main goal (desired OI); Level 2, criteria (e.g., product and process deviations and management of change; product quality and volume control; simultaneous operations and project execution); Level 3, alternatives (e.g., procedures and routines; communication and reporting; change requirement specifications; calibration of instruments; systems for quality control; composition and volume control of CO2 content; safe job analysis (SJA); work permit system; total work load; overall and local risk level; preparations for manual operations); Level 4, contrasting scenarios (financial, environmental, societal); and Level 5, the composite scenario.
Table 4. The Pairwise Combination Scale.

Intensity    Definition                        Explanation
1            Equal importance                  Two activities contribute equally to the objective
3            Moderate importance               Slightly favors one over another
5            Essential or strong importance    Strongly favors one over another
7            Demonstrated importance           Dominance demonstrated in practice
9            Extreme importance                Evidence favoring one over another is of the highest possible order of affirmation
2, 4, 6, 8   Intermediate values               Used when compromise is needed
The scale of relative importance for pairwise comparison as developed by Saaty is shown in Table 4 (Saaty, 2005). The judgment of the decision maker is then used to assign values from the pairwise combination scale (see Table 4) to each main criterion for a "level II" analysis. A pairwise comparison matrix (as shown in Fig. 18 below), along with the AHP method or a software program, is utilized for the analysis. The most important task of the asset manager is to build a correct hierarchical diagram suited to the particular application. The second most important aspect is to carry out the pairwise comparisons with the right expert personnel. The whole analysis process is explained in Fig. 19; once the pairwise matrix is built for each layer, the rest of the analysis can be done with a software program. William (2007) provides a comprehensive overview of how integrated AHP and its applications have evolved. Chatzimouratidis and Pilavachi (2007), Wedding and Brown (2007), and Sirikrai and Tang (2006) are other examples of how different aspects within the AIM context can be addressed with AHP. Through the AHP analysis, the lagging areas are surfaced for management purposes.
[Figure 18 shows an illustrative pairwise comparison matrix for the "system for quality control" element over the criteria financial, environmental, and societal: the diagonal entries are 1, an above-diagonal entry is X (X: 1–9), and the corresponding below-diagonal entry is its reciprocal 1/X.]

Step 1. Development of the pairwise comparison matrix.
Step 2. Assigning a score based on "how much more strongly does this element (or activity) possess or contribute to, dominate, influence, satisfy, or benefit the property than does the element with which it is being compared?" (Saaty, 2005).
Step 3. A reciprocal relationship exists for all comparisons.
Step 4. When comparing a factor to itself, the relationship will always be one.

Figure 18. Illustrative example of comparison matrix.
[Figure 19 depicts the flow from the problem (an AIM-related issue) through nine phases: (1) list alternatives; (2) define threshold levels; (3) determine acceptable alternatives; (4) define criteria; (5) start with the decision hierarchy; (6) compare alternatives pairwise, yielding relative priorities of alternatives; (7) compare criteria pairwise, yielding the importance of the criteria; (8) calculate overall priorities of the alternatives; and (9) perform sensitivity analysis, resulting in advice on AI compliance for TBL sustainability considerations.]

Figure 19. Nine phases of the AHP approach.
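To make phases 5–7 and 9 concrete, the following is a minimal sketch, not part of the original chapter, of how the criteria weights and Saaty's consistency check could be computed for a three-criteria matrix such as the one in Fig. 18; the matrix entries are hypothetical illustrations of Table 4 judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over the criteria
# (financial, environmental, societal) using the 1-9 scale of Table 4.
# Below-diagonal entries are reciprocals (Step 3 in Fig. 18).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal right eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency: CI = (lambda_max - n)/(n - 1), CR = CI/RI with Saaty's
# random index RI = 0.58 for n = 3; CR < 0.10 is conventionally acceptable.
n = A.shape[0]
lambda_max = eigvals.real[k]
CR = ((lambda_max - n) / (n - 1)) / 0.58

for name, weight in zip(["financial", "environmental", "societal"], w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {CR:.3f}")
```

In practice a dedicated AHP software package would be used, as the text notes, but the eigenvector computation above is the core of what such a program does.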
15. Handling TI-Related Issues for AIM: Selecting an Optimum Maintenance Schedule

The same approach can be used for TI and DI management-related issues. For instance, optimum maintenance strategy selection is one of the major issues related to TI, as manufacturing firms face great pressure to reduce their production costs continuously while addressing sustainability issues. One of the main expenditure items for these firms is maintenance cost, which can reach 15%–70% of production costs, varying according to the type of industry (Bevilacqua and Braglia, 2000). The amount of money spent on maintenance in a selected group of companies was estimated at about 600 billion dollars in 1989 (Wireman, 1990, cited by Chan et al., 2005). On the other hand, maintenance plays an important role in keeping up availability and reliability levels, product quality, and safety requirements. As indicated by Mobley (2002), one-third of all maintenance costs are wasted as a result of unnecessary or improper maintenance activities. Moreover, the role of maintenance is changing from a "necessary evil" to a "profit contributor" and toward a "partner" of companies in achieving world-class competitiveness (Waeyenbergh and Pintelon, 2002).
Figure 20. Hierarchy structure of the fuzzy analytic hierarchy process.
Managers are often not satisfied with the effect of maintenance activities that depend mainly on corrective maintenance and time-based preventive maintenance, and they want to improve their maintenance programs without too large an increase in investment (Schneider et al., 2005). It is therefore preferable for them to choose the best mix of maintenance strategies, rather than apply the most advanced maintenance strategy to all production facilities, to improve the return on investment while meeting health, safety, and environmental concerns. In this sense, AHP with the proposed prioritization method is suitable for the selection of maintenance strategies. This is done by interviewing the maintenance staff and managers. The AHP hierarchy shown in Fig. 20 was constructed for a redevelopment project related to oil and gas operations, to evaluate the optimum maintenance schedule of an oil rig located in the North Sea. The selection of the optimum maintenance strategy for the industrial asset was then done following the AHP process proposed in Fig. 15.
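To illustrate the final synthesis step of such a selection, here is a minimal sketch, not from the original chapter: the criteria weights and the local priorities of each strategy are hypothetical numbers (the strategy options simply echo those named earlier in the chapter), each of which would in practice come from its own pairwise comparison matrix.

```python
import numpy as np

# Criteria weights from the level-2 comparison (hypothetical values).
criteria = ["financial", "environmental", "societal"]
w_criteria = np.array([0.65, 0.23, 0.12])

# Local priorities of each alternative under each criterion; each column
# sums to 1 and would come from its own comparison matrix (hypothetical).
strategies = ["corrective", "preventive", "condition-based", "predictive"]
local = np.array([
    # financial, environmental, societal
    [0.10, 0.08, 0.12],   # corrective
    [0.25, 0.22, 0.28],   # preventive
    [0.35, 0.40, 0.30],   # condition-based
    [0.30, 0.30, 0.30],   # predictive
])

# Overall priority of each strategy = weighted sum across the criteria
# (phase 8 of Fig. 19).
overall = local @ w_criteria
for s, p in sorted(zip(strategies, overall), key=lambda t: -t[1]):
    print(f"{s}: {p:.3f}")
```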
16. Handling DI-Related Issues for AIM: Selecting an Optimum Design Project Mix

With fast-track projects now the norm, project teams need to make sure that their designs are fully compliant with all applicable regulatory and class requirements; not doing so is a guarantee of later problems. Changes made during the design phase translate to cost savings compared with modifications required after steel is cut. Industrial personnel know how to design and operate a great deal of equipment with fast-evolving technology.

[Figure 21 shows an AHP hierarchy with the goal "select optimum design project"; the criteria financial, environmental, and societal; the subcriteria strategic importance, risk, return on investment, and image; and the alternatives Project 1, Project 2, ..., Project n.]

Figure 21. Selecting optimum project mix to implement for DI management.
But the gas operator company where the authors are working at present has around 300 projects to implement before 2011, imposed by stakeholders such as the government and joint-venture companies, which necessitates finding the best mix of projects. Figure 21 illustrates the structure developed for the gas operator company's AIM solution, focusing on DI management. The selection of the optimum design project mix based on industrial asset performance requirements was then done using the AHP process proposed in Fig. 15. The following are some implications of the previously mentioned approach. As indicated by the team of senior managers attached to the gas operator company responsible for OI: ". . . we do not have any mechanism for synthesizing ideas along a cross section of the organization and we do make decisions on ad hoc basis . . . believe this process would enhance our decision-making process particularly focusing on AI issues . . ." And as per a senior manager responsible for DI issues and services for managing risks: ". . . many O&G service providing companies do not have a method to evaluate how extent they are aligned with main objectives of the company . . . in turn sustainability concerns . . . to find out gaps in between how extent the company aligned with desired AI and current AI awareness with respect to sustainability concerns. . . ." (Ratnayake and Liyanage, 2009).

17. Conclusion

This chapter has developed a methodology for the assessment of AIM for sustainability and a quantified evaluation method for industry in general. The information and approach illustrate how AIM-based decision making can be used to improve and optimize AI using AHP. AHP primarily assists the analysis and decision-making processes. It helps organize data, intuitions, intentions, and experiences in terms of goal, criteria, and subcriteria (alternatives) formulation. The proposed AHP decision model also provides an effective means to help determine the effectiveness of AIM adoption over the whole cross-section of an organization.
It is important for today's operator companies, plant operators, owners, and professionals to realize that different methods for managing asset integrity are needed to make the improvements that corporations require while still complying with regulatory requirements. Many industries are exploring opportunities to integrate the concept of sustainable development into their business operations, to achieve economic growth with the assurance of environmental protection in the midst of improved health and safety. The proposed approach focuses on achieving a better quality of life for present and future generations. Sustaining ability is not an end state but a continuous process. Therefore, future research should study how frequently these assessments should be carried out, particularly for OI and TI, as today's business world keeps changing due to public demands, technology, and global competition.
References

Aberdeen Group (2007). Printed Circuit Board Design Integrity: The Key to Successful PCB Development. Retrieved February 2, 2008 from http://www.plmv5.com/aberdeenPLMWP/.
ARGO (2004). Environmental and maintenance challenges in flare ignition and combustion. Onshore and Offshore Business Briefing: Exploration & Production: The Oil and Gas (O&G) Review. Retrieved February 18, 2008 from http://www.touchbriefings.com/pdf/951/argo 2 tech.pdf.
Alawai, SM, AK Azad and FM Al-Marri (2006). Synergy Between Integrity and HSE. Abu Dhabi Marine Operating Co., Society of Petroleum Engineers, SPE-98898.
Alguindigue, IE and NL Quist (1997). Asset management: Integrating information from intelligent devices to enhance plant performance strategies. In Textile Industry Division Symposium, Vol. 2, June 24–25, 1997.
Anderson, DR (2005). Corporate Survival: The Critical Importance of Sustainability Risk Management. USA: iUniverse, Inc.
Baker III, JA (2007). The report of the BP U.S. refineries independent safety review panel. Retrieved February 18, 2008 from http://www.bp.com/liveassets/bp internet/globalbp/globalbp uk english/SP/STAGING/local assets/assets/pdfs/Baker panel report.pdf.
Behn, B (2005). On the philosophical and practical: Resistance to measurement. Public Management Report, 3(3), November 2005.
Bevilacqua, M and M Braglia (2000). The analytic hierarchy process applied to maintenance strategy selection. Reliability Engineering and System Safety, 70, 71–83.
Boateng, BV, J Azar, E De Jong and GA Yander (1993). Asset recycle management: A total approach to product design for the environment. IEEE, 0-7803-0829-8/93.
Bower, AJ, GO Scott, Hensman and PA Jones (2001). Protection asset maintenance: What business value? Developments in Power System Protection, IEE 2001, No. 479.
BP (2004). Integrity Management Optimization Program, 2004. Retrieved February 2, 2008 from http://www.oilandgas.org.uk/issues/health/docs/bp.pdf.
Carson, R (1962). Silent Spring. New York: Houghton Mifflin.
Chan, FTS, HCW Lau, RWL Ip, HK Chan and S Kong (2005). Implementation of total productive maintenance: A case study. International Journal of Production Economics, 95, 71–94.
Chatzimouratidis, AI and PA Pilavachi (2007). Objective and subjective evaluation of power plants and their non-radioactive emissions using the analytic hierarchy process. Energy Policy, 35, 4027–4038.
Ciaraldi, SW (2005). The essence of effective integrity management: People, process and plant. Society of Petroleum Engineers, SPE-95281.
Elkington, J (1997). Cannibals with Forks. Oxford: Capstone Publishing.
FERC (2005). Federal Energy Regulatory Commission (FERC). Retrieved February 2, 2008 from http://www.ferc.gov/news/news-releases/2005/2005-2/06-30-05-ai05-1000.pdf.
Forman, EH and MA Selly (2001). Decision by Objectives: How to Convince Others That You Are Right. Singapore: World Scientific Publishing Co.
Hammond, M and P Jones (2000). Effective AM: The road to effective asset management is paved with a look at how we got here and how we might move on. Maintenance & Asset Management, 15(4), 3–8.
Hart, SL and MB Milstein (2003). Creating sustainable value. Academy of Management Executive, 56–69.
Holliday, C (2001). Sustainable growth, the DuPont way. Harvard Business Review, 79(8), 129–132.
Jawahir, IS and PC Wanigaratne (2004). New challenges in developing science-based sustainability principles for next generation product design and manufacture. Proceedings of the 8th International Research/Expert Conference on Trends in the Development of Machinery and Associated Technology, 1–10.
Labuschagne, C and AC Brent (2005). Sustainable project life cycle management: The need to integrate life cycles in the manufacturing sector. International Journal of Project Management, 23(2), 159–168.
Liyanage, JP and U Kumar (2003). Towards a value based view on operations and maintenance performance management. Journal of Quality in Maintenance Engineering, 9(4), 333–350.
Liyanage, JP (2003). Operations and Maintenance Performance in Oil and Gas Production Assets: Theoretical Architecture and Capital Value Theory in Perspective. PhD Thesis, Norwegian University of Science and Technology (NTNU), Norway.
Liyanage, JP (2007). Operations and maintenance performance in production and manufacturing assets: The sustainability perspective. Journal of Manufacturing Technology Management, 18, 304–314.
Ma, N and L Wang (2006). An integrated study of global competitiveness at firm level: Based on the data of China. Proceedings from PICMET 2006: Technology Management for the Global Future.
Mebratu, D (1998). Sustainability and sustainable development: Historical and conceptual review. Environmental Impact Assessment Review, 18, 493–520.
Melchett, P (1995). Green for danger. New Scientist, 148, 50–51.
Mobley, RK (2002). An Introduction to Predictive Maintenance (2nd Edn.). New York: Elsevier Science.
Montgomery, RL and C Serratella (2002). Risk-based maintenance: A new vision for asset integrity management. PVP-Vol. 444, Selected Topics on Aging Management, Reliability, Safety, and License Renewal, ASME 2002, PVP2002-1386.
Notes (2007). Reflect, connect, expect. Eastman weekend 2006. http://esm.rochester.edu/pdf/notes/NotesJan2007.pdf, p. 13.
Ohno, T (1988). Toyota Production System: Beyond Large-Scale Production, p. xi. Tokyo: Diamond, Inc.
PAS 55-1 (2004). Asset Management Part 1: Specification for the optimized management of physical infrastructure assets. BSI, 30 April 2004.
PAS 55-2 (2004). Asset Management Part 2: Guidelines for the application of PAS 55-1. BSI, 30 April 2004.
Pirie, GAE and E Ostby (2007). A Global Overview of Offshore Oil & Gas Asset Integrity Issues. Retrieved February 2, 2008 from http://www.mms.gov/international/IRF/PDFIRF/Day1-8—-PIRIE.pdf.
PUBS (2005). When the government is the landlord. Regional details: Norway. Retrieved February 2, 2008 from http://pubs.pembina.org/reports/Regional%20Details Norway.pdf.
Ratnayake, RMC and JP Liyanage (2007). Corporate dynamics vs. industrial asset performance: The sustainability challenge. In The International Forum on Engineering Asset Management and Condition Monitoring, combining the Second World Congress of Engineering Asset Management and the Fourth International Conference on Condition Monitoring, UK.
Ratnayake, RMC and JP Liyanage (2008). Analytic hierarchy process for multi-criteria performance evaluation: An application to sustainability in oil & gas assets. In 15th International Annual EurOMA Conference. Netherlands: University of Groningen.
Ratnayake, RMC and JP Liyanage (2009). Asset integrity management: Sustainability in action. International Journal of Sustainable Strategic Management, 1(2), 175–203.
Richardson, A (2007). Asset integrity. Retrieved February 2, 2008 from http://www.mms.gov/international/IRF/PDF-IRF/Day1-9—-RICHARDSON.pdf.
Rondeau, E (1999). Integrated asset management for the cost effective delivery of service. Proceedings of Futures in Property and Facility Management International Conference. London: University College London.
Saunders, C and TO Sullivan (2007). Integrity management and life extension of flexible pipe. SPE 108982.
Saaty, TL (2005). Theory and Applications of the Analytic Network Process: Decision Making with Benefits, Opportunities, Costs, and Risks. RWS Publications.
Schneider, J, A Gaul, C Neumann, J Hografer, W Wellbow, M Schwan and A Schnettler (2005). Asset management techniques. In 15th PSCC, Liège, 22–26 August 2005, Session 41, Paper 1.
Sheble, GB (2005). Asset management integrating risk management: Heads I win, tails I win. In IEEE Power Engineering Society General Meeting, 2005.
Shell (2008). Assessing technical integrity and sustainability. Retrieved September 19, 2008 from http://www.shell.com/static//globalsolutionsen/downloads/industries/gas and lng/brochures/fair brochure.pdf.
Sirikrai, SB and CSJ Tang (2006). Industrial performance analysis: A multi-criteria model method. In PICMET 2006 Proceedings, 9–13 July.
Stiglitz, JE (1979). A neoclassical analysis of the economies of natural resources. In Scarcity and Growth Reconsidered, VK Smith (ed.), pp. 36–66. Washington, DC: Resources for the Future.
TEADPS (2009). Technical electives and design project selection (TEADPS). http://www.chemeng.mcmaster.ca/undergraduate/TechElectivesAndDesignProject.pdf [16 September 2008].
UKOOA (2000). Environmental Report 2000. United Kingdom Offshore Operators Association.
Waeyenbergh, G and L Pintelon (2002). A framework for maintenance concept development. International Journal of Production Economics, 77, 299–313.
Webster (2008). Merriam-Webster's online dictionary. http://www.merriam-webster.com/dictionary/integrity [4 September 2008].
Wedding, GC and DC Brown (2007). Measuring site-level success in brownfield redevelopments: A focus on sustainability and green building. Journal of Environmental Management, 85, 483–495.
Wenzler, I (2005). Development of an asset management strategy for a network utility company: Lessons from a dynamic business simulation approach. Simulation and Gaming, 36(1), 75–90.
William, H (2007). Integrated analytic hierarchy process and its applications: A literature review. European Journal of Operational Research, 186, 211–228.
Wireman, T (1990). World Class Maintenance Management. New York: Industrial Press.
World Bank (2008). What is sustainable development? Retrieved February 14, 2008 from http://www.worldbank.org/depweb/english/sd.html.
Woodhouse, J (2001). Evolution of asset management. Retrieved February 18, 2008 from http://www.plant-maintenance.com/articles/AMbasicintro.pdf.
Biographical Note

R. M. Chandima Ratnayake was born in Sri Lanka. He received his B.Sc. degree in Production Engineering and his M.Sc. in Manufacturing Engineering from the University of Peradeniya, Sri Lanka. He is presently a Doctoral Research Fellow attached to the Center for Industrial Asset Management (CIAM) and an Assistant Professor at the Institutt av Konstruksjon og Material Teknologi (IKM), University of Stavanger, Norway. His research interests include performance measurement and management of industrial assets, development and applications of decision analysis tools and techniques in operations management, and change management in oil and gas operations.
Chapter 15
How to Boost Innovation Culture and Innovators?

ANDREA BIKFALVI
Department of Business Administration and Product Design (OGEDP), University of Girona, Campus Montilivi, Edifici PI, Av. Lluís Santaló s/n, 17071 Girona, Spain
[email protected]

JARI JUSSILA∗, ANU SUOMINEN† and HANNU VANHARANTA‡
Industrial Management and Engineering, Tampere University of Technology at Pori, PL 300, 28101 Pori, Finland
∗[email protected], †[email protected], ‡[email protected]

JUSSI KANTOLA
Department of Knowledge Service Engineering, KAIST, 335 Gwahangno (373-1 Guseong-dong), Yuseong-gu, Daejeon 305–701, Korea
This chapter examines the abstract concepts of innovators' competences and innovation culture. For people to be innovative, both concepts need to be considered. Ontologies provide a way to specify these abstract concepts in a format that allows practical applications in organizations. Self-evaluation of innovation competence and innovation culture in organizations can be conducted using a fuzzy logic application platform called Evolute. The approach described in this chapter has management implications: the abstract concepts of innovation culture and innovation competence become manageable, which suggests that organizations should be able to get better innovation results.

Keywords: Innovation; innovators; culture; ontology.
1. Introduction

For people to be innovative, a special mindset and environment seem to be required; additionally, according to Ulijn and Brown (2004), not all innovative people are entrepreneurial.
Although some work exists on how to create an organization-wide culture of innovation and intra- and entrepreneurship, culture, and especially the link between culture and innovation, has generally not been studied. A variety of reasons might explain this: the broad concept of what we understand as culture; the multitude of links to other sciences such as sociology, anthropology, and psychology; or the depth of the concept when referring to national culture, corporate culture, or professional culture. From the organizational management point of view there is another difficulty. Management theories are scattered over a wide area of different management disciplines, and therefore it is difficult to get a holistic view of these different and specific management areas and their detailed content, i.e., constructs, concepts, variables, and indicators. For management research, there is a new area that may help to attain this holistic view: object ontology research. Additionally, systems science is trying to help solve this dilemma with many different technologies that support holistic perception and understanding in management. This research, based on previous research on management object ontologies (Kantola, 2005), aims to build constructs for management purposes in the dynamic organizational environment of innovation management from two points of view: individual and organizational. The focus of this chapter is on specific interrelated factors of innovation competences and organizational innovation enablers and barriers. The main targets of this research form the two main objectives of the study. The first objective is to examine the nature of personal innovation competences through the concept of creative tension (Senge, 1994). The second objective is to investigate the essence of organizational innovation enablers through the concept of proactive vision (Aramo-Immonen et al., 2005; Kantola et al., 2005; Paajanen et al., 2004). Through the first objective, we expect to identify the competences needed by individuals in order to be innovative. This would provide a better understanding of the potential impact (if any) of personal innovation competences on the level of employees' innovation. Through the second objective, we expect to identify innovation enablers and barriers in terms of the whole organization as a more responsive innovation environment. This will provide a better understanding of the management of innovation within organizations. To make all this possible, we have created two conceptual models, i.e., ontologies, to help analyze the above. We have then created computer-supported management questionnaires for the self-evaluation of these two ontologies. Having made the questionnaires dynamic on the Internet, we have performed test runs with a group of test subjects and shown the first stages of the ontology-building process. In future research we will further develop the ontologies and expand the testing to different industrial organizations.

2. Conceptual Framework

2.1. National Systems of Innovation

The concept of a national system of innovation provides a good starting point for analyzing both innovation and culture.
[Figure 1 shows the main actors and linkages: the macroeconomic and regulatory context; global innovation networks; knowledge generation, diffusion and use; product and factor market conditions; firms' capabilities and networks; other research bodies; the science system; supporting institutions; communication infrastructures; clusters of industries; regional innovation systems; the education and training system; the national innovation system and national innovation capacity; and country performance (growth, jobs, competitiveness).]

Figure 1. Main actors and linkages in the innovation system (adapted from OECD).
The standard schematic for a country's innovation system depicted in Fig. 1 can be revisited. Although schematic and condensed, it provides a holistic picture of the actors and linkages established in a certain innovation system. Still, the core of the system remains illustrated in the center of Fig. 1. Firms, the research and science base, and the supporting institutions represent Etzkowitz and Leydesdorff's (1998) triple helix model. The interaction of university–industry–government is at the basis of innovation, a process that becomes an endogenous process of "taking the role of the other," encouraging hybridization among the institutional spheres (Etzkowitz, 2003). More concretely, with universities facing their third mission, higher education can no longer avoid experimenting in entrepreneurial areas. Scientists should be able to complement their basic research activities with research having an immediate commercial value, contributing in this way to local and regional economic development and growth. Competition and a turbulent environment are other factors that nowadays make public research institutions resemble the business sector ever more closely. Furthermore, government acts as a public entrepreneur and venture capitalist in addition to its traditional role of creating an environment conducive to innovation. Recent public schemes go far beyond the increase of R&D investment, designing and promoting complementary "soft" tools for innovation promotion.
However, firms remain the backbone of the system. Their productive, employing, competitive, innovative, and growth capabilities put them in the "spotlight." The environment in which they operate, often characterized by high complexity and low predictability, makes them special actors. They are the ultimate innovators, operating in and facing markets and final customers, whose verdict labels their products and/or services with either success or failure. As firms raise their technological level, they move closer to an academic model, engaging in higher levels of training and in the sharing of knowledge. All this shows, up to some point, overlapping and common areas rather than isolation between these three pillars. Knowledge generation, diffusion, and use, later becoming innovation, are a common characteristic of all. During the last two decades, innovation and innovation management were among the top priorities on the research agenda of academics, practitioners, and policy makers. Different trends succeeded one another in its study, reflected in the richness of definitions collected, for example, by Cumming (1998) in his overview of innovation and future challenges. Focus topics passed from technical, industrial, and commercial aspects, through creativity and culture, large-firm versus small-firm innovation, sources, patterns, standardization, measurement and monitoring, to human aspects and organizational concepts, among others. As studying all of innovation's areas of richness would be too ambitious, focusing on a few concepts seems a more appropriate option. Therefore, in the following sections we give special attention to organizational innovation in general, and to its enablers and barriers in particular.

2.2. Co-evolution of Systems

The nature of conscious experience at work is a puzzle for the modern knowledge society. Companies, enterprises, groups, and teams now place more and more emphasis on the unique value of individuality in a context of organizational excellence and teamwork. These entities also attempt to learn about the individual's knowledge in terms of his or her own professional competences, as well as the individual's aspirations and desires to change and improve those competences. Furthermore, many enterprises would like to guide and support employees' personal growth, development, and personal vision in order to improve their core competences according to the competitive pressures of the business world. Based on the employees' self-evaluation, the gap between personal vision and current reality forms an individual's creative tension (Senge, 1994). This creative tension is the energy which can move an individual from the place of current reality towards the reality of his or her own vision. This is one real achievement driver of the enterprise. When the objects are management processes, it is possible to analyze the current management processes from bottom-up perspectives in order to understand how they can be improved.
Similarly to creative tension, the other real achievement driver in organizations is proactive vision (Aramo-Immonen et al., 2005; Kantola et al., 2005; Paajanen et al., 2004). With proactive vision, employees get the opportunity to understand the current state of organizational variables, look to the future, and give their opinion, i.e., a proactive vision of how these processes should be changed. By combining these two important drivers, creative tension and proactive vision, it is possible to get more information about the organization in order to develop it towards higher performance levels. In the following pages this combination is explained and illustrated in more detail. In our previous research, strong emphasis has been put on a newly proposed theory and methodology, i.e., co-evolution (Vanharanta et al., 2005). In the theory formulation, the attempt has been to see the current (perceived) reality from different points of view. Additionally, it is emphasized that it is important to understand the time dimension and the change processes inside the systems and subsystems. By increasing the number of different views, it is possible to increase the information variety in human brains and in that way decrease the errors in perceiving the current reality. From the human point of view, it is therefore important to understand both our internal world and the external environment in which we live. Co-evolution applied to an internal view (introspection of one's own properties or characteristics) extends our ability to evaluate and develop different personal characteristics simultaneously. Co-evolution focused on the external world and different external processes provides a possibility to frame, categorize, conceptualize, understand, and perceive the current reality in a diversified way. From the organizational point of view, the co-evolutionary process viewpoint helps us to identify the need for change, both in people and in management processes.

2.2.1. Co-evolution in human performance

In order to illustrate an application of the co-evolutionary management paradigm in the human resource management area, Beer (1994) has provided the following relationships, by first defining the levels of human achievement:

• Actuality: what people manage to do now, with existing resources, under existing constraints.
• Capability: what people could achieve now, if they really worked at it, with existing resources and under existing constraints.
• Potentiality: what people might be doing by developing their resources and removing constraints, although still operating within the bounds of what is already known to be feasible.

Furthermore, the important indices are as follows (Beer, 1994):

• Latency: the ratio of potentiality to capability.
• Productivity: the ratio of capability to actuality.
• Performance: the ratio of potentiality to actuality, which is also the product of latency and productivity.
Figure 2. Measures of human performance: latency (potentiality ÷ capability), productivity (capability ÷ actuality), and performance as their product (cf. Beer, 1994; Jackson, 2003).
In the above framework, an application of the co-evolutionary paradigm would lead to the most desired outcomes. First, it can be observed that the above ratios indicate the importance of keeping the values high in order to increase overall human performance. If this potentiality (Fig. 2) is carried even further, the personal level of the future state a person is targeting should be found out, i.e., the individual's creative tension. On the current level (actuality), it is important to know what a person manages to do now, i.e., how he or she performs at present and what the constraints on that performance are. Capability is the best ability, or the best qualities, that the person could exhibit now. Human competence, in turn, is the ability to do something well and effectively in the immediate future (expanding on the potentiality), i.e., capability ready for active use. This also explains the importance of time in the overall equation (see Fig. 2).

2.2.2. Co-evolution in business performance

Another example of the application of the co-evolutionary management paradigm (Vanharanta et al., 2005) can be illustrated by using the concept of productivity in company performance calculations (see Fig. 3). The all-important operational attribute influencing overall company performance is productivity (cf. Kay, 1993; Kidd, 1985). Capital productivity indicates how much capital is invested in relation to added-value operations in the company, and market productivity indicates how much profit is yielded in relation to all added-value activities. Capital productivity is added-value divided by total capital (total assets), and market productivity is operating profit divided by added-value.
Figure 3. Measures of capital performance: capital productivity (added-value ÷ total assets), market productivity (operating profit ÷ added-value), and return on total assets as their product (cf. Kay, 1993; Kidd, 1985).
Added-value is the market price of products and services sold, less the market costs of purchased materials (or services) contained within them (cf. Kay, 1993; Kidd, 1985). The same kind of performance pattern can be seen in Fig. 3 as in the human performance illustration in Fig. 2. Similarly, in this example it is important to keep the ratios high to assure good results, i.e., overall profitability performance. However, before the ratios can be changed, the company's present position (situation) has to be found out. By understanding the present position of the company, it is then possible to provide new instructions on the means to increase the overall return on total assets through important constructs, concepts, and variables. In real life, the notions illustrated in these two figures (Figs. 2 and 3) have to be understood and utilized simultaneously and concurrently, so that capital profitability as well as human performance, at present and in the immediate future, can be comprehended. The equations are similar in form, giving the same kind of asymptotic curves. By combining the information in these equations, a new space can be determined in which both concepts can be handled simultaneously, making visible the relationships that are important for changing the ratios. It is the co-evolutionary way of thinking applied to the two equations (which cannot be directly combined) that leads to the overall performance of financial and intellectual capital, i.e., the market value (cf. Edvinsson and Malone, 1997). From the financial point of view, it has to be considered how financial assets are harnessed to create added value and whether the customers are willing to pay for that created value. From the human point of view, on the other hand, it has to be considered which human characteristics (properties) best relate to human performance. Within the concept of actuality, the current state can be managed with existing resources. By developing these resources and by removing relevant constraints, the potentiality can be increased. This raises the question: "What might then be the best possible way to develop those resources?"
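For reference, the two parallel sets of ratios discussed above can be written out explicitly (with P = potentiality, C = capability, A = actuality); this merely restates the relations read off Figs. 2 and 3:

\[
\text{Latency} = \frac{P}{C}, \qquad
\text{Productivity} = \frac{C}{A}, \qquad
\text{Performance} = \frac{P}{C}\cdot\frac{C}{A} = \frac{P}{A},
\]
\[
\text{Return on total assets}
  = \underbrace{\frac{\text{added-value}}{\text{total assets}}}_{\text{capital productivity}}
  \times
  \underbrace{\frac{\text{operating profit}}{\text{added-value}}}_{\text{market productivity}}
  = \frac{\text{operating profit}}{\text{total assets}}.
\]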
According to Senge (2004), a learning organization has several issues to consider, i.e., systems thinking, team learning, shared vision, mental models, and personal mastery. In the context of our co-evolutionary paradigm, all of these concepts are important, and each of them should be developed simultaneously with the others, in the co-evolutionary way. In the personal mastery concept, the driving force behind co-evolution is creative tension. In the business processes, on the other hand, the people who face the real world each day in their work are the ones who understand the current and future state of the business process, i.e., the proactive vision, best. By gathering all their individual opinions, the collective view of the organization regarding the proactive vision can be attained, enabling better performance through people's understanding and motives.

2.3. Organizational Innovation Enablers and Barriers

Many authors have identified enablers and barriers of organizational innovation (e.g., Amabile et al., 1996; Amabile, 1997, 1998; Ekvall, 1996; Martins and Terblanche, 2003; Trott, 2005). Suominen et al. (2008a) identified 22 such enablers and/or barriers in the literature and illustrated them with the vivid metaphor of a hydro power plant (Fig. 4).
[Figure 4 groups the variables into the parts of the metaphor: freedom and direction of flow (freedom; openness and trust; communication; requisite variety; understanding strategy; organizational flexibility; stress management; changeability; challenge; empowerment; constructive feedback; risk tolerance; organization supports development; organization supports learning); transformation of the flow (idea generation; idea documentation; idea screening and evaluation); and maintaining of the flow (teamwork and collaboration; seeking information; absorptive capacity; networking; situational constraints).]

Figure 4. Innovation culture ontology.
The four-part metaphor illustrates, first, the organizational climate variables; second, the organizational management and structure variables; third, the innovation process variables; and fourth, the supportive organizational variables. Together, these four parts construct a metaphor called innovation culture. However, most of these enablers and barriers are more or less visible or concrete parts of an organization, whereas culture is often seen more as an invisible, yet perceptible, part of the organization by its members and even by a third party (McLean, 2005; Schein, 2004).

2.4. Innovator's Competences

Innovator's competences are like items on a menu: we can identify those that represent our strengths, those that represent our weaknesses, and those we want to focus on (cf. Miller, 1987). The innovator's competence ontology describes the competences that the literature (Jussila et al., 2008) emphasizes as important characteristics of creative and innovative people. The major components of individual creativity necessary in any domain are expertise, creative-thinking skills, and intrinsic task motivation (Amabile, 1997). However, rarely is an individual able to rely solely on his or her own motivation and technical skills to get the job done; most of us work in environments in which we must constantly deal with other people (Merrill and Reid, 1999). The same is true for innovations: innovations are hardly ever the result of only one individual. Therefore, more than creativity is needed to make innovations happen. The major components supporting creativity in the ontology are self-awareness, self-regulation, empathy, and relationship management (Goleman, 1998). The innovator's competence ontology consists of two parts (personal competences and social competences) and seven major components (self-awareness, self-regulation, motivation, expertise, creative thinking, empathy, and relationship management) divided into a total of 27 competences (Fig. 5). The clustering of the innovator's competence ontology is theoretical and based on earlier theoretical models (Amabile, 1997; Goleman, 1998). More recently, Goleman (2006) has published a model of social intelligence that parallels emotional intelligence. However, as he has pointed out: "The model of social intelligence . . . is merely suggestive, not definitive, of what that expanded concept might look like . . . More robust and valid models of social intelligence will emerge gradually from cumulative research" (Goleman, 2006, p. 330).

3. Methodology

3.1. Self-Evaluation of Humans and Systems

In self-evaluation, a person evaluates himself or herself, or a system that the evaluator is part of.
[Figure 5 divides the innovator's competences into personal competences (self-awareness: accurate self-assessment, self-confidence; self-regulation: flexibility, independence, responsibility, self-control, stress tolerance, trustworthiness; motivation; expertise: absorptive capacity, professional and technical expertise; creative thinking: analytical thinking, conceptual thinking, divergent thinking, intuitive thinking) and social competences (empathy: leveraging diversity, understanding others; relationship management: communication, conflict management, relationship building, teamwork and cooperation).]

Figure 5. Innovator's competence ontology.
The results from self-evaluation can be used for different purposes, such as motivation, identification of development needs, evaluation of potential, evaluation of performance, and career development (cf. Nurminen, 2003). Self-evaluation is an efficient method of developing oneself, managing personal growth, clarifying roles, and committing to project-related goals (e.g., Nurminen, 2003). On the other hand, self-evaluation has limitations too. The results of a self-evaluation are less reliable in the evaluation of work performance (Stone, 1998). People have a tendency to evaluate their own performance as better than others do (Dessler, 2001). People are also limited in their ability to observe themselves and others accurately (Beardwell and Holden, 1995). Still, there is no question that people are able to evaluate themselves if they are motivated to do so. We have observed that the presentation of self-evaluation projects to the target group is very important. The effectiveness of self-evaluation also depends on the content of the evaluation, the application method, and the culture of the organization (Torrington and Hall, 1991). The results of self-evaluations conducted by an individual vary to some extent. In the short term, the results change because individuals' powers of observation, intentions, and motives change (Cronbach, 1990). In the long term, the results also change because of mental growth, learning, and changes in personality and health. Self-evaluation is more effective in evaluating the relation between different items, such as competences, than in comparing an individual's performance to that of others (cf. Torrington and Hall, 1991).
In our approach, competences and systems are evaluated indirectly through statements related to individuals' everyday work; therefore, individuals are not evaluating their performance as such. In this context, we mean self-evaluation of the innovation culture (the system) and of the innovator (the human in the system). When we want to include the concept of creative tension (Senge, 1994) in an evaluation, we must use self-evaluation, because no one can tell the future intentions and aspirations of another person. Note that the data generated through self-evaluation have a particular nature. For instance, every single individual has his or her own personal scale of degree. Therefore, traditional parametric statistical methods are not applicable to such data. As the analysis method we have thus used the Friedman test, which is suitable for the non-parametric data produced by self-evaluation. The Friedman test is a scientifically valid non-parametric statistical method (Conover, 1999), named after its inventor, the Nobel laureate economist Milton Friedman. The Friedman test sums the ranked values of each respondent; consequently, the ranked values can be clustered into groups (Suominen et al., 2008b).

3.2. The Evolute Application Environment

Evolute is the name of a generic web-based technology that supports fuzzy logic (Zadeh, 1965) applications on the Internet (Kantola, 2005). Evolute supports special-purpose fuzzy logic applications that can be developed and run globally. Each application is based on a specified ontology of the target domain (Kantola, 2005); therefore, each application on Evolute has a unique content and structure specified by the experts of the target domain. Applications can be added and fine-tuned without additional programming. Evolute supports co-evolutionary applications, which are intended to help in the simultaneous development of business enterprises or systems (Vanharanta et al., 2005) that include humans and organizations.

3.3. Self-Evaluation of Innovation Competence and Organizational Innovation Enablers and Barriers with the Evolute System

An individual's creativity and organizational innovation have been linked with each other in the literature before (e.g., Amabile, 1997, 1998; Amabile et al., 1996; Martins and Terblanche, 2003; McLean, 2005). Creativity is a characteristic of an individual, whereas innovation is a process, often within an organization (McLean, 2005). Therefore, creating a Management Object Ontology (MOO) (Kantola, 2005) regarding organizational innovation also requires the perception of the individual's capability, i.e., competence, to innovate to be included. In this chapter, the first three of the five phases of ontology development (Sure et al., 2003), namely feasibility study, kickoff, and refinement (Fig. 6), of two MOOs built with the co-evolutionary method are presented.
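Evolute itself is proprietary and its internals are not described here; as background to the fuzzy-logic evaluation mentioned in Sec. 3.2, the following generic sketch (my own illustration, with made-up triangular membership functions) shows how a 0–100 sliding-scale answer can be mapped to linguistic degrees.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree of membership of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic degrees for a 0-100 sliding-scale answer.
LOW, MODERATE, HIGH = (0, 0, 50), (25, 50, 75), (50, 100, 100)

answer = 62.0
memberships = {
    "low":      triangular(answer, *LOW),
    "moderate": triangular(answer, *MODERATE),
    "high":     triangular(answer, *HIGH),
}
print(memberships)  # e.g. {'low': 0.0, 'moderate': 0.52, 'high': 0.24}
```

A real application would aggregate such degrees over all statements belonging to a competence or enabler; the simple triangles above are only one of many possible membership shapes.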
[Figure 6 depicts the knowledge meta process: the phases feasibility study, kickoff, refinement, evaluation, and application & evolution, each with its decision (go/no go?; sufficient requirements?; meets requirements?; roll-out?; changes?) and outcome (Common KADS worksheets; ORSD and semiformal ontology description; target ontology; evaluated ontology; evolved ontology), iterating over fourteen steps: identifying problems and opportunities, the focus of the KM application, tools, and people; capturing the requirements specification in the ORSD; creating and refining the semiformal ontology description; formalizing it into the target ontology and creating a prototype; technology-, user-, and ontology-focused evaluation; applying the ontology; and managing evolution and maintenance.]

Figure 6. The knowledge meta process (adapted from Staab and Studer, 2003, p. 121).
In the first two, overlapping, phases, the feasibility study and kickoff for a Knowledge Management Application (KMA) for innovation were carried out. The preliminary study regarding organizational innovation, conducted mainly as a literature review, brought forward the facts that innovation as an object of study is both timely, even fashionable, and that some parts of innovation, especially those regarding the organization, are rather poorly researched (McLean, 2005). This is because the organizational study of innovation has many problems and difficulties: on the one hand, organizations differ in size, branch, personnel, and their focus on, e.g., innovation; on the other hand, many of the methods for studying, e.g., organizational culture are normally time consuming and require heavy, even subjective, analysis by the researchers. This also makes the results indefinite and constrained to a certain time slot, lacking the desired focus on the future. As the goal was to create a KMA for leadership purposes, the focus of the application was determined to be twofold: on the one hand, the organizational enablers of and barriers to innovation; on the other hand, individuals' innovation competences. Naturally, a wide range of other aspects of innovation could have been the focus of the study, but these topics were seen as the most usable from the management's point of view for leading the entire personnel of an organization: gathering information bottom-up and then using the collected collective data for determining the needed management procedures.
The sources for creating the semi-formal ontology description were scientific, mainly journal, literature. The literature covered individual competences and capabilities for innovation, innovation-inducing organizational culture and climate, and the innovation process, among others. The literature review resulted in 27 individual innovation competences (Jussila et al., 2008) and 22 organizational innovation enablers and barriers (Suominen et al., 2008a). In this first phase, the Evolute system was used as a platform for creating the two self-evaluation questionnaires, collecting data, and performing the computations needed to accumulate the data into a collective result. The questionnaires, the innovation competence questionnaire with 103 statements and the organizational innovation questionnaire with 94 statements, use a sliding scale: a bar that allows the respondent to answer on an individual range. People evaluate their individual capabilities subjectively and their environment, in this case their organization, objectively. Another Evolute characteristic is that each evaluation answer is given for both the current and the future state, thus capturing the creative tension (Senge, 1994) of individuals or, likewise, the proactive vision of organizations. In addition, the Evolute system uses fuzzy logic in its computations to simulate human reasoning, which by nature is fuzzy. The two questionnaires were formulated according to the identified competences and organizational enablers and barriers, with each competence, enabler, or barrier covered by 3 to 8 statements. As a result of the refinement phase, the first version of the two semi-formal ontologies created in parallel is represented in Table 1. Most of the individual variables have a counterpart in the organizational ontology. The next phase in the ontology development process would be the evaluation phase, including interviews about the test runs with the web-based questionnaires in order to gather comments from the test persons on the two ontologies. This evaluation round should then be followed by a new iteration round of refinement.

4. A Case Test Run

After the completion of the semi-formal ontology, a test run was done with a group at an educational and research unit of a university. First, the results of the individual innovation competence evaluations are illustrated in a bar chart (Fig. 7). Here n = 10 signifies the 10 staff-member respondents and α = 0.05 the significance level used. The sums have been divided into three groups: the most significant (black bars), the middle group (white bars), and the least significant (grey bars). The gap between the current and future states of an individual's innovation competences portrays the creative tension, whereas the gap between the current and future states of the organization's innovation enablers and barriers portrays the proactive vision. Both the creative tension and the proactive vision therefore point out the competences, or enablers and barriers, that need the most attention: those where the current state is relatively low compared with the desired future state.
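The gap computation itself is simple. The following minimal sketch, with hypothetical data and a hypothetical 0–100 scale (it is not the actual Evolute computation, which uses fuzzy logic), illustrates how creative tension per competence could be derived from the paired current/future answers.

```python
# Hypothetical answers: each competence is covered by a few statements,
# each answered for both the current and the desired future state.
answers = {
    "absorptive capacity": {"current": [55, 60, 48], "future": [85, 90, 80]},
    "risk orientation":    {"current": [70, 66, 72], "future": [74, 70, 75]},
}

def creative_tension(scores: dict) -> float:
    """Gap between desired future state and current state, averaged
    over the statements belonging to one competence."""
    cur = sum(scores["current"]) / len(scores["current"])
    fut = sum(scores["future"]) / len(scores["future"])
    return fut - cur

# Rank competences by tension: the largest gaps need the most attention.
for name, scores in sorted(answers.items(),
                           key=lambda kv: -creative_tension(kv[1])):
    print(f"{name}: creative tension = {creative_tension(scores):.1f}")
```

The same computation applied to the organizational questionnaire yields the proactive vision per enabler or barrier.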
Table 1. Individual Innovation Competences and Organizational Innovation Enablers and Barriers (Adapted from Suominen et al., 2008b).

Individual innovation competences with a connection to innovation enablers and barriers: (1) absorptive capacity; (2) accurate self-assessment; (3) achievement orientation; (4) change orientation; (5) communication; (6) flexibility; (7) independence; (8) initiative; (9) stress tolerance; (10) leveraging diversity; (11) professional and technical expertise; (12) relationship building; (13) risk orientation; (14) seeking information; (15) self-development; (16) teamwork and cooperation; (17) trustworthiness; (18) analytical thinking; (19) conceptual thinking; (20) divergent thinking; (21) imagination; (22) intuitive thinking.

Organizational innovation enablers and barriers with a connection to innovation competences: (1) absorptive capacity; (2) constructive feedback; (3) challenge; (4) changeable; (5) communication; (6) flexibility; (7) freedom; (8) empowerment; (9) stress management; (10) requisite variety; (11) organization support learning; (12) networking; (13) risk tolerance; (14) seeking information; (15) organization support development; (16) teamwork and collaboration; (17) trust and openness; (18) idea generation.

Individual competences with no connection to innovation enablers and barriers: (1) conflict management; (2) responsibility; (3) self-control; (4) self-confidence; (5) understanding others.

Organizational enablers and barriers with no connection to innovation competences: (1) idea documentation; (2) idea screening and evaluation; (3) understanding strategy; (4) situational constraints.
management can direct its attention to the matters requiring the most urgent development. With the creative tensions, we arrived at groups where the first 13 rankings were the most significant and the last 8 the least significant.
[Figure 7 residue: a bar chart of Friedman test rank sums over the 27 innovation competences, listed from absorptive capacity down to risk orientation.]

Figure 7. Results of individual innovation competence self-evaluations (creative tension: n = 10, α = 0.05).
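As an illustration of how such rankings can be produced, here is a minimal sketch of the Friedman-type rank analysis described in Sec. 3.1, using made-up answers and a subset of the competence names; scipy's friedmanchisquare provides the significance test.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical data: rows = 10 respondents, columns = 5 competences;
# each cell is a self-evaluation value on the respondent's own scale.
rng = np.random.default_rng(0)
data = rng.uniform(0, 100, size=(10, 5))
names = ["absorptive capacity", "intuitive thinking", "expertise",
         "self-confidence", "understanding others"]

# Rank each respondent's answers (1 = lowest) and sum the ranks per
# competence, as described in Sec. 3.1.
ranks = data.argsort(axis=1).argsort(axis=1) + 1
rank_sums = ranks.sum(axis=0)

# Friedman test statistic over the columns as repeated measures.
stat, p = friedmanchisquare(*data.T)
for name, rs in sorted(zip(names, rank_sums), key=lambda t: -t[1]):
    print(f"{name}: rank sum = {rs}")
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```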
With the proactive vision, the rankings were divided into three groups, where the first 5 were the most significant and the last 16 the least significant. Unlike the creative tension results, the middle group of the proactive vision remains undecided as to which group it belongs to, the most significant or the least significant. The results of the organization's innovation self-evaluations are presented next (Fig. 8). In order to give suggestions on development needs according to the results presented above, the results for the individual's innovation competences and for the organization's support of those competences have to be compared in parallel. This is where the co-evolution of the two ontologies becomes handy.
Figure 8. Results of organization's innovation self-evaluations (Friedman test; creative tension: n = 10, α = 0.05).
have a direct counterpart in the other ontology. Further analysis is therefore carried out by comparing the status of these pairs in the formed clusters. Four combinations of the pairs were discovered: high–high, low–high, high–low, and low–low. Their interpretations are explained below.

1. In the high–high combination, the creative tension is high for both the individual innovation competence and the organization's innovation enabler. Table 2 can be interpreted as listing the individual innovation competences and organizational enablers that need the most attention.

2. In the low–high combination, the creative tension of the individual innovation competence is low, indicating satisfaction with the current state of the competence, whereas the creative tension of the organization's innovation enabler is high, indicating a desire to develop this organizational support feature. Table 3 can be interpreted as showing that the organizational enabler "Teamwork and collaboration" needs attention when development measures are considered. However, people
Table 2. Results of test run with high–high combination.

People want to develop within themselves (value; ranking):
Absorptive capacity (21.7; 1), Accurate self-assessment (17.8; 8), Stress tolerance (16.05; 12), Professional and technical expertise (21.1; 3).

People want to be developed within the organization (value; ranking):
Absorptive capacity (13.75; 5), Constructive feedback (14.9; 4), Stress management (15.7; 2), Organization support learning (12.75; 6).

Table 3. Results of test run with low–high combination.

People do not see a need for development within this competence (value; ranking):
Teamwork and cooperation (9; 22).

People see a need for development within this organizational feature (value; ranking):
Teamwork and collaboration (15; 3).
feel that their competence in "Teamwork and cooperation" does not need much development in the future.

3. In the high–low combination, the creative tension of the individual innovation competence is high, indicating a desire for development, whereas the creative tension of the organization's innovation enabler is low, indicating satisfaction with the current state of organizational support. Table 4 can be interpreted as showing individual competences that people want to develop, while the organizational enablers supporting them are already at a good level for this development to happen.

4. In the low–low combination, the creative tension of the individual innovation competence is low, indicating a good level, and similarly the creative tension of the organization's innovation enabler is low, indicating satisfaction with the current state of organizational support. Table 5 can be interpreted as showing that these individual competences, and the organizational enablers supporting them, are at a good level. These competences and organizational innovation enablers are the cornerstone of this organization's innovativeness.
Table 4. Results of test run with high–low combination.

People want to develop within themselves (value; ranking):
Intuitive thinking (21.2; 2), Analytical thinking (18.1; 7), Conceptual thinking (16; 13), Communication (18.3; 6), Flexibility (16.8; 9), Self-development (16.7; 10).

People do not see much need for development within the organization (value; ranking):
Idea generation (9.5; 18), Idea generation (9.5; 18), Idea generation (9.5; 18), Communication (11.05; 11), Organizational flexibility (11.2; 10), Organization support development (12.15; 7).

Table 5. Results of test run with low–low combination.

People do not see much need for development within themselves (value; ranking):
Imagination (10.7; 20), Divergent thinking (7.9; 25), Achievement orientation (8.3; 23), Trustworthiness (8.15; 24), Independence (7.05; 26), Risk orientation (4.5; 27).

People do not see much need for development within the organization (value; ranking):
Idea generation (9.5; 18), Idea generation (9.5; 18), Challenge (10.3; 13), Openness and trust (9.7; 16), Freedom (9.4; 19), Risk tolerance (9.8; 15).
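Read procedurally, the combination analysis above amounts to labelling each competence/enabler pair by whether the creative tension on each side falls into the most significant group. The short Python sketch below is our simplified illustration of that logic, not part of the study; in the study itself the grouping comes from the Friedman test rankings, which the boolean inputs here stand in for.

def classify(individual_high, organizational_high):
    # individual_high / organizational_high: whether the creative
    # tension of the competence / enabler falls into the most
    # significant group (per the Friedman test ranking in the study)
    side = lambda high: "high" if high else "low"
    return side(individual_high) + "-" + side(organizational_high)

# e.g. classify(True, True) -> "high-high": needs the most attention;
# classify(False, False) -> "low-low": the organization's cornerstone.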
However, when considering this or any other organization's development measures, the entire palette should be viewed holistically. Partial optimization, or overweighting single organizational enablers, may do more harm than good. Therefore, relying on the common sense and experience of the organization when planning development
measures can be recommended. Naturally, the current and desired future status of each competence or organizational enabler also has to be considered, as the creative tension alone does not portray the entire situation. Additionally, it should be stressed that this representation of competences and organizational innovation enablers is not the truth, nor is it intended to be. It is merely a glimpse of one vision of one organization's reality.

5. Conclusions

This chapter has sought to explore the linkage between individual innovation competences and organizational innovation through the concepts of creative tension and proactive vision. The theoretical background of this study is the co-evolutionary creation of MOOs (management object ontologies). These ontologies are the basis for building dynamic, computer-supported questionnaires for data collection. The collected individual data can then be aggregated to comprehend the organization's collective view of the current state together with the future state, thus portraying the creative tension and proactive vision. The study suggests that building MOOs of individual innovation competences and organizational innovation enablers, and gathering the corresponding information via questionnaires, is a first step towards collecting interesting and comparable data on the two sides of innovation: the individual and the organizational. This new way of collecting innovation data may yield interesting information about innovation in organizations in the future. When this data collection is then expanded from the individual and organizational levels to the national and even international level, the true nature of innovation may be revealed, at least from one very essential point of view: that of the people working in the organizations. In summary, we suggest that MOOs combined with data collection are an effective way to approach the complex concept of innovation within organizations. Furthermore, answering the initial question of how to boost innovation culture and innovators remains a task for the future, but this approach provides a tool to gather more valuable information on the essence of innovation competence and organizational innovation. The approach described in this chapter also has implications for management: the abstract concepts of innovation culture and innovation competence become manageable, which suggests that organizations should be able to achieve better "innovation results."

Acknowledgements

The Finnish Funding Agency for Technology (TEKES) has been the main financing body for this research. We refer to the Flexi project E!3674 ITEA2 Flexi (Added Value Ontology), decision Tekes 40176/07.
References

Amabile, TM (1997). Motivating creativity in organizations: On doing what you love and loving what you do. California Management Review, 40(1), 39–58.
Amabile, TM (1998). How to kill creativity. Harvard Business Review, September–October, 77–87.
Amabile, TM, R Conti, H Coon, J Lazenby and M Herron (1996). Assessing the work environment for creativity. Academy of Management Journal, 39(5), 1154–1184.
Aramo-Immonen, H, J Kantola, H Vanharanta and W Karwowski (2005). The web-based trident application for analyzing qualitative factors in mega project management. Proceedings of the 16th Annual International Conference of IRMA2005, Information Resources Management Association International Conference, San Diego, California, May 15–18, 2005.
Beardwell, I and L Holden (1995). Human Resource Management: A Contemporary Perspective. Pitman Publishing.
Beer, S (1994). Brain of the Firm, 2nd edn. Chichester: Wiley.
Conover, WJ (1999). Practical Nonparametric Statistics, 3rd edn. New York: John Wiley & Sons.
Cronbach, LJ (1990). Essentials of Psychological Testing. New York: Harper Collins.
Cumming, BS (1998). Innovation overview and future challenges. European Journal of Innovation Management, 1(1), 21–29.
Dessler, G (2001). A Framework for Human Resource Management, 2nd edn. New Jersey: Prentice Hall.
Edvinsson, L and SM Malone (1997). Intellectual Capital. New York: HarperCollins Publishers.
Ekvall, G (1996). Organizational climate for creativity and innovation. European Journal of Work and Organizational Psychology, 5(1), 105–123.
Etzkowitz, H and L Leydesdorff (1998). The endless transition: A "triple helix" of university–industry–government relations. Introduction to a theme issue. Minerva, 36, 203–208.
Etzkowitz, H (2003). Innovation in innovation: The triple helix of university–industry–government relations. Social Science Information, 42(3), 293–337.
Goleman, D (1998). Working with Emotional Intelligence. London: Bloomsbury.
Goleman, D (2006). Social Intelligence: The New Science of Human Relationships. London: Hutchinson.
Jackson, CM (2004). Systems Thinking: Creative Holism for Managers. West Sussex, England: John Wiley & Sons.
Jussila, J, A Suominen and H Vanharanta (2008). Competence to innovate? In Karwowski, W and Salvendy, G (eds.), 2008 AHFE International Conference, 14–17 July 2008, Caesars Palace, Las Vegas, Nevada, USA, Conference Proceedings, 10 p.
Kantola, J (2005). Ingenious management. Doctoral thesis, Tampere University of Technology at Pori, Finland.
Kantola, J, H Vanharanta and W Karwowski (2005). The evolute system: A co-evolutionary human resource development methodology. In International Encyclopedia of Human Factors and Ergonomics, Vol. 3, W Karwowski (ed.), pp. 2902–2908.
Kay, J (1993). Foundations of Corporate Success. New York: Oxford University Press.
Kidd, D (1985). Productivity analysis for strategic management. In Guth, W (ed.), Handbook of Business Strategy (17/1–17/25). Massachusetts: Gorham & Lamont.
Martins, EC and F Terblanche (2003). Building organisational culture that stimulates creativity and innovation. European Journal of Innovation Management, 6(1), 64–74.
McLean, LD (2005). Organizational culture's influence on creativity and innovation: A review of the literature and implications for human resource development. Advances in Developing Human Resources, 7(2), 226–246.
Merrill, DW and RH Reid (1999). Personal Styles & Effective Performance. New York: CRC Press.
Miller, WC (1987). The Creative Edge. Cambridge: Perseus Publishing.
Nurminen, K (2003). Deltoid: The competences of nuclear power plant operators. Master of Science thesis, Tampere University of Technology at Pori, Finland.
OECD (1999). Managing National Innovation Systems. Paris: OECD Publications Service.
Paajanen, P, J Kantola and H Vanharanta (2004). Evaluating the organization's environment for learning and knowledge creation. 9th International Haamaha Conference: Human & Organizational Issues in the Digital Enterprise, Galway, Ireland, 25–27 August 2004.
Schein, EH (2004). Organizational Culture and Leadership, 3rd edn. San Francisco, CA: Jossey-Bass, 438 p.
Senge, PM (2004). Presence: Human Purpose and the Field of the Future. Cambridge, MA: Society for Organizational Learning.
Senge, PM (1994). The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Currency Doubleday.
Stone, R (1998). Human Resource Management, 3rd edn. Brisbane: John Wiley and Sons, 854 p.
Suominen, A, J Jussila and H Vanharanta (2008a). Hydro power plant: Metaphor for innovation culture. In Karwowski, W and Salvendy, G (eds.), 2008 AHFE International Conference, 14–17 July 2008, Caesars Palace, Las Vegas, Nevada, USA, Conference Proceedings, 10 p.
Suominen, A, J Jussila, P Porkka and H Vanharanta (2008b). Interrelations of development needs between innovation competence and innovation culture? In Proceedings of the 6th International Conference on Manufacturing Research (ICMR08), Brunel University, UK, 9–11 September 2008.
Sure, Y, S Staab and R Studer (2003). On-To-Knowledge Methodology (OTKM). In Handbook on Ontologies, Staab, S and Studer, R (eds.), pp. 117–132. Berlin: Springer.
Torrington, D and L Hall (1991). Personnel Management: A New Personnel Approach, 2nd edn. London: Prentice Hall, 661 p.
Trott, P (2005). Innovation Management and New Product Development, 3rd edn. Essex: Pearson Education Limited.
Ulijn, J and T Brown (2004). Innovation, entrepreneurship and culture, a matter of interaction between technology, progress and economic growth? An introduction. In Innovation, Entrepreneurship and Culture, Brown, T and J Ulijn (eds.). Cheltenham, UK: Edward Elgar.
Vanharanta, H, J Kantola and W Karwowski (2005). A paradigm of co-evolutionary management: Creative tension and brain-based company development systems. HCI International, Las Vegas, Nevada, USA.
Zadeh, LA (1965). Fuzzy sets. Information and Control, 8, 338–353.
Biographical Notes

Andrea Bikfalvi has a PhD in Business Administration and conducts teaching and research in the Department of Business Administration and Product Design at the University of Girona, Spain. Her research focuses on technological and organizational innovation, new technologies in business contexts, academia–business relationships, new venture creation, and large-scale surveys. Since joining the department, she has undertaken teaching activities and participated in various research projects, including projects on international surveys and university spin-offs, as well as on innovation in education and teaching, networks of innovation, and research in teaching.

Jari Jussila is a PhD candidate at Tampere University of Technology (TUT), Finland. He holds an MSc (Industrial Management and Engineering), and his research interest is focused on knowledge management. His main experience is derived from information systems projects. Since 2007 he has been working as a managing consultant at Yoso Services Oy.

Anu Suominen is a PhD candidate at Tampere University of Technology (TUT) in Finland. She holds an MSc (Industrial Management and Engineering), and her research interest is in leadership and management from the strategy, knowledge, and innovation points of view. Her prior working experience is in logistics, particularly operational exports in the metal and information network industries. Since 2007 she has been working as a researcher at TUT.

Jussi Kantola works as an associate professor in the world's first knowledge service engineering department at KAIST (Korea Advanced Institute of Science and Technology) in Korea. He is an adjunct professor at Tampere University of Technology in the Department of Industrial Management and Engineering in Pori, Finland. His research and teaching interests currently include the application of ontologies, e-learning, and soft computing. He received his first PhD degree from the Department of Industrial Engineering at the University of Louisville, USA, in 1998, and his second PhD degree from the Department of Industrial Management and Engineering at Tampere University of Technology, Finland, in 2006. Earlier he worked as an IT consultant in the USA and as a business and process consultant for ABB in Finland.

Professor Hannu Vanharanta (born 1949) began his professional career in 1973 as a technical assistant at the Turku office of the Finnish Ministry of Trade and Industry. In 1975–1992, he worked for Finnish international engineering companies, i.e., Jaakko Pöyry, Rintekno, and Ekono, as process engineer, section manager, and leading consultant. His doctoral thesis was approved in 1995. In 1995–1996, he was a professor of Business Economics at the University of Joensuu. In 1996–1998, he served as a professor of Purchasing and Supply Management at the
Lappeenranta University of Technology. Since 1998 he has been a professor of Industrial Management and Engineering at Tampere University of Technology at Pori. His research interests are human resource management, knowledge management, strategic management, financial analysis, e-business, and decision support systems.
Chapter 16
A Decision Support System for Assembly and Production Line Balancing

A. S. SIMARIA
The Advanced Centre for Biochemical Engineering, Department of Biochemical Engineering, University College London, Torrington Place, London WC1E 7JE, United Kingdom
[email protected]

A. R. XAMBRE, N. A. FILIPE and P. M. VILARINHO
Department of Economics, Management and Industrial Engineering, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
[email protected], [email protected], [email protected]
In this chapter, a system to support the design of assembly and production lines used in the make-to-order production phase is presented. The relevance of the system is supported by the fact that current market dynamics often lead to frequent modifications in the allocation of manufacturing resources; as a result, decisions related to manufacturing system design that used to belong to the strategic level are now taken at the tactical level, and thus require new tools to support them. The decision support system addresses two different categories of problems: (i) assembly line balancing and (ii) production line balancing. Due to the high complexity of these problems, it uses heuristic methods based on evolutionary computation approaches to tackle them. The system aggregates several modules to address the different problems, a database to handle both input and output data, and an interface that enables user-friendly interaction with the decision maker. Keywords: Assembly lines; production lines; manufacturing system design; evolutionary computation.
1. Introduction

The current global marketplace environment, characterized by intense competition, together with the increased pace of technological change, has led to the shortening of product life cycles and an increase in product variety. Industrial companies must
be able to provide a high degree of product customization to fulfill the needs of increasingly sophisticated customer demand. Moreover, responsiveness in terms of short and reliable delivery lead times is required by a market where time is seen as a key driver. Mass customization is a response to this phenomenon. It refers to the design, production, marketing, and delivery of customized products on a mass basis. This means that customers can choose, order, and receive specially configured products, often selecting from a variety of product options, to meet their individual needs. On the other hand, customers are not willing to pay high premiums for these customized products compared to competing standard products in the market. They want both flexibility and productivity from their suppliers. To respond to this changing environment, industrial companies need to maximize the usage rate of their production resources, namely assembly lines. Historically, assembly lines were used to produce a low variety of products in high volumes, as they allowed for low production costs, reduced cycle times, and consistent quality levels. These are important advantages from which companies can derive benefits if they want to remain competitive. However, single-model assembly lines (used over the past decades), designed for a single homogeneous product, are the production systems least suited to high-variety demand scenarios. As manufacturing shifts from high volume/low mix production to high mix/low volume production, mixed-model assembly lines, in which a set of similar models of a product can be assembled simultaneously, are better suited to respond to new market demands. Cellular manufacturing systems are another form of production system suited to coping with high product variety and short lead times. In this type of system, functionally different machines are grouped together into manufacturing cells to produce a set of product families, and each cell can be seen as a production line. The use of an appropriate type of assembly line (namely, mixed-model, U-shaped, two-sided, etc.) or production line (namely, manufacturing cells), suited to the new manufacturing demand paradigm, is therefore a crucial factor in the success of the company in delivering customized products at low costs. Having developed a set of models to help decision-makers in the design of mixed-model assembly lines with parallel workstations of different types (straight lines, U-lines, and two-sided lines), and also a model for balancing a manufacturing cell (or production line), the next step was to incorporate them into a decision support system (DSS) that can help the decision-maker not only to interact with the models but also to generate and compare different solutions to the problems under analysis. The main purpose is to provide a user-friendly interface, allowing the line designer to be assisted, in a simple but powerful way, in his final decision. The following section of this chapter presents the structure of the main elements of the proposed DSS, namely the data and model management bases and the interface. Section 3 explains in more detail the models that are available in the system, and finally Section 4 points out some conclusions.
Figure 1. DSS components.
2. Structure of the Decision Support System

The proposed system includes the three typical subsystems of a DSS: data management, model management, and interface (Turban and Aronson, 1998). These three elements are connected and interact with the user as shown in Fig. 1. A brief explanation of these elements is given in the following paragraphs.

2.1. Model Management

The model management subsystem includes the algorithms, previously developed by the authors, to address the different types of assembly and production line balancing problems. The assembly line balancing problem (ALBP) arises when designing (or redesigning) an assembly line and consists of finding a feasible assignment of tasks to workstations such that the assembly costs are minimized, the demand is met, and the constraints of the assembly process are satisfied. The type I ALBP aims at minimizing the number of workstations for a given cycle time, while type II aims at minimizing the cycle time for a given number of workstations. This problem has been extensively researched, and comprehensive literature reviews addressing it include the works of Ghosh and Gagnon (1989), Scholl (1999) and, more recently, Becker and Scholl (2006) and Scholl and Becker (2006). The algorithms included in the DSS are meta-heuristic based procedures, used to balance assembly lines with characteristics that better reflect industrial reality, namely:

• Mixed-model: Mixed-model assembly lines allow the simultaneous assembly of a set of similar models of a product, which may be launched in the assembly line in any sequence. As the trend for current markets is to have a wider product range
and variability, mixed-model assembly lines are preferred over the traditional single-model assembly lines.

• Parallel workstations: The use of parallel workstations, in which two or more replicas of a workstation perform the same set of tasks on different products, allows for cycle times shorter than the longest task time. This increases the line production rate and also provides greater flexibility in the design of the assembly line.

• Two-sided lines: Two-sided assembly lines are a special type of assembly line in which workers perform assembly tasks on both sides of the line. This type of line is of great importance, especially in the assembly of large-sized products, like automobiles, buses, or trucks, in which some tasks must be performed on a specific side of the product.

• Flexible U-lines: When the demand for the products, and consequently the production volume of the line, is highly variable, the lines have to be frequently re-balanced. This represents a cost for the companies that could be reduced if the lines were easily adaptable to changes in production volumes and product mix. Flexible U-lines address these issues because, whenever the production volume changes, the line layout remains the same but the number of operators working on the line and the tasks they perform are adjusted in order to meet the demand.

The increasing demand for personalized products has led to the necessity to alter the characteristics of traditional assembly lines and has also led to the development of other types of production systems that are able to maintain high flexibility while keeping the main advantages of an assembly line. Cellular manufacturing has emerged as an alternative that combines the advantages of both product- and process-oriented systems for a high-variety and medium-volume product mix (Burbidge, 1992). In a cellular manufacturing system, functionally diverse machines are grouped into dedicated cells used to exploit flow shop efficiency in processing the different parts. Cellular manufacturing systems are hybrid systems that exhibit the characteristics of process-based systems at the plant level and the characteristics of product-based systems at the cell level. In many cellular manufacturing systems the intracellular flow pattern behaves like an unpaced production line (where most of the workstations are machines) in which the parts do not necessarily follow the same unidirectional flow. In an unpaced line there is no fixed time for a resource to complete its workload. The design of the cell and the production line balancing problem become quite complex, since they involve both workers and equipment and also tasks that require different types of resources. All these different types of problems were addressed using meta-heuristic based procedures, and the resulting computer programs were included in the DSS. Section 3 describes in more detail the main features of each of the models developed to address the different assembly and production line balancing problems.
Figure 2. Class diagram.
2.2. Data Management

Figure 2 shows the class diagram, using the UML (Unified Modeling Language) notation (Larman, 2004), for the database of the proposed DSS. The diagram supports both input and output data objects, and it was built so that it can be used with every algorithm and type of problem considered in the system. Therefore, although some object classes are common to every line balancing problem (e.g., Problem, Task, Solution), others were purposely created to fit the requirements of the addressed problems. The input data are associated with the characteristics of a specific line balancing problem. It is necessary to specify the set of tasks belonging to an assembly/production process, their precedence relationships, zoning constraints, and processing times for the different models, as well as the demand values for each model (and demand scenario in the case of flexible U-lines) and the production period.
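As a rough illustration of the classes named above (Problem, Task, Solution), the sketch below uses Python data classes; the attributes shown are plausible guesses from this description, not the chapter's actual schema.

from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    predecessors: list      # precedence relationships (ids of predecessor tasks)
    times: dict             # processing time per model

@dataclass
class Problem:
    tasks: list             # the Task objects of the process
    demands: dict           # demand per model over the production period
    production_period: float
    zoning: list = field(default_factory=list)  # zoning constraints

@dataclass
class Solution:
    assignment: dict        # task -> workstation (and operator/machine)
    performance: dict       # e.g., line efficiency, workload balance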
This data will be the input for an adequate algorithm, selected from the set of procedures available in the system, which will then produce a balancing solution. The output data are the assignment of tasks to workstations, operators, and/or machines, depending on the nature of the problem addressed. For example, in straight assembly lines one workstation corresponds to one operator, while in flexible U-lines one workstation may have more than one operator working on it and one operator may perform tasks in more than one workstation. Either way, a task can only be assigned to one workstation and one operator. If the problem is related to production lines, a task can also be assigned to one machine. Each balancing solution provided by the system is characterized by a set of performance measures (e.g., line efficiency, workload balancing between and within stations).

2.3. Interface

Figure 3 illustrates, using a simple flowchart, how the user can interact with the system. Essentially, the decision maker must first distinguish between an assembly line and a production line. This type of decision is assisted by information the user can find in the help menu, easily accessible in every part of the system.
Figure 3. Interaction with the DSS.
If the goal is to balance a production line, the user will be directed to an interface where he can choose to upload a set of existing data, introduce a new set of data, or save data already introduced and/or modified. It is important to note that the input data will be validated according to the requirements of the specific algorithm. Then the production line balancing algorithm can be activated and a solution is presented. If, however, the user chooses to balance an assembly line, then he has to decide the type of assembly line to study: straight line, two-sided line, or flexible U-line. The options are basically the same as referred to previously (upload a set of existing data, introduce a new set of data, or save data), except in the case of a straight line. For this situation, the DSS includes three algorithms, so it is possible to generate and compare three different solutions. The system presents the best one, but the user can opt to examine the other solutions that were generated. After obtaining a solution to his/her problem, the decision maker can save the solution or test another set of data. Figure 4 shows the interface used to introduce the data in a flexible U-line situation and Fig. 5 illustrates the output for a simple instance of that problem. There was an effort to build user-friendly interfaces and, although the system is directed at specialist users with a good understanding of the problems and algorithms, it provides help to the decision maker in the selection of the most suitable type of production system configuration.

3. Assembly and Production Line Balancing Algorithms Included in the DSS

3.1. Mixed-model with Parallel Workstations

In a mixed-model assembly line, a set of similar models of a product is assembled in any sequence. The mixed-model nature of the problem at hand requires the cycle time to be defined taking into account the different models' demand over the planning horizon. So, if a line is required to assemble M models, each with a demand of D_m units over the planning horizon P, the cycle time of the line is computed from the following equation:

C = \frac{P}{\sum_{m=1}^{M} D_m} \qquad (1)

The overall proportion of the number of units of model m being assembled (or the production share of model m) is given by the following equation:

q_m = \frac{D_m}{\sum_{p=1}^{M} D_p} \qquad (2)
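To make the computation concrete, the following minimal Python sketch applies Eqs. (1) and (2); it is illustrative only, and the function and variable names are not from the chapter.

def cycle_time(planning_horizon, demands):
    # Eq. (1): C = P divided by the total demand over the horizon
    return planning_horizon / sum(demands)

def production_shares(demands):
    # Eq. (2): q_m = D_m divided by the total demand
    total = sum(demands)
    return [d / total for d in demands]

# Example: a 2400-minute planning horizon and three models with demands
# 100, 60, and 40 units give C = 12 minutes and shares 0.5, 0.3, 0.2.
C = cycle_time(2400, [100, 60, 40])
q = production_shares([100, 60, 40])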
Figure 4. Input data interface.
The main features of the addressed assembly line balancing problems are the following: (i) the line is used to assemble a set of similar models of a product; (ii) the workstations along the line can be replicated to create parallel workstations, when the demand is such that some tasks have processing times higher than the cycle time; (iii) the assignment of tasks to a specific workstation can be forced or forbidden through the definition of zoning constraints. Taking into account these features, three types of constraints are defined for the assembly process: precedence constraints, zoning constraints, and capacity
Figure 5. Output data interface.
constraints. The particular issues of these types of constraints are discussed in the following sections.

3.1.1. Precedence constraints

The precedence constraints determine the sequence in which the tasks can be assembled. Precedence constraints are usually depicted in a precedence diagram like the one presented in Fig. 6, where each node represents a task and each arc represents a precedence relationship between a pair of tasks (e.g., in the diagram shown in Fig. 6, tasks 8 and 9 are predecessors of task 12). A task can only be assigned to a workstation if it has no predecessors or if all of its predecessors have already been assigned to that workstation or to preceding workstations (e.g., in the diagram shown in Fig. 6, task 12 can only be performed after tasks 8 and 9 are concluded). In mixed-model assembly lines, the assembly processes of the models must be sufficiently similar to allow the combination of the precedence diagrams of the individual models into a combined precedence diagram, from which the precedence constraints for the mixed-model assembly line balancing problem are derived. One should note,
Figure 6. Example of a precedence diagram.
however, that tasks can have different processing times for the different models and may not even be needed for some of them.

3.1.2. Zoning constraints

Zoning constraints can be positive or negative. Positive zoning constraints force the assignment of certain tasks to a specific workstation. In the proposed approach, the tasks that need to be allocated to the same workstation are merged and treated by the procedure as a single task. Negative zoning constraints forbid the assignment of certain tasks to the same workstation. In the proposed procedures, a task is not available for assignment to a workstation if an incompatible task has already been assigned to that workstation.

3.1.3. Capacity constraints

The proposed approach is meant to deal with labor-intensive assembly lines. This type of line is usually staffed by low-skilled labor that can be easily trained, so the number of tasks assigned to each worker (or workstation) needs to be kept to a minimum. This is an important issue that needs to be accounted for when parallel workstations are allowed, as in the proposed approach. To picture this issue, consider the extreme case of the use of parallel workstations in which a single workstation is replicated enough times to account for all the models' demand. In this extreme case, each worker performs all the tasks in the assembly process, contradicting the use of low-skilled labor. In the developed procedures, the replication of workstations is controlled in such a way that parallel workstations are created only when needed to satisfy demand. Under certain demand conditions, the assembly line needs to be operated with a cycle time such that some of the tasks in the assembly process have processing times higher than this cycle time. In this case, the proposed approach allows the replication of the workstation to which these tasks are assigned, in order for demand to be met.
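Taken together, the three constraint types determine whether a task may join a given workstation. The sketch below is a hedged illustration of how such a combined check could look, not the chapter's implementation; the data structures (predecessor lists, pairs of incompatible tasks, and a simple cap on the number of tasks per workstation standing in for the capacity rule) are assumptions.

def can_assign(task, workstation, assignment, predecessors, incompatible, max_tasks):
    # assignment: tasks already placed, mapped to workstation indices
    # numbered from the start of the line (0, 1, 2, ...)
    # Precedence: every predecessor must already sit in this or an
    # earlier workstation.
    for p in predecessors.get(task, ()):
        if p not in assignment or assignment[p] > workstation:
            return False
    station_tasks = [t for t, w in assignment.items() if w == workstation]
    # Negative zoning: no incompatible task in the same workstation.
    if any((task, t) in incompatible or (t, task) in incompatible
           for t in station_tasks):
        return False
    # Capacity (simplified): keep the number of tasks per worker small.
    return len(station_tasks) < max_tasks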
3.1.4. Objective function

In assembly line balancing problems of type I, the main goal is to minimize the number of workstations for a given cycle time (C). When parallel workstations are allowed, one must distinguish between the total number of operators in the line (S) and the number of different workstations in the line, which we call the line length (LL). This way, if some operators carry out the same set of tasks in parallel workstations, there will be more operators than different workstations. The number of operators in an assembly line is given by

S = \sum_{k=1}^{LL} R_k \qquad (3)
where R_k is the number of replicas of workstation k. The goal of the ALBP-I is then to minimize S for a given C. This goal can be attained by maximizing an objective function such as the weighted line efficiency (WE), which is computed as follows:

WE = \frac{\sum_{m=1}^{M} q_m \cdot \sum_{i=1}^{I} t_{im}}{S \cdot C} \qquad (4)
where t_{im} is the processing time of task i for model m. WE reaches a maximum of 1 if there is no idle time in the balancing solution, a perfect situation rarely found in real-world assembly lines. However, using only this objective function as a quality measure may not be enough to provide good balancing solutions, as all solutions with the same number of operators would have exactly the same value of WE. To ensure a smooth distribution of work, a secondary goal is introduced so that the procedure also balances the workloads between workstations (i.e., for each model, the idle time is distributed across the workstations as equally as possible). The balance between workstations (B_b) is given by

B_b = \frac{1}{LL - 1} \sum_{k=1}^{LL} \left( \frac{\sum_{m=1}^{M} q_m s_{km}}{IT} - \frac{1}{LL} \right)^2 \qquad (5)
where s_{km} is the idle time of workstation k due to model m and IT is the average idle time of the line, given by

IT = \sum_{k=1}^{LL} \sum_{m=1}^{M} q_m \cdot s_{km} \qquad (6)
The value of B_b reaches its minimum of 0 when IT is equally distributed across all workstations in the line, which is the ideal situation. Considering the mixed-model nature of the problem, it is also necessary to balance the workloads within each workstation, thus ensuring that approximately
the same amount of work is carried out in each workstation regardless of the model being assembled. The goal of balancing the workloads within workstations is achieved by including the function B_w in the procedure, which is computed as follows:

B_w = \frac{1}{LL(M - 1)} \sum_{k=1}^{LL} \sum_{m=1}^{M} \left( \frac{q_m s_{km}}{\bar{S}_k} - \frac{1}{M} \right)^2 \qquad (7)
where \bar{S}_k is the average idle time of workstation k, given by

\bar{S}_k = \sum_{m=1}^{M} q_m \cdot s_{km} \qquad (8)
Like B_b, B_w varies within the range [0, 1], 0 being the perfect situation, in which the workloads in each workstation are exactly the same for every model. The global objective function of the algorithm must then take into account the values of WE, B_b, and B_w; however, WE is clearly more important than B_b and B_w, so its weight must be higher. This way, the objective function Z to maximize is given by

Z = 10 \cdot WE - B_b - B_w \qquad (9)
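The sketch below evaluates Eqs. (3) through (9) for a candidate balancing solution. It is illustrative only: all names are ours, the idle times s[k][m] are assumed to have been derived from the task assignment beforehand, and it presumes more than one workstation and model and nonzero idle times.

def objective(q, t_total, s, R, C):
    # q[m]: production share of model m (Eq. (2))
    # t_total[m]: summed processing times of all tasks for model m
    # s[k][m]: idle time of workstation k due to model m
    # R[k]: replicas of workstation k; C: cycle time
    LL, M = len(R), len(q)
    S = sum(R)                                                # Eq. (3)
    WE = sum(q[m] * t_total[m] for m in range(M)) / (S * C)   # Eq. (4)
    IT = sum(q[m] * s[k][m] for k in range(LL) for m in range(M))  # Eq. (6)
    Bb = sum((sum(q[m] * s[k][m] for m in range(M)) / IT - 1 / LL) ** 2
             for k in range(LL)) / (LL - 1)                   # Eq. (5)
    Bw = sum((q[m] * s[k][m] / sum(q[p] * s[k][p] for p in range(M)) - 1 / M) ** 2
             for k in range(LL) for m in range(M)) / (LL * (M - 1))  # Eqs. (7)-(8)
    return 10 * WE - Bb - Bw                                  # Eq. (9)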
For balancing mixed-model assembly lines with parallel workstations and zoning constraints in a straight line configuration, as described above, three procedures were developed to address both type I and type II problems: a simulated annealing based procedure (Vilarinho and Simaria, 2002), a genetic algorithm (Simaria and Vilarinho, 2004), and an ant colony optimization (ACO) algorithm (Vilarinho and Simaria, 2006). The main features of these procedures are outlined in the following paragraphs.

3.1.5. Simulated annealing based approach

The simulated annealing based procedure developed to address the described problem works in two stages and is fully presented in the paper by Vilarinho and Simaria (2002). In the first stage the procedure looks for a sub-optimal solution for the problem's main goal, the minimization of the number of workstations. In the second stage, the additional goals of workload balancing are pursued. In both stages a simulated annealing approach is used. The framework of this procedure is presented in Fig. 7. In the initial solution, tasks are assigned to the lowest numbered feasible workstation in decreasing order of their positional weight, considering the individual task processing times for each model. In the first stage the procedure looks for the solution that minimizes the number of workstations in the assembly line, so the weighted line efficiency (WE), as defined in Eq. (4), is used as the objective function. A neighboring solution can be generated by one of the following actions: (i) swapping two tasks in different workstations or (ii) transferring a task to another
Figure 7. The two-stage simulated annealing based procedure.
workstation. The tasks to be swapped, as well as the task and the workstation for the transfer, are randomly chosen. For any of these actions to result in a new neighboring solution, the precedence, zoning, and capacity constraints must be fulfilled. When this is not the case, a new swap or transfer must be attempted. Only transfer movements may contribute to reducing the number of workstations, thus maximizing line efficiency. Nevertheless, swap movements are also required to ease the generation of successful transfer movements. So, the probability of performing a transfer must be higher than that of a swap; by default, probabilities of 75% and 25%, respectively, were set, although the user can set different values. In both stages of the proposed procedure, a taboo list is used to maintain information about the most recently generated neighboring solutions, in order to avoid cycling. The goal of the second stage is to simultaneously balance the workloads between and within workstations, for the number of workstations obtained in the first stage. The initial solution of the second stage is the final solution found in the first stage. The criterion used to evaluate the neighboring solutions generated in this second stage derives directly from the objective functions B_b and B_w computed by Eqs. (5) and (7), respectively. The generation of neighboring solutions in the second stage also employs swap and transfer movements, but the tasks and workstations involved in these movements are selected to foster improvement, i.e., to improve workload smoothing. As the goal in this second stage is to balance the workloads, swap movements are more likely to contribute towards this end (probabilities of 75% for swap and 25% for transfer moves are set as the default). If, after a predefined number of attempts, neither swap nor transfer movements lead to a neighboring solution, the tasks or workstations involved in these movements are randomly selected to force a new neighboring solution. For information about the annealing schedule used, please refer to Vilarinho and Simaria (2002).
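The following sketch illustrates the swap/transfer neighborhood just described. It is a simplified illustration under assumed data structures; the feasibility function is a placeholder standing in for the precedence, zoning, and capacity checks.

import random

def is_feasible(solution):
    # Placeholder: in the real procedure this runs the precedence,
    # zoning, and capacity checks on the candidate assignment.
    return True

def neighbor(solution, n_workstations, p_transfer=0.75, max_tries=100):
    # solution: dict mapping each task to its workstation
    tasks = list(solution)
    for _ in range(max_tries):
        candidate = dict(solution)
        if random.random() < p_transfer:
            # Transfer: move a randomly chosen task to a random workstation.
            candidate[random.choice(tasks)] = random.randrange(n_workstations)
        else:
            # Swap: exchange the workstations of two randomly chosen tasks.
            a, b = random.sample(tasks, 2)
            candidate[a], candidate[b] = candidate[b], candidate[a]
        if is_feasible(candidate):
            return candidate
    return solution  # no feasible neighbor found; keep the current one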
3.1.6. Genetic algorithms based approach

The structure of the proposed genetic algorithm based procedure to tackle the mixed-model assembly line balancing problem is a standard one, with its main steps presented in Fig. 8. The procedure is fully presented in Simaria and Vilarinho (2004). A standard encoding scheme is used in which the chromosome is a string of length equal to the number of tasks. Each element of the chromosome represents a task, and the value of each element represents the workstation to which the corresponding task is assigned. An example of this type of encoding scheme is presented in Fig. 9.
Figure 8. Global structure of the genetic algorithm based approach.

Figure 9. Encoding scheme and corresponding balancing solution.
14:45
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch16
DSS for Assembly and Production Line Balancing
397
The initial population is composed by a set of individuals (or chromosomes), each of them representing a solution for the addressed problem. The individuals of the initial population are generated via simple constructive heuristics. Each time a task must be selected for assignment from the set of available tasks, the heuristic randomly selects the priority rule to be used. The goal of genetic algorithms is to find the fittest individual over a set of generations. The fitness function is the one described in Eq. (9). The selection of the individuals for mating is done using a tournament, a very popular strategy that aims to imitate mutual competition of individuals during casual meetings, with the typical value of 2 for the tournament size. The main genetic operator is the crossover, which has the role to combine pieces of information from different individuals in the population. Two parents (P1 and P2 ) are selected from the tournament list and a crossover point (cp), an integer randomly generated from [1, LL], is selected. The combination of P1 and P2 will produce two offspring (O1 and O2 ). To generate offspring O1 (O2 ), the assignment of workstations 1 to cp is copied from P1 (P2 ) and the remaining positions are copied from the assignment of workstations cp + 1 to LL from P2 (P1 ). Figure 10 illustrates a crossover example. As it is shown in Fig. 10, although precedence constraints are verified, the crossover produces some tasks without any workstation associated (tasks 8, 9, 12, and 14 for O1 and tasks 10, 11, 13, and 16 for O2 ). These tasks must, therefore, be reassigned in order to achieve feasible individuals. The reassignment procedure aims to allocate the tasks to workstations in such a way that precedence and zoning constraints are satisfied and, if possible, the number of workstations is reduced. For each task i to be reassigned (starting with the tasks which have no precedent tasks to be reassigned), the procedure computes the earliest (Ei ) and the latest crossover point cp=7 P1
1 1 2 7 3 4 6 9
O1
1 1 2 7 3 4 6
P2
1 1 2 4 3
9 4 5 10 6 6
11 6 11 10 13 12 1313 14 15 15
7 7 5 5 8 8 6 9
5 11 9 11 10 13 12 1313 14 15 15
P1
1 1 2 7 3 4 6 9 9 4 5 10 6
9 12 6 11 8 9 14 1112 13 15 15
O2
1 1 2 4 3 7 7 5 5
5 12
P2
1 1 2 4 3
Figure 10.
4 5
9 12 6 11 8 9 14 1112 13 15 15
6
7 7 5 5 8 8 6 9
11 8 9 14 1112 13 15 15
5 11 9 11 10 13 12 13 13 14 15 15
Generation of two offspring through crossover.
(L_i) workstations to which task i can be assigned (Scholl, 1999), according to the precedence relationships between tasks. From the range of workstations [E_i, L_i], task i is assigned to the first one that meets the capacity and zoning constraints. When it is not possible to find a feasible workstation within [E_i, L_i], a new workstation is opened to perform the task. A mutation operator, which randomly disturbs genetic information by performing small changes in a single parent in order to produce a new offspring, is also included. A parent is selected to undergo mutation according to a mutation probability, and a small set of tasks is randomly selected. These tasks are reassigned using the reassignment procedure described earlier, and a new offspring is created. The mutation probability is set by default to 0.02, a typical value for this technique, and the number of tasks involved in mutation is at most 10% of the total number of tasks in the combined precedence diagram. The replacement strategy determines which individuals stay in the population and which are replaced. It also takes into account the fitness value of the individuals. Comparing each offspring with one of its parents, the offspring always replaces the parent except when the fitness value of the offspring is lower than the worst fitness value of the individuals in the previous generations; in this case, the probability of the parent continuing in the population is set to a high value (0.8 by default). In order to always keep the best individual found so far, the individual in the new population with the lowest fitness is replaced by the individual from the previous generation with the highest fitness. A trade-off between convergence and execution time is defined as the stopping criterion; this is a popular criterion in GA-based approaches (Leu et al., 1994). The procedure stops either when the fitness function of the best solution does not improve by more than 1% after a predetermined number of consecutive iterations (50 by default), or when the total number of iterations exceeds a maximum number (200 by default).

3.1.7. ACO based approach

ANTBAL is an ant colony algorithm based procedure developed by Vilarinho and Simaria (2006) to address mixed-model assembly line balancing problems. An outline of the procedure is shown in Fig. 11. ANTBAL begins by creating a sub-colony with N ants. Each ant in the sub-colony builds a feasible balancing solution, i.e., an assignment of tasks to workstations that satisfies precedence, zoning, and capacity constraints. For each feasible solution obtained, a measure of its quality is computed, according to the problem's objective function. After all ants of a sub-colony have generated a solution, they release a certain amount of pheromone according to the quality of the solution. Pheromone trails are kept in a task × task matrix. If task j is performed immediately after task i, then a certain amount of pheromone is released between task i and task j. In this way, pheromone trails are built in the paths used by the ants to build the balancing solution.
Figure 11. Outline of ANTBAL.
The procedure is repeated for every sub-colony within the ant colony. The best solution found by the procedure is updated after each sub-colony's iteration. Figure 12 presents the procedure carried out by an ant to build a feasible balancing solution. An ant begins by determining the available tasks for assignment to the current workstation, taking into account the problem constraints: (i) precedence constraints, (ii) zoning constraints, and (iii) capacity constraints. Then, from the set of available tasks, it selects one. When there are no available tasks to assign to the current workstation, a new workstation is opened. This procedure is repeated until all the tasks have been assigned. The probability of a task being selected from the set of available tasks is a function of: (i) the pheromone trail intensity between the previously selected task and each available task and (ii) the information provided by the heuristic for each available task. This information is a priority rule that is randomly assigned to each ant when the respective sub-colony is generated. The procedure uses some common static priority rules for the assembly line balancing problem (e.g., maximum positional weight, maximum processing time, maximum number of successors) and a new dynamic priority rule called "last task becoming available", which was developed for and included in the algorithm. The values of the priority rules vary between 1 for the task with the lowest priority and I (the number of tasks) for the task
Figure 12. Procedure carried out by an ant to build a feasible solution.
with the highest priority, and will be the heuristic information used by the ants to select the tasks. Let r be a random number between 0 and 1 and r_1, r_2, and r_3 three user-defined parameters such that 0 ≤ r_1, r_2, r_3 ≤ 1 and r_1 + r_2 + r_3 = 1. An ant n which has selected task i in the previous iteration will select task j by applying the following rule:

j = \begin{cases}
J_1 = \arg\max_{j \in A_i^n} [\tau_{(i,j)}]^{\alpha} [\eta_j]^{\beta} & \text{(exploitation), if } r \le r_1 \\
J_2, \text{ with } p_{(i,J_2)} = \dfrac{[\tau_{(i,J_2)}]^{\alpha} [\eta_{J_2}]^{\beta}}{\sum_{j \in A_i^n} [\tau_{(i,j)}]^{\alpha} [\eta_j]^{\beta}} & \text{(biased exploration), if } r_1 < r \le r_1 + r_2 \\
J_3, \text{ a random selection of } j \in A_i^n & \text{(random selection), if } r_1 + r_2 < r \le r_1 + r_2 + r_3
\end{cases} \qquad (10)

where τ(i,j) is the pheromone trail intensity in the path "selecting task j after selecting task i", η_j is the heuristic information of task j (e.g., the priority rule value for
task j), A_i^n is the set of available tasks for ant n after the selection of task i, and α and β are parameters that determine the relative importance of pheromone intensity versus heuristic information. The selection of a task from the set of available tasks is performed by one of three strategies:

(i) Exploitation: the best task is selected according to the values of [τ(i,j)]^α [η_j]^β.
(ii) Biased exploration: a task is selected with a probability of p(i,j), as given by J_2 in Eq. (10).
(iii) Random selection: from the set of available tasks, the ant selects one at random.

The first two strategies are based on the Ant Colony System state transition rule proposed by Dorigo and Gambardella (1997). After the task is selected, the ant assigns it to the current workstation. When all tasks have been assigned to workstations, the balancing solution is completed and solution quality measures are computed. The pheromone-release strategy is based on the one used by Dorigo et al. (1996). At the end of each sub-colony iteration, all balancing solutions provided by the ants have their objective function values computed. It is at this point that the pheromone trail intensity is updated. First, a portion of the existing pheromone value is evaporated on all paths, according to

\tau_{(i,j)} \leftarrow (1 - \rho) \cdot \tau_{(i,j)} \qquad (11)
where ρ is the evaporation coefficient (0 ≤ ρ ≤ 1). Then, each ant n releases an amount of pheromone on the paths used to build the task sequence, according to the quality of the corresponding balancing solution. This amount of pheromone is given by

\tau^n_{(i,j)} = \begin{cases} Z, & \text{if in the solution built by ant } n \text{ task } j \text{ is performed immediately after task } i \\ 0, & \text{otherwise} \end{cases} \qquad (12)
N
n τ(i,j) .
(13)
n=1
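To make the mechanics of Eqs. (10)–(13) concrete, the following is a minimal sketch of the task-selection rule and the pheromone update, not the authors' implementation; names such as `tau`, `eta`, `r1`, `r2` and the use of plain arrays are illustrative assumptions.

```java
import java.util.List;
import java.util.Random;

public class AntTaskSelection {
    static final Random RNG = new Random();

    // Eq. (10): choose the next task from the available set A_i^n.
    // tau[i][j] is the pheromone on path (i, j); eta[j] the heuristic value of task j.
    static int selectTask(int i, List<Integer> available, double[][] tau, double[] eta,
                          double alpha, double beta, double r1, double r2) {
        double r = RNG.nextDouble();
        if (r <= r1) {                       // exploitation: best task by tau^alpha * eta^beta
            int best = available.get(0);
            for (int j : available)
                if (score(tau[i][j], eta[j], alpha, beta) > score(tau[i][best], eta[best], alpha, beta))
                    best = j;
            return best;
        } else if (r <= r1 + r2) {           // biased exploration: roulette wheel on the same scores
            double total = 0.0;
            for (int j : available) total += score(tau[i][j], eta[j], alpha, beta);
            double pick = RNG.nextDouble() * total, acc = 0.0;
            for (int j : available) {
                acc += score(tau[i][j], eta[j], alpha, beta);
                if (pick <= acc) return j;
            }
            return available.get(available.size() - 1);
        } else {                             // random selection
            return available.get(RNG.nextInt(available.size()));
        }
    }

    static double score(double tau, double eta, double alpha, double beta) {
        return Math.pow(tau, alpha) * Math.pow(eta, beta);
    }

    // Eqs. (11)-(13): evaporate on every path, then deposit Z on the paths each
    // ant actually used (sequences[n] is ant n's task sequence, z[n] its quality).
    static void updatePheromone(double[][] tau, double rho, int[][] sequences, double[] z) {
        for (double[] row : tau)
            for (int j = 0; j < row.length; j++) row[j] *= (1.0 - rho);  // Eq. (11)
        for (int n = 0; n < sequences.length; n++)
            for (int k = 0; k + 1 < sequences[n].length; k++)
                tau[sequences[n][k]][sequences[n][k + 1]] += z[n];       // Eqs. (12)-(13)
    }
}
```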
At the beginning of the procedure, an initial amount of pheromone (τ0) is released on every path.

3.2. Two-sided Assembly Lines

Typically, two-sided assembly lines are used in the production of large-sized products, such as trucks and buses (Kim et al., 2000). The assembly process of this
Figure 13. Configuration of a two-sided assembly line (workstations w-1, w-3 and w-5 on the left side face w-2, w-4 and w-6 on the right side).
type of product may differ from the assembly of small products, as some assembly tasks must be performed on a specific side of the product, or on both sides of the product simultaneously (by different operators). The structure of a two-sided assembly line is depicted in Fig. 13. The line has two sides, left and right, and, in most cases, each position holds a pair of workstations directly facing each other. The two opposite operators perform, in parallel, different tasks on the same individual item. The main difference between the assignment of tasks in one-sided lines and in two-sided lines lies in the relevance of the sequence in which the tasks are performed. In one-sided lines, the sequence of the tasks within a workstation is not important as long as it satisfies the precedence constraints. In two-sided assembly lines, however, this is a crucial factor for an efficient assignment of tasks: tasks on opposite sides of the line can interfere with each other through precedence constraints, which may cause idle time whenever a workstation has to wait for a predecessor task to be completed on the opposite side of the line. In a two-sided mixed-model assembly line, a set of similar models of a product is assembled, in any order and mix, by workers who perform assembly tasks on a set of assembly stations, each of which has a pair of workstations directly opposite each other (left- and right-side workstations). The particularity of two-sided lines lies in sequencing the tasks within each workstation, on both sides of the line, in a way that minimizes the compulsory idle time due to this interference. In a two-sided assembly line a task can be (i) performed on either side of the line; (ii) required to be performed on a specific side of the line (left or right); or (iii) required to be performed simultaneously with another task on the opposite side of the line, so that a pair of operators can collaborate; such tasks are called synchronous tasks, and each is the other's mated-task. The main goal of this problem is similar to that of the straight mixed-model assembly lines described in Sec. 3.1, i.e., to minimize the number of workstations of the line while simultaneously pursuing the additional goals of (i) balancing
the workloads between workstations and (ii) balancing the workloads within the workstations for the different models.

3.2.1. ACO-based approach

In the proposed ACO algorithm for the two-sided mixed-model assembly line balancing problem, 2-ANTBAL, fully described in Simaria and Vilarinho (2007), two ants "work" simultaneously, one on each side of the line. They are called left-ant and right-ant when working on the left or right side of the line, respectively, and side-ant more generally. The procedure starts by creating a sub-colony with a pre-determined number of pairs of ants (one ant of each pair for each side of the line). Each pair of ants collaborates to build a feasible balancing solution, i.e., an assignment of tasks to workstations on both sides of the line such that all constraints of the problem are satisfied (precedence, zoning, capacity, and synchronism). For each feasible solution obtained, a measure of its quality is computed, according to the problem's objective function. An outline of the way the two ants build a balancing solution is presented in Fig. 14. The procedure starts by initializing the current time of both side-ants, where $ct(a_S)$ denotes the current time of one side-ant and $ct(a_{\bar{S}})$ that of the opposite side-ant.
Figure 14. Procedure to build a balancing solution for two-sided lines.
The procedure then randomly selects one of the sides of the line to begin the assignment. The corresponding side-ant opens a workstation and determines the set of available tasks, i.e., the set of tasks that can be assigned to that workstation starting at the current time. A task is available if it satisfies all of the following conditions: (i) the task side is the same as the current side, or the task can be performed on either side; (ii) the task's predecessors are assigned to an earlier time (if a predecessor is assigned to the opposite side, it must be completed before the current time); (iii) the task assignment to the current workstation does not violate the capacity (i.e., cycle time) constraints; (iv) the task assignment to the current workstation does not violate zoning constraints; and (v) if the task has synchronism constraints, it is possible to assign its mated-task to the opposite side of the line, starting at the same time.

From the set of available tasks, a side-ant selects one to be assigned to the current workstation, starting at the current time. The selection of tasks for assignment is similar to the one described for straight mixed-model assembly lines. In the proposed procedure, side-ants use a timeline to build the balancing solution. Every time a side-ant assigns a task to a workstation, its current time is increased by an amount corresponding to the task processing time. Given the mixed-model nature of the problem, this time is the maximum processing time of that task over all models, in order to ensure that the cycle time is always met, regardless of the model being assembled. Then the current times of both side-ants are compared, resulting in the following courses of action: (i) if the current time of the side-ant is lower than that of the opposite side-ant ($ct(a_S) < ct(a_{\bar{S}})$), the assignment continues on the same side; (ii) if the current time of the side-ant is higher than that of the opposite side-ant ($ct(a_S) > ct(a_{\bar{S}})$), the side is changed; (iii) if the two current times are equal ($ct(a_S) = ct(a_{\bar{S}})$), a side is randomly selected to continue the assignment. When all tasks have been assigned to workstations, the balancing solution is complete and its quality is evaluated using the objective function $Z = 10\,WE - B_b - B_w$.
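The side-switching logic above can be sketched as follows; this is an illustrative reading of the time-comparison rules, not the published 2-ANTBAL code, and names such as `ct` and `assignNextTask` are assumptions.

```java
import java.util.Random;

public class TwoSidedConstruction {
    static final int LEFT = 0, RIGHT = 1;
    static final Random RNG = new Random();

    // Builds a solution by alternating between the two side-ants according to
    // the comparison of their current times ct[LEFT] and ct[RIGHT].
    static void buildSolution(double[] ct, int totalTasks) {
        int assigned = 0;
        int side = RNG.nextBoolean() ? LEFT : RIGHT;     // random starting side
        while (assigned < totalTasks) {
            // assignNextTask is assumed to pick an available task for this side
            // (or open a new workstation) and return its processing time, taken
            // as the maximum over all models so the cycle time is always met.
            double duration = assignNextTask(side);
            ct[side] += duration;
            assigned++;
            int other = 1 - side;
            if (ct[side] < ct[other]) {
                // same side keeps assigning
            } else if (ct[side] > ct[other]) {
                side = other;                            // change side
            } else {
                side = RNG.nextBoolean() ? LEFT : RIGHT; // tie: pick a side at random
            }
        }
    }

    static double assignNextTask(int side) {
        // placeholder for the availability checks (side, precedence, capacity,
        // zoning, synchronism) described in the text
        return 1.0;
    }
}
```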
3.3. Flexible U-Shaped Assembly Lines

The goal of the addressed problem is to design an assembly line flexible enough to cope with different demand scenarios. The line is composed of a set of physical workstations with the tools and equipment required to execute the set of tasks assigned to each workstation. Given the assignment of tasks to the workstations, it is necessary to assign operators (workers) to the line and determine which tasks each operator will perform. The number of operators working on the line is determined by the required production rate: higher production rates imply lower cycle times and, consequently, a larger number of operators in order to meet the demand. According to Monden (1993), a flexible assembly system allows adjustment to different demand volumes simply by reassigning the human resources, i.e., increasing or decreasing the number of operators on the line and defining the new set of tasks that each one has to execute. The operators are multi-skilled, meaning that they are able to perform a wide range of tasks. An appropriate workstation layout is also required to implement this practice effectively; usually, U-shaped layouts are used. Based on these assumptions, the addressed problem has two levels of decision:

(i) Level 1: determine an assignment of tasks to physical workstations (and the corresponding equipment/tools) guaranteeing that the line is able to cope with the worst-case demand scenario, i.e., the scenario where the production rate is maximum.
(ii) Level 2: for each probable demand scenario, determine an assignment of operators to the tasks/workstations in a U-shaped layout, according to the configuration defined in level 1.

A solution of the problem thus consists of a fixed configuration of workstations, arranged in a U-shaped layout, each equipped with the necessary tools to perform the tasks assigned to it, and a set of assignments of operators to tasks, one for each demand scenario.

3.3.1. ACO-based approach

The proposed procedure is based on the previously described procedure ANTBAL and works in two stages, one to address each of the decision levels described in the previous section. The interested reader is referred to Simaria et al. (2008). The first stage aims at finding an assignment of tasks to workstations such that the line is able to respond to all the demand scenarios. This means that the number of installed workstations should be sufficient for the line to attain the maximum production rate over all the probable demand scenarios. The assignment of tasks to the physical workstations defines the equipment and tools that should be available at each workstation. In order to minimize the changes in the line whenever the demand scenario changes, we assume that the physical part of the line remains fixed. Therefore, the only changes in the line will be in the range of tasks performed by the (human) operators, which may increase or decrease the total number of operators working on the line. The definition of the best assignment of operators for each demand scenario is done in the second stage of the proposed procedure.

The goal of the second stage is to determine an assignment of operators to tasks such that the task–workstation configuration defined in the first stage is maintained and a U-shaped layout is used.
The second stage is repeated for every demand scenario of the problem. The existing procedure ANTBAL was modified to produce solutions that meet the constraints of the problem, i.e., to produce assignments of operators to the tasks/workstations (given by the solution of the first stage) in a U-shaped line layout. Given a fixed assignment of tasks to workstations (from the first stage), the goal of the new procedure, called U-ANTBAL, is to assign human operators to the tasks, for each demand scenario, in such a way that an operator can work on both ends of the U-shaped line. The main difference between U-ANTBAL and ANTBAL is in the determination of the set of available tasks. In U-ANTBAL, a task is available to be assigned to an operator if (i) it respects the task–workstation assignments of the first stage; (ii) all its predecessors or all its successors are already assigned to an operator, which allows an operator to be assigned tasks at both ends of the line, i.e., in a U-shaped layout; and (iii) the assignment of the task does not exceed the operator's capacity, where the capacity of an operator is the cycle time for the specific demand scenario.

For the worst-case scenario, i.e., the scenario used in the first stage of the procedure, the solution of the second stage will have one operator performing all the tasks of a workstation, as the cycle time used in both stages is the same. For the remaining demand scenarios, there will be fewer operators than workstations; however, in some cases more than one operator may be assigned to the same workstation. As this situation may cause congestion and confusion among the operators, it is desirable to minimize the probability of its occurrence. The procedure addresses this issue in two ways. First, when building a solution, it assigns an operator to all the tasks of a given workstation before moving on to another workstation. Second, it adds a term to the objective function computing the average number of operators per workstation ($\bar{P}$), given by

$$\bar{P} = \frac{\sum_{k=1}^{S} P_k}{S} \tag{14}$$

where $P_k$ is the number of operators working on workstation k and S is the total number of workstations. While the two terms of the objective function of the first stage are the minimization of the number of workstations and the workload balance between the workstations, the corresponding terms in the objective function of the second stage relate to the operators. $E^{op}$ is the operator line efficiency for model m and demand level v, given by

$$E^{op} = \frac{\sum_{i=1}^{N} t_{im}}{P \cdot C_{mv}} \tag{15}$$

where $t_{im}$ is the processing time of task i for model m, $C_{mv}$ is the cycle time for model m and demand level v, and P is the number of operators working on the line.
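As an illustration of the U-ANTBAL availability test described above, the following sketch checks the three conditions for one task; the structures (`fixedStation`, `assigned`, `predecessors`, `successors`) are assumed for the example and are not from the original paper.

```java
import java.util.List;

public class UAntbalAvailability {
    // A task is available for an operator if it respects the first-stage
    // task-workstation assignment, if all its predecessors OR all its
    // successors are already assigned (the U-line property), and if it fits
    // within the operator's remaining capacity (the scenario cycle time).
    static boolean isAvailable(int task, int station,
                               int[] fixedStation, boolean[] assigned,
                               List<List<Integer>> predecessors,
                               List<List<Integer>> successors,
                               double taskTime, double operatorLoad, double cycleTime) {
        if (fixedStation[task] != station) return false;             // condition (i)
        boolean predsDone = predecessors.get(task).stream().allMatch(p -> assigned[p]);
        boolean succsDone = successors.get(task).stream().allMatch(s -> assigned[s]);
        if (!predsDone && !succsDone) return false;                  // condition (ii)
        return operatorLoad + taskTime <= cycleTime;                 // condition (iii)
    }
}
```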
Similarly to the first stage, the goal of balancing the workloads between operators is reached by the minimization of function $B^{op}$, which varies between a minimum of zero, when the line idle time is equally distributed among all operators, and a maximum of one, when the line idle time is due to a single operator. Function $B^{op}$ is given by

$$B^{op} = \sqrt{\frac{P}{P-1} \sum_{o=1}^{P} \left( \frac{d_o}{IT} - \frac{1}{P} \right)^{2}} \tag{16}$$

where $d_o$ is the idle time of operator o and IT is the total idle time of the line. The global objective function to maximize in the second stage is then

$$Z = 10\,E^{op} - B^{op} - \bar{P}. \tag{17}$$
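For instance, under the reading of Eqs. (16) and (17) reconstructed above, the second-stage objective can be computed from the operator idle times as follows; the figures used are made up purely for illustration.

```java
public class SecondStageObjective {
    // Eq. (16): workload balance between operators, from their idle times.
    static double balance(double[] idle) {
        int p = idle.length;
        double totalIdle = 0.0;
        for (double d : idle) totalIdle += d;
        double sum = 0.0;
        for (double d : idle) {
            double dev = d / totalIdle - 1.0 / p;
            sum += dev * dev;
        }
        return Math.sqrt(sum * p / (p - 1));
    }

    public static void main(String[] args) {
        double[] idle = {2.0, 2.0, 2.0};             // evenly spread idle time -> B = 0
        double eOp = 0.85, pBar = 1.0;               // assumed efficiency and operators/station
        double z = 10 * eOp - balance(idle) - pBar;  // Eq. (17): Z = 7.5 here
        System.out.printf("B_op = %.3f, Z = %.3f%n", balance(idle), z);
    }
}
```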
All the other features of the algorithm are similar to those of ANTBAL.

3.4. Production Line Balancing

The production line balancing problem, in which most of the workstations are machines, has not received a great deal of attention from researchers, in spite of being an important issue in the design of certain types of manufacturing systems. For example, most manufacturing cells can be treated as production lines with flow-line-like behavior. Traditionally, in industries that produce a high variety of components in small batches, a process-oriented production system is used. In this type of system, functionally identical machines are grouped together in departments; parts requiring processing by more than one machine type therefore travel between the relevant departments to be manufactured. This type of production system is very flexible and leads to a high utilization rate of the resources, but it also generates a significant amount of material handling, high work-in-process inventories and long throughput times, and it is difficult to control. On the other hand, when the company operates in a low-variety, high-volume environment, a product-oriented manufacturing system can be employed, in which machines are placed in production lines dedicated to the manufacturing of a specific product. This leads to low material handling costs, work-in-process inventories and throughput times, and production control is made easier. However, this type of system has a major drawback: the lack of flexibility.

The increasing demand for personalized products has led to the development of other types of production systems that combine high flexibility with the advantages of product-oriented manufacturing systems. This has led to the proliferation of the mixed-model assembly lines that can simultaneously assemble different models of a product, as explained previously. Also, cellular manufacturing
has emerged as an alternative that collects the advantages of both product- and process-oriented systems for a high-variety, medium-volume product mix (Burbidge, 1992). In many cellular manufacturing systems, the intra-cellular flow pattern behaves like an unpaced production line in which the parts do not necessarily follow the same unidirectional flow. Most of the studies about unpaced lines and related issues, such as buffer dimensioning, are focused on assembly lines (Jeong and Kim, 2000; Malakooti, 1994). Some other authors consider production lines that include machines, but the analysis is directed at the operational phase of the line, not at the design phase, as well as at the evaluation of its performance (Dallery and Le Bihan, 1999; Gung and Steudel, 1999). In order to design and balance this type of cell, a simulated annealing (SA) approach was developed (Xambre and Vilarinho, 2005). It addresses the following issues:

• minimization of the average idle time, thus reducing the amount of resources used in the production line;
• workload balance between the different units within the same resource type;
• workload balance between all the resources used in the line (note that solutions with a better balance result in smaller buffer sizes).

3.4.1. Problem definition

A set of assumptions and notation must be considered when addressing the problem at hand:

(i) the operation sequence for each part is known;
(ii) the production lot considered is defined as the least common multiple of the different operation lots;
(iii) the processing time ($t_i$) for the operation lot and the resource or resources needed to perform each operation (i = 1, ..., T) are known;
(iv) production is processed through a series of resources (r = 1, ..., R), classified as workers and machines (more specifically, as workers with particular skills and as machines);
(v) the number of units available of each resource type r is $M_r$ (m = 1, ..., $M_r$);
(vi) the cycle time C is defined in accordance with the demand, given a certain planning horizon, as long as C ≥ max{$t_i$}; otherwise C = max{$t_i$};
(vii) the set of operations that cannot be performed before operation i is completed ($S_i$, the successors of operation i) is given by the precedence constraints of the production process;
(viii) a successor of operation i cannot be assigned before operation i is assigned;
(ix) an operation can only be assigned to one unit of a given resource type;
(x) the T operations can be performed on different parts.

Denoting the controllable variables as

$$x_{imr} = \begin{cases} 1, & \text{if operation } i \text{ is assigned to the } m\text{th unit of resource type } r \\ 0, & \text{otherwise} \end{cases}$$

the objective function used in the procedure can be defined as

$$\min Z = \frac{1}{K \cdot C}\sum_{r=1}^{R} I_r + \frac{1}{R}\sum_{r=1}^{R}\left[\frac{M_r}{M_r - 1}\sum_{m=1}^{M_r}\left(\frac{I_{mr}}{I_r} - \frac{1}{M_r}\right)^{2}\right] + \frac{K}{K-1}\sum_{r=1}^{R}\sum_{m=1}^{M_r}\left(\frac{I_{mr}}{\sum_{r=1}^{R} I_r} - \frac{1}{K}\right)^{2} \tag{18}$$

In the first term of expression (18), $I_r$ represents the total idle time for resource type r:

$$I_r = M_r \cdot C - \sum_{i=1}^{T}\sum_{m=1}^{M_r} t_i \cdot x_{imr}$$

and

$$K = \sum_{r=1}^{R} M_r$$

represents the total number of resources used in the production line. By minimizing this first term, the average line inefficiency, the number of resources used in the production line is minimized accordingly; the objective is to use the smallest possible number of units of each resource type.

The second term of the expression measures the workload balance between the different units within the same resource type. The development of this term was based on the concepts of line balancing problems, more particularly on the objective function used in a model proposed by Simaria and Vilarinho (2001). In this term, $I_r$ is, as stated previously, the total idle time for each resource type, and

$$I_{mr} = C - \sum_{i=1}^{T} t_i \cdot x_{imr}$$

represents the idle time of unit m of resource type r.

Finally, the third term of the expression measures the workload balance between all the resources used in the line. By measuring the unbalance between all of the resources, solutions that have a better balance, and consequently result in smaller buffer sizes, are preferred.
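A small sketch of how the three terms of Eq. (18), as reconstructed above, can be evaluated from an assignment matrix; the data layout (`t`, `x` indexed as [i][m][r]) is an assumption made for illustration only.

```java
public class ProductionLineObjective {
    // Evaluates Eq. (18) for an assignment x[i][m][r] of T operations with
    // processing times t[i] to units m = 0..Mr-1 of resource types r = 0..R-1.
    static double objective(double[] t, boolean[][][] x, int[] M, double C) {
        int T = t.length, R = M.length;
        int K = 0;
        for (int mr : M) K += mr;                      // total number of resources
        double[] I = new double[R];                    // idle time per resource type
        double[][] Imr = new double[R][];              // idle time per unit
        double totalIdle = 0.0;
        for (int r = 0; r < R; r++) {
            Imr[r] = new double[M[r]];
            for (int m = 0; m < M[r]; m++) {
                double load = 0.0;
                for (int i = 0; i < T; i++) if (x[i][m][r]) load += t[i];
                Imr[r][m] = C - load;
                I[r] += Imr[r][m];
            }
            totalIdle += I[r];
        }
        double term1 = totalIdle / (K * C);            // average line inefficiency
        double term2 = 0.0;                            // balance within resource types
        for (int r = 0; r < R; r++) {
            if (M[r] < 2 || I[r] == 0.0) continue;     // degenerate cases skipped
            double s = 0.0;
            for (int m = 0; m < M[r]; m++) {
                double dev = Imr[r][m] / I[r] - 1.0 / M[r];
                s += dev * dev;
            }
            term2 += s * M[r] / (M[r] - 1.0);
        }
        term2 /= R;
        double term3 = 0.0;                            // balance between all resources
        for (int r = 0; r < R; r++)
            for (int m = 0; m < M[r]; m++) {
                double dev = Imr[r][m] / totalIdle - 1.0 / K;
                term3 += dev * dev;
            }
        term3 *= K / (K - 1.0);
        return term1 + term2 + term3;
    }
}
```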
Due to the complexity of this problem, and the need to construct and test many different solutions, a simulated annealing procedure was developed, as described in the next section.

3.4.2. Simulated annealing procedure

The simulated annealing (SA) algorithm was initially presented by Kirkpatrick et al. (1983) and attempts to solve hard combinatorial optimization problems through controlled randomization. The SA procedure is summarized in Fig. 15. Basically, the algorithm evolves from an initial solution for the problem, $S_0$. In the inner cycle of the algorithm, repeated while n ≤ L, a neighboring solution $S_n$ of the current solution S is generated. If $S_n$ is better than S ($\Delta = f(S_n) - f(S) \leq 0$), the generated solution replaces the current solution; otherwise, it is accepted with a certain probability, $p = e^{-\Delta/T}$. The value of the temperature T decreases at each iteration of the outer cycle of the algorithm, which diminishes the probability of accepting worse solutions as the current solution. Throughout the algorithm the best solution found ($S^*$) is always kept, and the generation of neighboring solutions requires that two consecutive solutions be different ($S_n \neq S_{n-1}$). The most important characteristic of this algorithm is the possibility of accepting worse solutions, which allows it to escape from local minima. Nonetheless, the performance of the algorithm depends on the definition of several control parameters: the initial temperature ($T_0$), the temperature-reducing function, the length

Figure 15. The SA procedure (flowchart: generate a neighboring solution, accept it if $\Delta \leq 0$ or with probability $e^{-\Delta/T}$ otherwise, reduce T when the temperature level of length L is exhausted, and stop when the stopping criterion is met).
of each temperature level (L), and the stopping criterion. Naturally, each of these control parameters must be tuned to the specific problem at hand. Two other important issues that need to be defined when adapting this general algorithm to a specific problem are the procedures used to generate the initial solution and the neighboring solutions. Here, the initial solution is created by a greedy heuristic rule: tasks are assigned to the required resources according to the precedence restrictions, using a longest-processing-time-type rule. Neighboring solutions are derived either by choosing a random operation and moving it to another unit of the same resource type, or by exchanging two randomly chosen operations; each of these two types of move has a 50% chance of occurring. Naturally, these moves can generate non-admissible solutions (because of the precedence restrictions), so each new solution must be checked for admissibility. The stopping criterion considers that if 85% of the generated solutions are rejected in five consecutive temperature levels, the probability of replacing the best solution found is very small, and the procedure is then terminated.

This model completes the model management subsystem included in the DSS. With this set of procedures, the user of the system is able to address the different types of assembly and production line balancing problems that may arise in an industrial facility.
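The following is a generic sketch of the SA loop of Fig. 15 with the neighborhood and stopping rule described above; the solution representation and the functions `initialSolution`, `neighbor` and `f` are assumptions, not the authors' code.

```java
import java.util.Random;

public class SimulatedAnnealing {
    static final Random RNG = new Random();

    static double[] run(double t0, double coolingRate, int L) {
        double[] s = initialSolution();
        double[] best = s.clone();
        double temperature = t0;
        int consecutiveRejectedLevels = 0;
        while (consecutiveRejectedLevels < 5) {          // stopping criterion
            int rejected = 0;
            for (int n = 1; n <= L; n++) {               // inner cycle at fixed T
                double[] sn = neighbor(s);               // move or exchange, 50/50
                double delta = f(sn) - f(s);
                if (delta <= 0 || RNG.nextDouble() < Math.exp(-delta / temperature)) {
                    s = sn;                              // accept (possibly worse)
                    if (f(s) < f(best)) best = s.clone();
                } else {
                    rejected++;
                }
            }
            // a level counts as "rejected" when at least 85% of its moves failed
            consecutiveRejectedLevels = (rejected >= 0.85 * L) ? consecutiveRejectedLevels + 1 : 0;
            temperature *= coolingRate;                  // reduce T (outer cycle)
        }
        return best;
    }

    static double[] initialSolution() { return new double[] {0}; } // greedy LPT-style seed (placeholder)
    static double[] neighbor(double[] s) { return s.clone(); }     // admissible random move (placeholder)
    static double f(double[] s) { return 0.0; }                    // Eq. (18) objective (placeholder)
}
```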
4. Conclusions

In this chapter, a decision-support system for balancing assembly and production lines was presented. The need for such a system arises from the current market environment, which often leads to frequent changes in the allocation of manufacturing resources. As a result, decisions related to manufacturing systems design, namely assembly and production line balancing, are now addressed more frequently and thus require tools to support them. Therefore, a set of models for the design of assembly and production lines was incorporated into a decision-support system that helps the decision maker not only to interact with the models, but also to generate and compare different solutions to the problem. The system is directed at specialist users with a good understanding of the problems and algorithms. Nonetheless, it assists the decision maker in selecting the most suitable production system configuration according to the characteristics of the problem. The user-friendly interface allows an easy and intuitive use of the system, as well as a quick evaluation of the system's data and outputs. The system has been used to aid the design and re-design of assembly lines in several industrial companies under collaboration programs between the University of Aveiro, Portugal, and regional companies.
Future research should include the development of a visual interactive simulation module for the DSS, in order to capture the dynamic behavior of the assembly or production line and address the variability associated with task processing times and resource utilization.

Acknowledgments

This research was supported by Fundação para a Ciência e a Tecnologia, Programa Operacional Ciência e Inovação 2010 and FEDER (POCI/ECO/60356/2004).

References

Becker, C and A Scholl (2006). A survey on problems and methods in generalized assembly line balancing. European Journal of Operational Research, 168, 694–715.
Burbidge, JL (1992). Change to group technology: Process organization is obsolete. International Journal of Production Research, 30, 1209–1219.
Dallery, Y and H Le Bihan (1999). An improved decomposition method for the analysis of production lines with unreliable machines and finite buffers. International Journal of Production Research, 37, 1093–1117.
Dorigo, M and L Gambardella (1997). Ant colony system: A cooperative learning approach to the traveling salesman problem. TR/IRIDIA/1996-5, Université Libre de Bruxelles, Belgium.
Dorigo, M, V Maniezzo and A Colorni (1996). The ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 26, 1–13.
Ghosh, S and RJ Gagnon (1989). A comprehensive literature review and analysis of the design, balancing and scheduling of assembly systems. International Journal of Production Research, 27, 637–670.
Gung, RR and HJ Steudel (1999). A workload balancing model for determining set-up time and batch size reductions in GT flow line workcells. International Journal of Production Research, 37, 769–791.
Jeong, K-C and Y-D Kim (2000). Heuristics for selecting machines and determining buffer capacities in assembly systems. Computers & Industrial Engineering, 38, 341–360.
Kim, YK, Y Kim and YJ Kim (2000). Two-sided assembly line balancing: A genetic algorithm approach. Production Planning and Control, 11, 44–53.
Kirkpatrick, S, C Gelatt and M Vecchi (1983). Optimization by simulated annealing. Science, 220, 671–680.
Larman, C (2004). Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development. Prentice Hall.
Leu, Y, LA Matheson and LP Rees (1994). Assembly line balancing using genetic algorithms with heuristic-generated initial populations and multiple evaluation criteria. Decision Sciences, 25, 581–606.
Malakooti, BB (1994). Assembly line balancing with buffers by multiple criteria optimization. International Journal of Production Research, 32, 2159–2178.
Monden, Y (1993). Toyota Production System. Norcross: Institute of Industrial Engineers.
Scholl, A (1999). Balancing and Sequencing of Assembly Lines. Heidelberg: Physica-Verlag.
Scholl, A and C Becker (2006). State-of-the-art exact and heuristic solution procedures for simple assembly line balancing. European Journal of Operational Research, 168, 666–693.
Simaria, AS and PM Vilarinho (2001). The simple assembly line balancing problem with parallel workstations – a simulated annealing approach. International Journal of Industrial Engineering, 8, 230–240.
Simaria, AS and PM Vilarinho (2004). A genetic algorithm based approach to the mixed model assembly line balancing problem of type II. Computers & Industrial Engineering, 47, 391–407.
Simaria, AS and PM Vilarinho (2007). 2-ANTBAL: An ant colony optimisation algorithm for balancing two-sided mixed-model assembly lines. Computers & Industrial Engineering, 56, 489–506.
Simaria, AS, M Zanella de Sá and PM Vilarinho (2008). Meeting demand variations by using flexible U-shaped assembly lines. International Journal of Production Research, 47, 3937–3955.
Turban, E and J Aronson (1998). Decision Support Systems and Intelligent Systems. Prentice Hall.
Vilarinho, PM and AS Simaria (2002). A two-stage heuristic method for balancing mixed-model assembly lines with parallel workstations. International Journal of Production Research, 40, 1405–1420.
Vilarinho, PM and AS Simaria (2006). ANTBAL: An ant colony optimization approach for balancing mixed model assembly lines with parallel workstations. International Journal of Production Research, 44, 291–303.
Xambre, AR and PM Vilarinho (2005). Balancing production lines. In Proceedings of the 35th International Conference on Computers & Industrial Engineering, 2083–2088.
Biographical Notes

Ana Sofia Simaria is currently a Research Associate at the Department of Biochemical Engineering of University College London (United Kingdom), working on the development of combinatorial optimization models for the bioprocessing industry. She has an undergraduate degree in Industrial Engineering and Management (University of Aveiro, Portugal), an MSc in Quantitative Methods for Management (University of Porto, Portugal) and a PhD in Industrial Management (University of Aveiro, Portugal). She has published in leading journals of the industrial engineering area such as the International Journal of Production Research, Computers & Industrial Engineering and the International Journal of Industrial Engineering.

Ana Raquel Xambre holds an undergraduate degree in Industrial Engineering and Management (University of Aveiro, Portugal), a post-graduate course in Quantitative Methods for Management (University of Porto, Portugal) and is currently finishing her PhD in Industrial Management (University of Aveiro, Portugal). She is
a lecturer at the Department of Economics, Management and Industrial Engineering of the University of Aveiro, where she teaches Operations Management and Simulation. Her research interests include operations management and operational research.

Nelson Saldanha Filipe holds an MSc in Computer Engineering and Telematics (University of Aveiro, Portugal). In his Master's thesis project he was part of the RoboCup middle-size league robotic soccer team CAMBADA (IEETA, University of Aveiro, Portugal), developing high-level team coordination and strategy. The team attended the RoboCup 2008 World Championship in Suzhou, China, where it won first place and became World Champion for the first time. His research interests include artificial intelligence, computer graphics and game design.

Pedro Manuel Vilarinho is currently Project Manager at COTEC Portugal, where he is responsible for several initiatives, namely those aimed at fostering an entrepreneurial culture among Portuguese undergraduate and graduate students and those leading to the creation of high-tech/high-growth ventures. He is on leave from the University of Aveiro, where he is Assistant Professor at the Department of Economics, Management and Industrial Engineering. He has an undergraduate degree in Electronics and Telecommunications Engineering from the University of Aveiro, an MSc in Computer Science in Industrial Engineering from the University of Coimbra and a PhD in Industrial Engineering from the University of Porto. His main research area is operations management, and he publishes regularly in international scholarly journals such as the International Journal of Industrial Engineering, of which he is a member of the Editorial Board.
Chapter 17
An Innovation Applied to the Simulation of RFID Environments as Used in the Logistics

MARCELO CUNHA DE AZAMBUJA*,‡, CARLOS FERNANDO JUNG†,§, CARLA SCHWENGBER TEN CATEN†,¶ and FABIANO PASSUELO HESSEL*

* Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Ipiranga 6681, Prédio 30, Bloco 4, Porto Alegre, RS, Brasil
† Universidade Federal do Rio Grande do Sul (UFRGS), Osvaldo Aranha 99, 5 andar, Porto Alegre, RS, Brasil
‡
[email protected] §
[email protected] ¶
[email protected] [email protected]
This chapter presents the results of experimental research for the development of an innovative product designated as the software for the Simulation of Radio Frequency Identification (RFID) Environments, or RFID-Env. The tool is designed for use by professionals in computer systems and plant engineering who are engaged in the research and development of RFID systems as applied to the management and operation of logistic supply chains. RFID-Env makes it possible to simulate on a computer a complete RFID environment by processing user data on the technical and physical characteristics of real or virtual RFID environments. Its output can include descriptions of the performance to be expected from a given configuration and detailed reports on whether that particular configuration will succeed in reading all the RFID tags flowing through a defined system. In arriving at these results, RFID-Env considers the anti-collision communication protocols utilized by the tags, the quantity of tags to be read in a given period, the temperature at the location, the distance between the tags and reader antennas, and the velocity of exposure of the tags to readers. The software includes a library of communication protocols (ACPL, or Anti-Collision Protocol Library) covering the four most frequently used RFID standards of the International Organization for Standardization (ISO 18000-6).

Keywords: Radio Frequency Identification (RFID); simulation of RFID environments; anti-collision algorithms for electronic tags; ISO 18000-6 standards; EPCglobal Gen2 standard.
1. Introduction

The industry's major challenge in logistic chains is the need to constantly optimize processes in order to produce goods or services as quickly and efficiently as possible, at the right place and precisely as desired by clients (Rosa, 2006). Nowadays, the variety of products offered in the marketplace adds to the complexity of managing the flow of information along the productive supply chain and obliges manufacturers to introduce new technologies that facilitate logistical operations (Ngai et al., 2005). Radio Frequency Identification (RFID) is attracting attention and interest from industrial and commercial enterprises because the system has the potential to simplify the process of automatic product identification and improve its efficiency (Prado et al., 2006).

Brock (2008) states that the principal component of RFID technology is the intelligent ticket, or tag, which is affixed to the product. The information electronically written on the tag is read by electromagnetic radiation and passed to a radio transmitter, where a radio frequency (RF) carrier transmits the data wirelessly to a distant receiver (reader) capable of interpreting and registering the information (Ygal et al., 2006). Prado et al. (2006) consider that an RFID system is made up of basically three elements: (i) the tickets, or tags; (ii) the electromagnetic and data readers; and (iii) a series of computer programs. Brock (2008) agrees with this, adding that the operation of the RFID system depends on an electronic ticket that is affixed to each product and has a unique digital identity, known as the Electronic Product Code (EPC). When the tag is interrogated by the external electronic reader, the data recorded in the ticket's memory are recovered and transmitted. This memory consists of an integrated circuit, or microchip, and has the capacity to store a great deal of information such as, among others, (i) the electronic code of that particular product; (ii) the product reference number; (iii) the respective production data; (iv) the delivery date; (v) the validity period; and (vi) information on the supplier (Atkinson, 2004).

RFID technology can be utilized in many different ways, and its field of application is growing exponentially (Brock, 2008). Among the improvements that the system can provide in logistical operations, Prado et al. (2006) emphasize (i) greater availability of products; (ii) better profit margins due to cost reduction; (iii) improved worker operational efficiency; (iv) reduction in inventory losses; (v) reduction in stock levels; (vi) reduced technical assistance costs; and (vii) better industrial or commercial layouts of the installations. Ygal et al. (2006) agree, and state that if conventional business processes are compared with those based on RFID technology, the new technology's impact will be seen principally at the strategic level. Effects include (i) the development of new business models; (ii) the integration of activities; and (iii) the reengineering and automation of older processes, thereby facilitating B2B commerce.
However, in spite of the many promising applications of RFID technology in the supply chain, various technological difficulties still prevent the large-scale use of this system (Prado et al., 2006). The technology has functional problems that have been widely discussed by software developers and production engineers, and which have required constant research into technological improvements. These problems are (i) "collisions" caused by the simultaneous communication of two or more tickets being read; (ii) electromagnetic interference; (iii) insufficient range of the RF carriers; (iv) difficulties in finding the ideal positioning; and (v) the number of antennas required by the electronic readers (Cheng and Jin, 2007; Hassan and Chatterjee, 2006; Myung and Lee, 2006).

Considering these problems, this chapter presents the results of experimental research conducted to obtain data for the development of an innovative product: the Software for Simulation of RFID Environments (RFID-Env). This product is designed for use by developers of computer system applications and research and development (R&D) engineers working on RFID problems, and it makes it possible to simulate a complete RFID environment away from the factory floor. The remainder of the chapter is organized in the following manner: Section 2 presents the methodology and the procedures used in the research, while Section 3 describes the proposed system and an experimental, simulated application. Section 4 demonstrates the results obtained by describing the physical parameters considered in the system and the kinds of environments that can be simulated, such as "conveyor mode" or "portal mode". Finally, Section 5 lists the conclusions of our study.

2. Methodology

Software development is classified as R&D and involves the realization of scientific and technological advances in a systematic manner (OCDE, 2007). Our research was of an experimental nature and resulted in the construction of a prototype. A prototype is defined by OCDE (2007) as an original model that includes all the technical characteristics and functions of the new product.

2.1. Methodological Procedures

The development of RFID-Env was based on high-level abstract models that can represent all the relevant parameters, enabling the developer to configure tests in the way best suited to the environment to be simulated. The software includes a library of communication protocols called the Anti-Collision Protocol Library (ACPL) that covers the four ISO standards for the area (ISO 18000-6), which are (i) ALOHA LST; (ii) ALOHA FST; (iii) Btree; and (iv) Random Slotted (Q Algorithm). In addition, the protocol library includes Calculated Q, an improved version of Random Slotted from the most recent edition of the ISO standards. The software was written in the Java programming
language, and the developer can readily extend coverage with new communication protocols. Proper operation of RFID-Env was confirmed by running it under various test conditions with the different protocols proposed by the ISO.

Efforts to use RFID in the control of supply chains and consumer goods have been mostly directed at Ultra High Frequency (UHF) passive tags (ISO 18000-6) (Borrielo, 2005; Curtin et al., 2007; Hassan and Chatterjee, 2006; Weinstein, 2006). This is due to their size, their reading capacity at distances of about 5 m, and the ability to control their reading area by adjusting the antennas' direction and the interrogator's configuration. For this reason, our development concentrated on the physical characteristics and anti-collision algorithms used in these tickets. In the simulated environment developed, we considered the anti-collision communication protocol impressed on the tickets, the quantity of tickets to be read in a specified time, the ambient temperature, the distance between the tickets and the reading antennas, and the speed at which the tickets pass by the readers. To use RFID-Env, the operator merely has to enter the physical and technical characteristics of the RFID environment, and the simulator will generate reports predicting whether all the tickets will be read correctly for that particular configuration.

2.2. The Proposed System

2.2.1. Description

When the RFID-Env software simulates the reading of ISO 18000-6 standard type A, B, and C tickets and their respective anti-collision communication protocols, it respects the manner in which these devices work: there are processes involving interrogator operation and others related to ticket functioning. A few processes were implemented specifically for the simulation, such as the process that generates the unique code identifier, or Unique Identifier (UID), of each ticket. In a typical real-life system, the tickets already have a UID value at the moment the protocols are executed, whereas in RFID-Env a random number generator produces the initial simulated UID numbers and allocates to each ticket the corresponding simulated UID value. This value can vary between 16 bits (in ISO 18000-6 C) and 64 bits (in ISO 18000-6 B) (ISO/IEC 18000-6, 2006a,b). During the process of identifying the tickets, the protocol may use only a portion of the data contained in the ticket's memory. In ISO 18000-6 A (2006a), a Sub-Identifier (SUID) of 40 bits is transmitted. In ISO 18000-6 B (2006b), however, the whole 64-bit UID is sent, and in ISO 18000-6 C a random 16-bit value called the RN16 is sent out exclusively for the anti-collision process.

The work environment of RFID-Env is divided into three windows: Simulator, Single Mode and Portal Mode. On the initial screen (see Fig. 1), the user specifies which anti-collision protocol he/she wishes to use in testing a number of tickets in an environment. Depending on the protocol selected, he/she provides some specific parameters, such as the starting sizes of the frames in the ALOHA-type protocols utilized by the ISO 18000-6 A and C standards. The user may also select the total number of
Figure 1. Initial window of RFID-Env and selection of the type of environment.
executions (to facilitate the generation of statistical averages of the results) and the format of the printout (which is generated as an ASCII file). On the initial screen, one can select the Advanced Simulation option, where the user chooses a system operational mode to simulate: Single Mode or Portal Mode. If the user only wants to test whether the algorithms are functioning, this option may be ignored, together with the respective tabs. But for tests where the physical characteristics of the ticket (for example, whether the material of the ticket base is plastic or wood) and environmental variations (such as the number of readers and antennas, the speed of the passage of the tickets in front of the antennas, and the temperature) are to be considered, the user must select either Single or Portal Mode. Figure 2 shows the Portal Mode screen after the user has selected Advanced Simulation, with the options corresponding to this mode.
Figure 2. Portal Mode screen of RFID-Env.
As soon as the user inputs the number of tickets present in the environment, the UID generation process is executed and a unique code is attributed to each ticket in the virtual environment. The environment generated in RAM and the five principal stages of the simulation process are illustrated in Fig. 3.
Figure 3. Simulation environment and the initial stages of RFID-Env operation.
When execution starts, the user is requested to input the number of tickets to be simulated (see Stage 1 in Fig. 3). With this information, the software creates a slot in memory where the values for each ticket will be stored (Stage 2). When this is complete, the process of generating the UIDs for the tickets commences (Stages 3 and 4). At Stage 5, the process that runs the specific anti-collision protocol algorithm to be simulated is initiated and begins to interact with the tickets. The interrogator process starts up simultaneously.
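As a minimal sketch of the UID-generation step (Stages 3 and 4), assuming 64-bit identifiers as in ISO 18000-6 B; the class and method names are illustrative, not taken from RFID-Env.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class UidGenerator {
    static final Random RNG = new Random();

    // Produces one unique random 64-bit UID per simulated ticket,
    // formatted as hexadecimal like the identifiers shown in Fig. 4.
    static String[] generate(int numberOfTickets) {
        Set<Long> seen = new HashSet<>();
        String[] uids = new String[numberOfTickets];
        for (int k = 0; k < numberOfTickets; k++) {
            long uid;
            do { uid = RNG.nextLong(); } while (!seen.add(uid)); // re-draw on duplicates
            uids[k] = String.format("%016X", uid);
        }
        return uids;
    }
}
```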
2.2.2. Experimental operation

Having obtained the inputs on the type and quantity of the RFID sensors, the system performs a simple simulation of reading the tickets using the anti-collision protocols. As an example, we demonstrate the report of a simulation of reading 10 tickets using the ISO 18000-6 B standard (Btree anti-collision protocol). The algorithm utilized by the Btree is significantly different from its ALOHA counterparts in that it does not include the concept of the communication frame size, or round size (ISO/IEC 18000-6, 2006a). The other three ISO protocols require initial round sizes to be specified, and the Btree approach has the advantage of eliminating this necessity. Moreover, while the algorithms utilized by the ISO 18000-6 A LST and FST standards perform very poorly when reading more than 256 tickets (the maximum round size), the Btree does not suffer from this limitation (ISO/IEC 18000-6, 2006a,b; Shih et al., 2006). On the other hand, in the Btree the first iterations of the algorithm, that is, the first communications between the tickets and the reader, tend to generate many collisions, as can be seen in the RFID-Env utilization example presented below. When the number of tickets to be simulated is provided, RFID-Env executes the algorithm and generates the outputs, with totals given in the final simulation results at the end of the report (see Fig. 4).

By analyzing the outputs generated by RFID-Env, as shown in Fig. 4, the algorithm utilized by the Btree can be better understood. In the first iteration, the 10 tickets in the environment try to transmit their information simultaneously, causing collisions among the signals directed to the reader that impede the reading. The first reaction of all the tickets after a failed attempt to communicate is to randomly select a value of 0 or 1. The tickets that draw a 1 increase their slot counter and may try to re-transmit when this counter returns to 0. The tickets that draw a 0 do not increase their counters and have another opportunity to transmit their data at the next iteration of the process. If more than one ticket draws a 0 (as in the second iteration in Fig. 4), these tickets must once again draw a 1 or 0, while the tickets that already have a 1 in the counter (the other three tickets in the environment) must again increase the value by one more unit.
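The counter mechanics just described can be sketched as a small simulation; this is an illustrative model of the binary-tree protocol as explained in the text, not the actual RFID-Env code, and the structures used are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class BtreeSimulation {
    static final Random RNG = new Random();

    // Simulates the Btree anti-collision rounds for n tickets and returns
    // the number of iterations (slots) needed until every ticket is read.
    static int simulate(int n) {
        int[] counter = new int[n];              // slot counter of each ticket
        boolean[] read = new boolean[n];
        int remaining = n, iterations = 0;
        while (remaining > 0) {
            iterations++;
            List<Integer> replying = new ArrayList<>();
            for (int i = 0; i < n; i++)
                if (!read[i] && counter[i] == 0) replying.add(i);
            if (replying.size() == 1) {
                read[replying.get(0)] = true;           // successful read
                remaining--;
                for (int i = 0; i < n; i++)             // others move one slot closer
                    if (!read[i] && counter[i] > 0) counter[i]--;
            } else if (replying.isEmpty()) {
                for (int i = 0; i < n; i++)             // empty slot: counters decrease
                    if (!read[i] && counter[i] > 0) counter[i]--;
            } else {
                for (int i : replying)                  // collision: colliders draw 0 or 1
                    counter[i] = RNG.nextBoolean() ? 1 : 0;
                for (int i = 0; i < n; i++)             // non-colliders back off by one
                    if (!read[i] && !replying.contains(i)) counter[i]++;
            }
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println("Slots used for 10 tickets: " + simulate(10));
    }
}
```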
-----------------|BTREE|-----------------
Iteration #1, Tags that replied:
E0E069FEC02185EC20 | E0B4909015F4627A76
E08C1013BF9FCB2BAE | E059592730460AF424
E0BB6B2EA85B402E70 | E0D49AC0DD20AEE2E7
E041E50008FC2F6965 | E096E4892AAEA7DCF9
E03783A6E0D9449A7D | E0A77048EDBEF83434
--Iteration #2, Tags that replied:
E059592730460AF424 | E0BB6B2EA85B402E70
E0D49AC0DD20AEE2E7 | E041E50008FC2F6965
E096E4892AAEA7DCF9 | E03783A6E0D9449A7D
E0A77048EDBEF83434
--Iteration #3, Tags that replied:
E041E50008FC2F6965 | E03783A6E0D9449A7D
E0A77048EDBEF83434
--Iteration #4, Tags that replied:
E0A77048EDBEF83434
--Iteration #5, Tags that replied:
E041E50008FC2F6965 | E03783A6E0D9449A7D
...
--Iteration #11, Tags that replied:
No tag replied
...
--Iteration #27, Tags that replied:
E08C1013BF9FCB2BAE
---------- Performance Report ----------
Tags: 10
Iterations: 27
Iterations with tag collision: 13
Iterations with no tag reply: 4
Figure 4. Report of execution of the Btree protocol, with 13 collisions, 4 empty slots, and a total of 27 slots utilized for communication of 10 RFID tickets.
These steps are repeated until only one ticket has a 0 and all the others a 1, which happens in the example at iteration #4. The many collisions taking place in the first iterations of the Btree algorithm are typical of this protocol. As the iterations proceed, the ticket counters increase and, little by little, the transmissions normalize. At the bottom of Fig. 4, the Performance Report shows that 27 iterations between the tickets and the reader were necessary for the 10 tickets to succeed in transmitting their data; 13 iterations (slots) were occupied with collisions, and 4 slots remained vacant because no ticket had a zero in its counter when those iterations took place.

3. Results

3.1. Physical Parameters Considered by the RFID-Env System

This section presents the physical parameters considered by RFID-Env, the way they are analyzed, and the influence of these parameters on the final results of the simulations.
3.1.1. Maximum reach for reading, velocity, and total exposition time

The maximum distance for proper reading of the ticket varies because of the following principal factors: (i) the speed of the ticket's passage; (ii) the material onto which the ticket antenna is mounted; and (iii) the presence of magnetic and physical interference in the environment. On average, UHF tickets can operate at distances of 5 m between the ticket and the interrogator antenna, but the range may vary between 3.65 and 10.66 m, depending on the frequency of operation and the ticket material (Cheng and Jin, 2007; Friedrich, 2005; INTERMEC, 2007).

The RFID-Env system considers these values in the following manner: given the average read reach of the tickets and the moving speed of the ticketed product, we can calculate how long each ticket is exposed to the interrogator. From the values of reading speeds found in ISO/IEC 18000-6 (2006a, 2006b), Friedrich (2005) and ATMEL (2007), and the quantity of bits to be read from each ticket, the following condition must be satisfied:

$$\text{Total exposure time} \geq \text{time to read}(y_{\text{bits}} \times n_{\text{tags}}) \tag{1}$$

where Total exposure time is the total time during which the group of tags is within the interrogator's reading distance, $y_{\text{bits}}$ is the number of bits of each tag, and $n_{\text{tags}}$ is the number of tags to be read.

To calculate the total time that each ticket is exposed to the interrogator, it is first necessary to determine the maximum reading reach to the right and left of the interrogator antenna. Consider an environment based on a conveyor belt moving from left to right, with an antenna pointed directly at the tickets: there is a point (T1 in Fig. 5) where the tickets enter the leading edge of the interrogator's reading window. Point T2 is directly in front of the interrogator, at half the distance traveled by the ticket within the reading area. As a corollary, a point T3 also exists, to the right of T2, where the ticket moves out of the reading range of the interrogator. It is the sum of these two distances that gives the total exposure, in meters (m), of the group of tickets
Calculation of the time and distance of exposure.
March 15, 2010
424
14:45
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch17
M. C. de Azambuja et al.
to the reader. Dividing this value by the velocity (m/s) of the ticket, one can obtain the total period of exposure. This is the time x cited in Equation (1) in which the interrogator should succeed in reading all the bits of all the tickets on each of the moving packages. RFID-Env users need to supply somewhat simpler information about the environment, since this is generally known in advance. Figure 6 shows the input screen in Single Mode, where the following information is required: (i) temperature; (ii) ticket material; (iii) maximum reading reach; (iv) speed of ticket reading; (v) speed of movement of the tickets; and (vi) frontal distance between the interrogator antenna and the tickets (that is, the distance (c), or the minor cathetus of the rectangular triangle shown in Fig. 5). It should be noticed that the calculation of the total time necessary for reading a group of tickets is performed after the simulation of a complete reading of a group of tickets with an anti-collision protocol selected on the initial RFID-Env screen. Given the total number of slots necessary for reading the group of tickets, the simulator will calculate: t total = total number of slots × each slot reading time.
(2)
t total is the total reading time for a group of tags; total number of slots is the number of slots generated in a simulation; each slot reading time is the amount of bits to be read in each tag divided by the reading speed value of the tags. With the information obtained from the interface in Single Mode, RFID-Env can calculate the distance of the cathetus (a), in view of the fact that the maximum
Figure 6.
Data for the analysis of the environment in Single Mode.
March 15, 2010
14:45
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch17
An Innovation Applied to the Simulation of RFID Environment
425
ticket reading distance is the length of the hypotenuse (b). The frontal distance between the interrogator antenna and the tickets refers to the cathetus distance (c). This configuration of the environment represents a geometric situation in the form of a right-angle triangle whose measurements may be calculated using Pythagoras’ Theorem. In Fig. 5, the formula used by RFID-Env for calculating the cathetus (a) is the following: a2 = b2 + c2 .
(3)
The interface of Fig. 6 shows the type of material selected (plastic) and the maximum distance of 4 m typical for this material (INTERMEC, 2007). Depending on the material selected by the user, the reading distance of that material is shown in the Tag Read Range box. But as this value may vary from manufacturer to manufacturer and for other physical reasons, this distance is merely suggested by RFID-Env and the user can alter it to his/her convenience to align it with the technical information provided by his/her ticket supplier. This applies also to the ticket-reading speed — RFID-Env suggests a speed of 40 kbs but the manufacturers’ figures may be different. Given this information, the RFID-Env can predict situations where it would be physically impossible to read a stated number of tickets. An example is given in the report shown in Fig. 7 — here the information supplied by the user was: (i) maximum reading distance (hypotenuse) of 4 m; (ii) reading speed 40 kbs; (iii) ticket movement of 5 m/s; and (iv) frontal movement (cathetus c) of 2 m. This data referred to a group of 1000 tickets (e.g. a box containing 1000 products moving on a conveyer belt) using the ISO 18000-6 protocol. With this configuration, RFID-Env would state that the system might not work. Some potential solutions to the problem detected by the simulator and described in Fig. 7 include (i) changing the communication protocol to reduce the number of slots necessary; (ii) reducing the number of tickets to be read “simultaneously” (i.e., the number of tickets in one package); (iii) slowing down the transport speed; -------------------|ISO 18000-6 C Protocol |------------ Performance Report -Tags: 1000 Slots needed for all tags to reply: 2822 Slots with tag collision: 727 Empty Slots (with no tag reply): 1095 Total required time to read all tags: 2.82 seconds (worst case) Calculated Total Exposition Time: 1.38 seconds ------------------------------------------------------With this configuration the group of tags could not be fully read. -------------------------------------------------------
Figure 7.
RFID-Env Red Alert — Reading is impossible for physical reasons.
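The feasibility check behind this Red Alert can be reproduced in a few lines. The following Python fragment is an illustrative sketch, not the simulator's actual code: it assumes that the readable stretch of the conveyer is 2a, with a obtained from Equation (3), and that each slot carries 40 bits, a figure consistent with the report above (2822 slots read in 2.82 s at 40 kbs).

import math

def exposure_time(b_hyp_m, c_front_m, speed_m_s):
    # A ticket is readable while it is within range b of the antenna;
    # by Equation (3) the readable stretch of the belt is a on each
    # side of the antenna's frontal axis, i.e. 2a in total.
    a = math.sqrt(b_hyp_m ** 2 - c_front_m ** 2)
    return 2 * a / speed_m_s

def total_read_time(num_slots, bits_per_slot, read_speed_bit_s):
    # Equation (2): t_total = total number of slots x each slot reading time.
    return num_slots * bits_per_slot / read_speed_bit_s

t_exposure = exposure_time(4.0, 2.0, 5.0)       # ~1.39 s of exposure
t_required = total_read_time(2822, 40, 40000)   # ~2.82 s needed (worst case)
if t_required > t_exposure:
    print("Red Alert: the group of tags could not be fully read.")

With these inputs the required time exceeds the exposure window by roughly a factor of two, which is exactly the situation the report flags.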
Some potential solutions to the problem detected by the simulator and described in Fig. 7 include: (i) changing the communication protocol to reduce the number of slots necessary; (ii) reducing the number of tickets to be read "simultaneously" (i.e., the number of tickets in one package); (iii) slowing down the transport speed; (iv) altering the distance between the tickets and the reader; or, finally, (v) a combination of some or all of the above.

3.1.2. Temperature

The environmental temperature has a profound effect on the duration that the registers can store the "0" and "1" logic values (ATMEL, 2007). Basically, the registers can increase their storage time when the temperature falls below 25°C, but above that temperature, the registers' storage time is reduced by more than 8 s. In addition, operation may cease altogether at some temperatures, and RFID-Env is designed to consider this factor. Thus, if the user specifies a value outside the temperature limits supported by typical circuits, an Alert Signal generated by RFID-Env informs the user of this. As development work continues on RFID-Env, the instrument will be enabled to take into consideration the time that register values can be maintained at different temperatures. This information may be related to the read speed of each ticket, the length of time the value is exposed to the electronic reader, the number of tickets present in the environment, and physical interferences which cause the tickets to be without power for brief periods. Because of these relationships, environmental temperature may strongly influence simulation results.

3.2. Configuration of the RFID-Env for the Type of Environment

Taking into consideration some of the typical environments in which RFID systems are installed, the RFID-Env user can simulate the reading of a group of tickets in three ways: (i) by ignoring possible variations in the environment (Simulator Mode); (ii) conveyer belt mode with an antenna (Single Mode); and (iii) portal mode for pallets (Portal Mode). The process of simulation starts in the principal window as shown in Fig. 1. If the user only wants to simulate the operation of one or more anti-collision protocols without considering the physical variations in the environment, only this interface needs to be used. While still in the principal window, it is possible to visualize the selection field for the type of environment: (i) None (simulation without environmental variables); (ii) Single Mode; or (iii) Portal Mode. The respective tabs are activated or not in accordance with the mode selected. If a user desires to test a conveyer belt environment, he/she should select Single Mode. This will cause the respective window to be activated and information specific to that kind of environment to be requested, such as (i) temperature in degrees Celsius; (ii) type of material onto which the RFID chip and ticket antenna are mounted; (iii) read speed (in kbs) of the ticket in use in the environment; (iv) speed of movement of the products on the conveyer belt in m/s; and (v) the frontal distance (in meters) between the antenna and the conveyer belt. As soon as the type of ticket material is selected, the corresponding value in meters will
appear in the box provided for the typical value of read reach (between the antenna and the ticket) for that type of material. For example, when the user selects glass, the number 2 will appear in the box, the number 4 for plastic, and so on. These values, which are typical for various manufacturers, were obtained from references by Cheng and Jin (2007), Friedrich (2005) and INTERMEC (2007). In any case, these are values suggested by the RFID-Env system as a reference guide for the user, and he/she can easily alter them to reflect the real values obtained from the manufacturer’s technical specifications for the tickets actually being used.
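In code, the behavior just described reduces to a small lookup table with a user override. The sketch below is illustrative only; the glass and plastic entries are the values quoted above, and any further materials would have to be filled in from the manufacturers' specifications.

# Typical read ranges (in meters) per ticket mounting material, as
# suggested by RFID-Env from manufacturer references; illustrative only.
SUGGESTED_READ_RANGE_M = {"glass": 2.0, "plastic": 4.0}

def read_range_for(material, user_override_m=None):
    # RFID-Env proposes the typical range for the selected material,
    # but the user may replace it with the supplier's specified value.
    if user_override_m is not None:
        return user_override_m
    return SUGGESTED_READ_RANGE_M[material]

print(read_range_for("plastic"))        # 4.0, the suggested default
print(read_range_for("plastic", 3.5))   # 3.5, a user-supplied value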
3.3. Portal Mode — Quantity of Antennas versus Measurements of Portal and Pallets

As shown in Fig. 2, the RFID-Env Portal Mode is designed to simulate environments with portals in the form of tunnels through which pallets and other transporter packages can pass. In addition to the information required for the Single Mode (distances, temperature, and read speed), the Portal Mode needs the dimensions of the portal and of the pallets carrying the ticketed products. The user should also input the number of interrogators and antennas in the portal. Usually, each interrogator is limited to four antennas. With respect to the antennas located in the portal, the user may select the position of each antenna (a maximum of eight antennas simultaneously if the option of two interrogators was chosen). The simulator should be provided with the approximate direction of the antennas in relation to the total number of tickets to be read, assuming that all the ticketed packages are uniformly distributed over the transporter. That is, if the user selects an antenna in each face of the portal (left, right, and upper), the simulator will consider that the n tickets are distributed equally among these three directions, that is, n/3 per antenna. From the measurements of the portal and the pallet, the simulator can determine the lateral and superior distances between the pallet and the interrogator antennas selected as active by the user. In the present version of RFID-Env, the Portal Mode operates in the same way as the Single Mode, but with the difference that in the Portal Mode, the number of tickets to be measured is divided by the number of active antennas. This division is performed by the simulator in accordance with the number of directional antennas located in each part of the portal as described in the previous paragraph. The lateral antennas work in exactly the same way as in the Single Mode, but the upper antennas are angled differently from the lateral antennas, although they too are positioned in a right-angled triangle formation. The advantages of a greater number of antennas in the portal are evident: an increase in the range of action in various directions and a reduction in the total number of tickets captured by each antenna. For the situation demonstrated in Fig. 7 (where the simulator found that it is impossible to read the tickets in that environment), one can make a simple calculation where, for instance, the 1000 tickets could be divided
between eight antennas. This would drastically reduce the number of simultaneous readings required from the system. As the development of RFID-Env continues, the parameters of analysis of the Portal Mode will be expanded, for example, by a more detailed consideration of the focus and positioning of the antennas, and of the antenna type itself. Some antennas are designed to operate over greater distances though only in a straight line, while others are better at reading wider angles. Greater attention will also be given to the physical characteristics of the ticketed products and the interference these materials cause by scattering the RF signal. It is known that the products in the center of the pallet present the greatest reading difficulties due to the interference re-transmitted from the surrounding products.

4. Conclusions

This chapter describes the results of experimental research undertaken to develop an innovative product — the Software for Simulating RFID Environments, or RFID-Env. The product is intended for use by developers in the computer sciences, and by engineers doing R&D for the solution of RFID problems. RFID-Env makes it possible to simulate a complete range of virtual RFID environments so that R&D can proceed in a non-factory environment. The functionality and the applications of the RFID-Env software were demonstrated, showing how the product permits simulation of the operation of four RFID protocols standardized by ISO 18000-6, viz. the types A LST and FST, the type B, and the type C. The experimental application demonstrated that this new product can help potential users of RFID systems to select the protocol which best suits the characteristics of the system they intend to implement. Considering the physical characteristics of an environment proposed for implementation of RFID technology, such as the speed of the tickets in relation to the interrogators, the distance, the number of antennas and the number of tickets to be read simultaneously, RFID-Env can help save money by determining whether the particular standard or environmental configuration under consideration will meet the needs of that particular situation. As we continue to work on RFID-Env, our research will consider other aspects of the physical environment, such as the interference produced when the ticketed products themselves scatter RF signals, and attributes of the antennas such as their positioning, focusing, and model. The present RFID-Env version does not permit adjustment of the antennas' direction. Another improvement to be implemented is the option for the user to indicate the amount of information (in bits) stored on the tickets (UID or SUID) in his/her system, so that the simulator knows the exact quantity of bits to be read and can thus indicate which protocol would best cater to the needs of that particular RFID project.
References

Atkinson, W (2004). Tagged: The risks and rewards of RFID technology. Risk Management, 51(7), 12–18.
ATMEL (2007). 1.3-kbit UHF R/W IDIC with Anti-collision Function. Available at: <www.atmel.com/literature>. [30 October 2007].
Borriello, G (2005). RFID: Tagging the world. Communications of the ACM, 48(9), 34–37.
Brock, DL (2008). The electronic product code (EPC): A naming scheme for physical objects. Available at: [05 March 2008].
Cheng, T and L Jin (2007). Analysis and simulation of RFID anti-collision algorithms. In IEEE 9th International Conference on Advanced Communication Technology (ICACT 2007), 697–701. Phoenix Park, Korea. February 12–14.
Curtin, J, R Kauffman and F Riggins (2007). Making the 'MOST' out of RFID technology: A research agenda for the study of the adoption, usage and impact of RFID. Information Technology and Management, 8, 87–110.
Friedrich, U (2005). UHF RFID protocols — Reading RFID at the dock door. In VDE RFID Workshop, Darmstadt 2005. Available at: . [30 October 2007].
Hassan, T and S Chatterjee (2006). A taxonomy for RFID. In Proceedings of the 39th Hawaii International Conference on System Sciences, pp. 1–10, IEEE CS Press.
INTERMEC (2007). IT32 A Gen2 ID Card. Available at: . [30 October 2007].
ISO/IEC 18000-6 (2006a). Information technology automatic identification and data capture techniques — Radio frequency identification for item management air interface, Part 6: Parameters for air interface communications at 860–960 MHz.
ISO/IEC 18000-6 (2006b). Information technology — Radio frequency identification for item management, Part 6: Parameters for air interface communications at 860 MHz to 960 MHz. Amendment 1 (2006-06-15): Extension with Type C and update of Types A and B.
Myung, J and W Lee (2006). Adaptive splitting protocols for RFID tag collision arbitration. In MobiHoc'06, pp. 202–213. Florence, Italy.
Ngai, EWT, TCE Cheng, S Au and K Lai (2007). Mobile commerce integrated with RFID technology in a container depot. Decision Support Systems, 43, 62–76.
OCDE (2007). Manual de Oslo: diretrizes para coleta e interpretação de dados sobre inovação, 3a. ed. (trad.) FINEP — Financiadora de Estudos e Projetos, MCT/BRASIL.
Prado, NRSA, NA Pereira and PR Politano (2006). Dificuldades para a adoção de RFID nas operações de uma cadeia de suprimentos. Anais. XXVI ENEGEP — Encontro Nacional de Engenharia de Produção, Fortaleza.
Rosa, LA (2006). Aplicação do RFID na cadeia logística. Monografia. MBA em Tecnologia da Informação, Universidade de São Paulo, Escola Politécnica.
Shih, D, P Sun, D Yen and S Huang (2006). Taxonomy and survey of RFID anti-collision protocols. Computer Communications, 29, 2150–2166.
Weinstein, R (2005). RFID: A technical overview and its application to the enterprise. IEEE IT Professional, 7, 27–33.
Ygal, B, L Castro, LA Lefebvre and E Lefebvre (2006). Explorando los impactos de la RFID en los procesos de negocios de una cadena de suministro. Journal of Technology Management & Innovation, 1(4).
Biographical Notes

Marcelo Azambuja is a Ph.D. student at the Embedded Systems Research Group in the Faculty of Informatics at the Pontifical Catholic University of Rio Grande do Sul (PUCRS), Brazil. His research interests include RFID, anti-collision algorithms for wireless networks and sensor networks. He received his M.Sc. in Electrical Engineering from the PUCRS. Contact him at the Faculty of Informatics — PUCRS, Av. Ipiranga, 6681, Porto Alegre/RS, Brazil; [email protected].

Carlos Fernando Jung is a Professor of Production Engineering and Information Systems at the Integrated Faculty of Taquara, Rio Grande do Sul (Brazil), and Manager of the Technological Innovation Center of Vale do Paranhana (Brazil). He is a Ph.D. student at the Postgraduate Program of Production Engineering at the Federal University of Rio Grande do Sul (UFRGS). He received his M.Sc. in Production Engineering from the Federal University of Santa Maria (UFSM). [email protected]

Carla Schwengber ten Caten is a Professor and researcher in the Department of Production Engineering at the Federal University of Rio Grande do Sul (UFRGS), Brazil. She is also vice-coordinator of the Postgraduate Program of Production Engineering at UFRGS. She has a Ph.D. in Mining, Metallurgy and Materials Engineering (PPGEM/UFRGS). [email protected]

Fabiano Hessel is a Professor of Computer Science at the Pontifical Catholic University of Rio Grande do Sul (PUCRS), Brazil. His research interests are embedded real-time systems, real-time operating systems and RFID. He received his Ph.D. in Computer Science from the Université Joseph Fourier, France. He is the head of the Embedded Systems Group. He was the General Co-chair of the 18th IEEE/IFIP RSP and will be the Program Co-chair of the 19th IEEE/IFIP RSP. He is Associate Editor of the ACM Transactions on Embedded Computing Systems' special issue on Rapid System Prototyping. He has had several publications in prestigious conferences and journals, book chapters and books. Contact him at the Faculty of Informatics — PUCRS, Av. Ipiranga, 6681, Porto Alegre/RS, Brazil; [email protected].
Chapter 18
CUSTOMERS' ACCEPTANCE OF NEW SERVICE TECHNOLOGIES: THE CASE OF RFID
ALESSANDRA VECCHI∗ and LOUIS BRENNAN
School of Business, Trinity College, College Green, Dublin 2, Ireland
∗ [email protected]
ARISTEIDIS THEOTOKIS
Department of Management Science and Technology, Athens University of Economics and Business (AUEB), Patision Street, Athens, Greece
Cultural variations across countries are considered a major factor affecting customers' readiness to adopt, use, and evaluate technology. Relevant contributions from marketing studies, computer science, and international business are integrated into the literature of cross-cultural management and technology acceptance, and a conceptual model is developed. Drawing on a broader research project on radio frequency identification (RFID) aimed at supporting intelligent business networking and innovative customer services, the development of the framework is informed by the authors' work in the preparation of an RFID-based application at several established grocery retailers for short-life products in Ireland and in Greece. From the findings of our exploratory study, it emerges that low uncertainty avoidance, low institutional collectivism, high in-group collectivism, high gender egalitarianism, and low humane orientation are conducive to greater customers' acceptance of new service technologies. Managerial implications and directions for future research are discussed.
Keywords: RFID; consumer's acceptance; GLOBE; culture.
1. Introduction

Although globalization is fostering the use of ubiquitous technology, little attention has been devoted to customers' readiness to adopt, use, or evaluate technology from a cross-cultural perspective. In marketing studies, for instance, a wide range of models has been developed to predict users' acceptance of technology. The "theory of reasoned action" (TRA), for example, employs four constructs to explain technology use or adoption behavior — behavioral attitudes, subjective norms, the intention to use, and actual use (Shih and Fang, 2004). By contrast, the "technology acceptance model" (TAM) places more emphasis on the perceived usefulness of the technology and the perceived ease of use to explain usage behavior (Bagozzi
et al., 1992). Although all of these models implicitly acknowledge that technology adoption is likely to be largely a matter of "cultural affinity" (Phillips et al., 1994), a more explicit articulation of this issue is still missing. This chapter seeks to address this gap in the literature by introducing a conceptual framework aimed at assessing radio frequency identification (RFID) adoption among customers from a cross-cultural perspective. In particular, drawing on a broader research project on RFID aimed at supporting intelligent business networking and innovative customer services, the development of the framework is informed by the authors' work in the preparation of an RFID-based application at several established grocery retailers for short-life products in Ireland and in Greece. This chapter endorses the view that customers' technology readiness (TR) and service quality assessment (SQA) should be jointly taken into account to accurately predict the perception and behavior of customers from a cross-cultural perspective. On the one hand, customers' TR framework (Parasuraman, 2000) allows the classification of customers into explorers, pioneers, skeptics, paranoids, and laggards and provides valuable insights from a cross-cultural perspective (Parasuraman and Colby, 2001). Additionally, it also allows firms to identify innovative customers, who are likely to be most helpful during new service technology development, where the timing of adoption is a very challenging task for firms (Matthing et al., 2006). On the other hand, as customers' evaluation of interactions with service-based technology appears to be different from that in purely human service settings (Parasuraman et al., 2005), service quality must be evaluated within the context of technological interaction from a cross-cultural perspective. More precisely, SQA should focus on the extent to which the resulting customer perception and behavior can affect the adoption of the technology (Lin and Hsieh, 2006). This chapter offers a framework that provides a better understanding of TR and SQA from a cross-cultural perspective, providing valuable insights on how to improve the "ergonomic match" (Ahasan and Imbeau, 2003) between users and the service offering. This framework is evaluated in the context of an RFID-enabled service. In demonstrating the application of our conceptual framework to the assessment of customers' readiness to adopt, willingness to use, and evaluation of RFID technology and any consequent influence on customers' behavior, this study suggests that this cross-cultural framework may have broader applicability in the assessment of customers' responses to other discontinuous and revolutionary technologies.

2. A Marketing Perspective

In marketing studies, a wide range of models has been developed to predict users' acceptance of technology. Understanding why individuals choose to accept or reject new technologies is proving to be one of the most challenging research questions
in this field (Pare and Elam, 1995). The TRA, for example, employs four constructs to explain technology use or adoption behavior — behavioral attitudes, subjective norms, the intention to use, and actual use (Shih and Fang, 2004). Significant advances in the research of attitude were made by Fishbein and Ajzen in 1975. In an extension of Fishbein's earlier learning theory, Fishbein (1980) developed a theory of the relationship between attitude and behavior. The TRA was developed to explain how a customer's attitudinal beliefs and normative beliefs lead to certain perceptions and behaviors (Fishbein, 1980). The theory asserts that attitudes toward acceptance and subjective norms are the antecedents of the adoption of technology. The two antecedents (attitude and subjective norms) influence the customer's perception and behavior additively, although a conceptual argument was developed earlier leading to interaction as well as direct cumulative effects (Ryan and Bonfield, 1975). Ryan and Bonfield (1975), for instance, report that operational measures of the constructs have been shown to have separate effects on the adoption of the technology. If the cumulative effect of attitudes and subjective norms can be supported, their analysis has implications for marketing strategy. It can ascertain whether the intent to use and actual use are primarily governed by attitudinal or social influence. Lutz (1991) offered two important propositions underlying the TRA. First, to predict acceptance behavior, it is necessary to measure a person's attitude towards enacting that behavior, and not just the general attitude towards the object that the adoption behavior is directed at. For example, while a person's attitude towards a mobile phone may be favorable, he may never have needed one. Second, the TRA includes another determinant of overt behavior: the subjective norms which measure the social influences on a person's behavior (i.e., family members' expectations, societal expectations, and cultural expectations). There may be some situations where behavior is simply not under the attitudinal control of individuals; rather, the expectations of relevant others (e.g. national culture) may be a major factor in ultimate behavioral performances. The TRA is different from traditional attitude theories in that it introduces normative influences into the overall model and a causal relationship between the two antecedents and intention to use technology. In a different fashion, the TAM places more emphasis on the perceived usefulness of the technology and the perceived ease of use to explain usage behavior (Bagozzi et al., 1992). In an attempt to better understand user acceptance, Davis (1989) developed the TAM, which is considered the most comprehensive attempt to articulate the core psychological aspects associated with technology use (Henderson and Divett, 2003). Based on the generic model of the TRA (Ajzen and Fishbein, 1980), the model has provided a robust and valuable framework when considering both technology acceptance and uptake (Mathieson, 1991). In short, Davis et al. (1989) postulated that users' attitudes toward using a technology consisted of a cognitive appraisal of the design features and an affective attitudinal response to the technology. In turn, this attitude influences the actual use, or acceptance of the
technology. The two major design features outlined by these researchers included the perceived usefulness of the technology (operating as an extrinsic motivator) and its perceived ease of use (operating as an intrinsic motivator) (Davis, 1993). Perceived usefulness was defined as the “degree to which an individual believes that using a particular system would enhance his or her job performance” (Davis, 1993). Perceived ease of use was defined as the “degree to which an individual believes that using a particular system would be free of physical and mental effort” (Davis, 1993). It was argued that these two features formed the users’ attitude toward using the technology, which in turn influenced actual usage. Thus, the more positive the perceived ease of use and perceived usefulness of the technology, the higher the probability of the technology’s actual use. Furthermore, Davis (1989) also postulated that perceived ease of use had a direct impact upon perceived usefulness, but not vice versa. Straub et al. (1997) compared the TAM model across three different countries: Japan, Switzerland, and the United States. The study was conducted by administering the same instrument to employees of three different airlines, all of whom had access to the same information technology, in this case, e-mail. The results indicate that TAM holds for both the United States and Switzerland, but not for Japan, suggesting that the model may not predict technology use across all national cultures. Furthermore, although both TRA and TAM implicitly acknowledge that technology acceptance is likely to largely be a matter of “cultural affinity” (Phillips et al., 1994), a more explicit articulation of this issue is still missing. Cultural affinity is defined as “the degree to which rules, customs, and communications of foreign culture resemble the usual way of doing business in the home culture” (Phillips et al., 1994). In particular, differences were found in the relationship between cultural affinity and ease of adoption, based on low uncertainty avoidance (vs. high uncertainty avoidance) as defined by Hofstede (1980). In low uncertainty avoidance countries, the influence of culture was stronger and more positive than in high uncertainty avoidance countries. These findings suggest that when established demand for a product exists in a low uncertainty avoidance country and new technology is both necessary and available, priorities are formed in an effort to justify the adoption decision. Working backward through the model, low uncertainty avoidance facilitates the adoption process by ultimately encouraging the behavioral intention to adopt, which is the result of positive attitudes toward the new technology. According to the model, these positive attitudes are jointly determined by the ease of adoption and the perceived utility. In an effort to justify the adoption, it is likely that commonalities or other similar reasons to adopt are explored. Cultural affinity, for instance, reduces language and communication barriers and therefore increases the ease of adoption. In sum, when demand exists, low uncertainty avoidance countries seek reasons that lead to adoption behavior. As revealed in this study, cultural affinity positively influences the ease of adoption, and subsequently, technology adoption behavior (Phillips et al., 1994).
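Because both models treat their antecedents as largely additive influences, the structural difference between TRA and TAM can be illustrated with a toy scoring function. The sketch below is purely expository; the linear form and the weights are assumptions made here for illustration, not parameters estimated by any of the studies cited.

def tra_intention(attitude, subjective_norm, w_a=0.6, w_sn=0.4):
    # TRA: behavioral intention as an additive function of attitude
    # toward the behavior and subjective norms (illustrative weights).
    return w_a * attitude + w_sn * subjective_norm

def tam_attitude(perceived_usefulness, perceived_ease_of_use,
                 w_pu=0.7, w_peou=0.3):
    # TAM: attitude formed from perceived usefulness and perceived ease
    # of use; TAM additionally posits that ease of use feeds usefulness.
    return w_pu * perceived_usefulness + w_peou * perceived_ease_of_use

# Ratings on a 1-5 scale, as in the survey described later in the chapter.
print(tra_intention(4.0, 3.0))   # 3.6
print(tam_attitude(4.5, 3.5))    # 4.2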
3. Technology Readiness and SQA

The roles of technology in customer-company interactions along with the number of technology-based products and services have grown rapidly in recent years (Parasuraman, 2000). The development of new technologies has revolutionized the service landscape, with companies increasingly relying on technology to improve service operations, increase service efficiency, and provide functional benefits for customers (Lin and Hsieh, 2005). Furthermore, companies' use of new self-service technologies (SSTs) to serve customers is growing rapidly. As technological innovations continue to be a critical component of customer-firm interactions, SSTs such as technological interfaces that enable customers to produce a service independent of direct service-employee involvement (Meuter et al., 2000) have changed the way customers interact with firms to create service outcomes. As Meuter et al. (2000) state, customers may avoid using SSTs if they are not comfortable with and/or ready to use the technology, even if they have witnessed its benefits. There is also evidence of increasing customer frustration in dealing with technology-based services (Parasuraman, 2000). This indicates that customers' acceptance of technology will vary according to characteristics of individuals (Lin and Hsieh, 2005). Parasuraman (2000) suggests that TR should also be taken into consideration when technologies are being developed, so as to predict the behavior of customers more accurately. Therefore, one of the major issues of the injection of technology into service businesses is the customers' readiness and willingness to use technology-based services as well as the influence of such technology on service results. This chapter endorses the view that customers' TR and SQA should be jointly taken into account to accurately predict the perception and behavior of customers from a cross-cultural perspective. On the one hand, customers' TR framework (Parasuraman, 2000) allows the classification of customers into explorers, pioneers, skeptics, paranoids, and laggards, providing valuable insights from a cross-cultural perspective (Parasuraman and Colby, 2001; Tsikriktsis, 2004). TR as defined by Parasuraman is "people's propensity to embrace and use new technologies for accomplishing goals in home life and at work" (Parasuraman, 2000). According to the author, TR can be classified into four distinct components: optimism, innovativeness, discomfort, and insecurity. Optimism is defined as "a positive view of technology and the belief that it offers people increased control, flexibility, and efficiency in their lives" (Parasuraman, 2000). Innovativeness is "the tendency to be a technology pioneer and thought leader" (Parasuraman, 2000). It measures the extent to which people believe that they are at the forefront of trying out new technology-based products or services and are considered by others as an opinion leader or a trendsetter in technology-related issues. Discomfort is defined as "the perceived lack of control over technology and a feeling of being overwhelmed by it" (Parasuraman, 2000). It represents the extent to which people have a general paranoia about technology-based products or services, believing that they tend to be exclusionary rather than inclusive of all kinds of people. Insecurity is the "distrust of
technology and skepticism about its ability to work properly" (Parasuraman, 2000). According to Parasuraman (2007), optimism and innovativeness are the positive drivers of TR, encouraging customers to use technology-based services or products and hold a positive attitude toward technology, while discomfort and insecurity are negative drivers, making customers reluctant to use the technology. In particular, based on US data, Parasuraman and Colby (2001) provide a taxonomy of technology customers. More precisely, they describe five types of technology customers: explorers, pioneers, skeptics, paranoids, and laggards. On the basis of their TR, they rank technology customers as follows: "The first people to adopt are the explorers, who are highly motivated and fearless. The next to adopt are the pioneers, who desire the benefits of new technology but are more practical about the difficulties and dangers. The next wave consists of two groups: skeptics, who need to be convinced of the benefits of the new technologies, and paranoids, who are convinced of the fruits but usually concerned about the risks. The last group, laggards, may never adopt unless they are forced to do so" (Parasuraman and Colby, 2001). According to this view, successive waves of new technology users will have distinct needs and service requirements. Thus, the strategy to both attract and retain them must change accordingly. The taxonomy is useful because it allows firms to identify innovative customers, namely, explorers and pioneers. They are likely to be the most helpful during new service technology developments, where the timing of adoption is a very challenging task for firms (Matthing et al., 2006). However, it presents two major caveats. First, the taxonomy was replicated with a UK sample and the replication only partially supports the taxonomy of five types of customers based on their technology beliefs. In particular, Tsikriktsis (2004) found support for four of the five original clusters (explorers, pioneers, skeptics, and laggards) but found no evidence of the existence of a fifth group named paranoids. Second, the study shows that there are varying concentrations of strategic groups in different countries, and that the four clusters differ in terms of demographics (such as gender, age, and education) in the current and future use of technology-based services. On the other hand, as customers' evaluation of interactions with service-based technology appears to be different from that in purely human service settings (Parasuraman et al., 2005), SQA must be evaluated within the context of technological interaction from a cross-cultural perspective. When purchasing products, customers employ many tangible clues to judge quality: style, hardness, color, label, feel, package, fit, and so on. When using services, fewer tangible clues exist. In most cases, tangible evidence is limited to the provider's physical facilities, equipment, and personnel. In the case of technology-based services and products with an absence of tangible evidence on which to evaluate quality, customers must depend on other clues (Parasuraman et al., 1999). More precisely, SQA should focus on the extent to which the resulting customer perception and behavior can affect the adoption of the technology (Lin and Hsieh, 2006). Based on both deductive and
inductive scale development approaches, Lin and Hsieh (2006) conceptualized, constructed, refined, and tested a multiple-item scale that examined key factors influencing technology service quality. They discussed the theoretical background and previous research to derive technology-based service quality dimensions deductively, while undergoing a series of qualitative studies and discussing their findings in relation to existing theories. Based on those findings, an initial pool of scale items was grouped in terms of functionality, enjoyment, privacy/security, assurance, convenience, design, customization, and satisfaction. Functionality refers to the extent to which customers find the technology-based service clear, fast, and efficient. It involves the manner in which the service is delivered (Gronroos, 1982). Enjoyment relates to the extent to which customers perceive the technology-based service to be interesting and useful, and feel good about using that technology (Gronroos, 1982). Security/privacy refers to the extent to which customers feel safe using the technology-based service and feel that their privacy is not breached (Parasuraman et al., 2005). Assurance involves the customers' perception that the technology-based service is well-known and has a good reputation. Design relates to the extent to which the technology-based service is aesthetically appealing and relies on cutting-edge technology (Lin and Hsieh, 2006). While convenience refers to the extent to which the technology-based product or service is convenient and easy for customers, customization relates to the degree to which the technology-based service addresses the customers' specific needs and has their best interest at heart. Finally, satisfaction refers to the extent to which the technology-based product or service exceeds customers' expectations or its closeness to their ideal service or product (Parasuraman et al., 2005).

4. A Cross-Cultural Framework Based on GLOBE

Although TRA, TAM, TRI, and SQA are all helpful models to understand technology acceptance, they offer little towards our theoretical understanding of the constraining influences of the cultural dimensions involved in the adoption of technology. If we are to better understand customers' readiness to adopt, use, and evaluate technology from a cross-cultural perspective, we need a conceptual framework that incorporates the role of national culture into technology acceptance. Figure 1 depicts the relative importance of national culture alongside the theoretical factors illustrated so far. National culture has been defined in many ways. Hofstede (1980) defines culture as the collective programming of the mind that distinguishes one group from another. Parsons and Shils (1951) define it as the shared characteristic of a high-level social system. It can also be defined as the shared values of a particular group of people (Erez and Earley, 1993). According to Lachman (1983) and Triandis (1995), national culture reflects the core values and beliefs of individuals formed during childhood that are reinforced throughout life. Hofstede (1980) contends that national culture is an important issue in management theory and indeed,
Figure 1. Technology acceptance — a cross-cultural framework.
national culture has been identified as an important variable in many global studies. In the field of International Business, for instance, Kedia and Bhagat (1988) suggest that variations in culture at both the national and organizational levels are considered two major factors in the transfer of technology. Janczewski (1992) suggests that cultural differences found in African countries require that information systems be modified to suit the needs of host organizations in those countries. Shore and Venkatachalam (1995) establish the role of culture in systems analysis and design. Similarly, GLOBE's intent is to explore the cultural values and practices in a wide variety of countries and to identify their impact on organizational practices and leadership attributes (House et al., 2004). To this end, House et al. (2004) examine national cultures in terms of nine dimensions:
(i) Uncertainty avoidance is defined as the extent to which members of a society strive to avoid uncertainty by reliance on social norms, rituals, and bureaucratic practices to mitigate the unpredictability of future events.
(ii) Power distance is defined as the degree to which members of society expect and agree that power should be equally shared.
(iii) Institutional collectivism reflects the degree to which societal practices encourage and reward the collective distribution of resources and collective action.
(iv) In-group collectivism reflects the degree to which individuals express pride, loyalty, and cohesiveness in their organizations.
(v) Gender egalitarianism is the extent to which a society minimizes gender role differences and gender discrimination.
(vi) Assertiveness is the degree to which individuals in societies are assertive, confrontational, and aggressive in their social relationships.
(vii) Future orientation is the degree to which individuals in societies engage in future-oriented behaviors such as planning, investing in the future, and delaying gratification.
(viii) Performance orientation refers to the extent to which a society encourages and rewards group members for performance improvement and excellence.
(ix) Humane orientation is the degree to which individuals in organizations or societies encourage and reward individuals for being fair, altruistic, friendly, generous, caring, and kind to others.
On the basis of the nine cultural dimensions listed above, the GLOBE study identifies 10 societal clusters: South Asia, Anglo, Arab, Germanic Europe, Latin Europe, Eastern Europe, Confucian Asia, Latin America, Sub-Sahara Africa, and Nordic Europe. As we endorse the idea that different dimensions of national culture are likely to bear important implications for the broader issue of technology acceptance in a global context, we therefore examine the following propositions around the main cultural dimensions.
Uncertainty avoidance is likely to affect technology acceptance through risk avoidance. Proposition 1A: Customers in low uncertainty avoidance countries are likely to perceive greater ease of use and hence greater usefulness. This should lead to a greater willingness to use and a more positive evaluation of the technology and of the service offered, thus resulting in overall higher technology acceptance. Proposition 1B: By contrast, customers in countries characterized by high uncertainty avoidance are likely to perceive less ease of use, and hence, less usefulness. This should lead to a decreased willingness to use and a more negative evaluation of the technology and of the quality of the service offered, thus resulting in overall lower technology acceptance.
Power distance is likely to affect technology acceptance, as technology can alter the existing social structure that legitimizes authority. Cultures with a high degree of power distance are expected to be less open to changes in social structure for fear that this would delegitimize authority. The same concern is not present in cultures characterized by low power distance. Proposition 2A: Customers in low power distance countries are likely to show a greater willingness to use and evaluate the
technology and service offered more positively, resulting in overall higher technology acceptance. Proposition 2B: By contrast, customers in high power distance countries are likely to show less willingness to use and a more negative evaluation of the technology and of the service offered, resulting in overall lower technology acceptance.
Institutional collectivism is likely to affect technology acceptance. People in individualistic countries are inclined to make their own choices, while people in collectivist countries are more willing to conform to the norms of the group. Technology adoption can be contrary to the prevailing group norm. Proposition 3A: Customers in countries characterized by low institutional collectivism are likely to perceive greater ease of use and hence greater usefulness. This should lead to a greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in an overall higher technology acceptance. Proposition 3B: By contrast, customers in countries characterized by high institutional collectivism are likely to perceive less ease of use, and hence, less usefulness. This should lead to less willingness to use and a more negative evaluation of the technology and of the quality of the service offered, thus resulting in overall lower technology acceptance.
In-group collectivism is likely to affect technology acceptance. Social acceptance of the technology can play a major role in determining its adoption and privacy can constitute a major issue, especially for more individualistic countries. Proposition 4A: Customers in countries characterized by high in-group collectivism are likely to perceive greater ease of use and hence, greater usefulness. This should lead to a greater willingness to use and a more positive evaluation of the technology and of the service offered, thus resulting in overall higher technology acceptance. Proposition 4B: By contrast, customers in countries characterized by low in-group collectivism are likely to perceive less ease of use and hence less usefulness. This should lead to less willingness to use and a more negative evaluation of the technology and quality of the service offered, thus resulting in overall lower technology acceptance.
Gender egalitarianism is likely to affect technology acceptance as technology can alter the existing social structure that legitimizes gender discrimination. Cultures with a low degree of gender egalitarianism are expected to be less open to changes in their social structure for fear that this would delegitimize gender discrimination. The same concern is not present in cultures characterized by high gender egalitarianism. Proposition 5A: Customers in countries characterized by high gender egalitarianism are likely to show greater willingness to use and evaluate the technology and service offered more positively, thus resulting in overall higher technology acceptance. Proposition 5B: By contrast, customers in countries characterized by low gender egalitarianism are likely to show less willingness to use and a more negative evaluation of the technology and service offered, thus resulting in overall lower technology acceptance.
Assertiveness is likely to affect technology acceptance, as technology can alter the ways in which individuals in societies are assertive, confrontational, and aggressive in their social relationships. Proposition 6A: Customers in countries characterized by high assertiveness are likely to show a greater willingness to use and a more positive evaluation of the technology and service offered, thus resulting in an overall higher technology acceptance. Proposition 6B: By contrast, customers in countries characterized by low assertiveness are likely to show less willingness to use and a more negative evaluation of the technology and service offered, thus resulting in overall lower technology acceptance.
Future orientation is likely to affect technology acceptance, as technological advances are often associated with the future. Proposition 7A: Customers in countries characterized by high future orientation are likely to show a greater willingness to use and evaluate the technology and service offered more positively, thus resulting in overall higher technology acceptance. Proposition 7B: By contrast, customers in countries characterized by low future orientation are likely to show a decreased willingness to use and a more negative evaluation of the technology and service offered, thus resulting in overall lower technology acceptance.
Performance orientation is likely to affect technology acceptance, as technology can have a major impact on the ways in which people pursue performance improvement and excellence. Proposition 8A: Customers in countries characterized by high performance orientation are likely to show greater willingness to use and evaluate the technology and service offered more positively, thus resulting in overall higher technology acceptance. Proposition 8B: By contrast, customers in countries characterized by low performance orientation are likely to show less willingness to use and a more negative evaluation of the technology and service offered, thus resulting in overall lower technology acceptance.
Humane orientation is likely to affect technology acceptance, as technology can alter the ways in which individuals can be fair, altruistic, friendly, generous, caring, and kind to others. Proposition 9A: Customers in countries characterized by low humane orientation are likely to show greater willingness to use and evaluate the technology and service offered more positively, thus resulting in overall higher technology acceptance. Proposition 9B: By contrast, customers in countries characterized by high humane orientation are likely to show less willingness to use and a more negative evaluation of the technology and service offered, thus resulting in overall lower technology acceptance.

5. Assessing Acceptance of Dynamic Pricing in Ireland and in Greece

Drawing on a broader research project on RFID aimed at supporting intelligent business networking and innovative customer services (Vecchi et al., 2007), the development of the cross-cultural framework is informed by the authors' work in
the preparation of an RFID pilot at several established grocery retailers for short-life products located both in Ireland and in Greece. The comparison between Ireland and Greece is particularly meaningful as both countries display different GLOBE scores in relation to the nine cultural dimensions, as illustrated in Table 1. The trial was planned on the basis of a vision of an information-enriched supply chain, where a dynamic pricing strategy could be implemented for short-life products. The discriminating pricing strategy set for the trial entails dynamically pricing the short-life products based on their disposal cost and their sell-by date. It is worth noting that the Irish grocery retailer had already engaged in this business practice by relying on a barcode-based application that inevitably required manual re-pricing. The grocery retailers decided to proceed with an investigation of RFID as a means of automating the dynamic pricing strategy and conducted a trial to assess the potential benefits of the technology (Vecchi and Brennan, 2008). Subsequently, three practical goals were set for the trial:
(a) Reducing labor associated with stock handling (counting and rotation monitoring in store)
(b) Reducing labor associated with re-pricing
(c) Reducing waste in the supply chain
The product selected for the implementation of the dynamic pricing service was pre-packaged minced beef. An empirical study was executed around the implementation, based on a sample of supermarket shoppers, who are the main users of retail services. To study customer perceptions of RFID-enabled services, we collected data by means of a combination of a scenario and survey methodology involving face-to-face interviews with customers (Dabholkar and Bagozzi, 2002). A questionnaire was developed to include questions on both attitudes and the intention to use, as well as on individual characteristics such as TR (Parasuraman, 2000) and technology anxiety (Meuter et al., 2001). We presented to customers a scenario of an RFID-enabled retail service. The scenario featured an application that offers customers the opportunity to be dynamically informed about the product's price and other features. After customers familiarized themselves with the scenario, they were asked to evaluate it in terms of usefulness and ease of use. Then, they were asked about their attitudes and intention to use. Finally, the second part of the survey addressed individual characteristics, such as TR and technology anxiety for each respondent. To assess the level of technology acceptance of the customers in Greece and Ireland, a questionnaire was developed that incorporated questions on both TR and SQA, as previously validated by numerous empirical studies (Lin and Hsieh, 2006; Parasuraman, 2000). The questionnaire was administered to 575 customers across both countries. Data was collected in supermarket stores in Greece and Ireland over a 2-week period. The data was collected during different time slots to ensure a representative
Table 1. GLOBE scores for Ireland and Greece.

Country        Uncertainty  Power     Collectivism   Collectivism  Gender          Assertiveness  Future       Performance  Humane
               avoidance    distance  institutional  group         egalitarianism                 orientation  orientation  orientation
Greece         3.39         5.4       3.25           5.27          3.48            2.79           3.40         3.2          3.34
Ireland        4.30         5.15      4.63           5.14          3.21            3.92           3.98         4.36         4.96
GLOBE Average  4.16         5.17      4.25           5.14          3.37            3.88           3.85         4.10         4.09
cross-section of shoppers. The sample was drawn from customers of major retail stores in Greece and Ireland. Overall, 69% of the respondents were women. In terms of age group representation, the sample was balanced. About half of the respondents (51%) were in the 25–44 age group and approximately one out of three was in the 44+ age group (24% in the 44–55 age group and 10% in the 55+ group). Finally, 15% of the sample belonged to the 18–25 age group. Customers' attitudes toward the scenario were measured using 5-point semantic-differential scales. A two-item scale with the endpoints like–dislike and likely–unlikely to use was used to measure attitude toward service usage. In particular, questions related to the appreciation of the dynamic pricing service itself on a 5-point scale, both in relation to the extent to which customers liked the service (customer perception) and would use it in the future (customer adoption of the technology). Then, customers were asked to evaluate three different modes of technology that were planned for use in the trial to convey the dynamic pricing information to customers. Customers were given the option to read the dynamic pricing on an electronic shelf-tag, on an LCD screen or by using a personal digital assistant (PDA) device. Increasing amounts of information are associated with the three technologies: the electronic shelf-tag provides only the basic information associated with the new price and sell-by date; the LCD screen provides the new price and sell-by date along with the production date; the PDA provides additional information such as general information about the quality of the product (production location, distribution, storage information), ideas for cooking, and other information. Besides providing differential amounts of information, these three modes of technology were selected as they imply substantially different degrees of user involvement and involve different levels of maturity of the technology. In relation to these three alternative implementations of the dynamic pricing scenario, customers were asked to rate on a 5-point scale the extent to which they liked the service (customer perception) and would use it in the future (customer adoption of the technology). They were also asked the extent to which the service is perceived to be complex, useful, efficient, and easy to use. Customers were also prompted to answer some questions aimed at assessing their TR. More precisely, customers were asked to rate on a 5-point scale several statements exploring their attitudes in terms of optimism (e.g., "I am confident I can learn technology-related skills"), innovativeness (e.g., "I am able to keep up with important technological updates"), discomfort (e.g., "I have difficulty understanding most technological matters"), and insecurity (e.g., "I have avoided using technology because I am unfamiliar with it"). Customers were also prompted to discuss the extent to which they liked to spend time using the technology and to assess three different privacy concerns ("Bothered by personal information collection," "Concerned with its unauthorized use" and "Concerned with its improper use"). Then, customers were presented with the option that the service would rely on RFID and they were asked to assess again the extent to which they still liked the service (customer perception) and would make use of it in the future (customer
adoption of the technology). Finally, some data on demographic characteristics such as age and gender were collected. The data collected through the questionnaire in Ireland and in Greece are used to assess the validity of the propositions previously described in relation to the nine GLOBE cultural dimensions.

6. Main Findings

Table 2 provides data in relation to dynamic pricing acceptance in Ireland and Greece. The data indicate that customers in Greece show greater acceptance of dynamic pricing than customers in Ireland. Although greater acceptance obtains across all three scenarios, the greatest acceptance of dynamic pricing results with the use of an LCD screen. Greek customers tend to perceive greater ease of use when compared to their Irish counterparts and hence greater usefulness. This leads to a greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in overall higher technology acceptance. In particular, the overall attitude toward the acceptance of the technology may be driven by innovativeness. Despite showing significantly lower optimism in relation to the acceptance of the technology than their Irish counterparts, Greek customers show significantly higher innovativeness. By contrast, customers in Ireland tend to have a significantly lower perception of ease of use and usefulness associated with the service. This leads to a decreased willingness to use and a more negative evaluation of the technology and service offered, resulting in overall lower technology acceptance. The overall attitude of Irish consumers may be driven by insecurity. Despite showing significantly higher optimism in relation to their acceptance of the technology compared to their Greek counterparts, Irish customers demonstrate significantly higher insecurity. Surprisingly, despite Irish customers revealing an overall lower appreciation of dynamic pricing in relation to all three scenarios compared to their Greek counterparts, the opposite arises in relation to the use of RFID, where Irish customers show a more positive evaluation of the service offered, as well as a greater willingness to use the dynamic pricing service. This apparent contradiction may be explained by the fact that customers in Greece are significantly more concerned with the collection of personal information, while in Ireland customers seem more concerned with its unauthorized use. We can gain a fuller understanding of the customers' acceptance of new service technologies in Greece and in Ireland by looking at the customers' response across the nine main cultural dimensions. To this end, Table 3 presents the results of customers' acceptance of the dynamic pricing service for the five GLOBE cultural dimensions where there is a difference in magnitude between Ireland and Greece. In particular, it is worth noting that the framework is particularly useful for explaining Scenario 2, where a greater perceived ease of use and usefulness on the part of the Greek respondents leads to a greater willingness to use and a more positive evaluation of the technology. By contrast, the framework offers very limited insights into the implementation of Scenario 1 and Scenario 3.
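The significance levels reported in Tables 2 and 3 are based on t-tests comparing the Greek and Irish samples item by item. A minimal sketch of such a comparison is given below; the two rating arrays are invented for illustration, and the a/b/c labels simply mirror the thresholds used in the tables.

from scipy import stats

def significance_label(p_value):
    # Map a p-value onto the a/b/c convention used in Tables 2 and 3.
    if p_value < 0.001:
        return "c"
    if p_value < 0.01:
        return "b"
    if p_value < 0.05:
        return "a"
    return ""  # not significant

# Hypothetical 5-point ratings of one questionnaire item per country.
greek_ratings = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4]
irish_ratings = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]

t_stat, p_value = stats.ttest_ind(greek_ratings, irish_ratings)
print(t_stat, p_value, significance_label(p_value))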
Table 2. Dynamic Pricing Acceptance in Greece and Ireland.

                                                 Greece   Ireland   Tot.
Dynamic pricing
  I like this service                             4.04     4.27     4.20
  I would use this service                        3.98     —        3.98
Electronic shelf-tag
  I like this service                             4.14     3.18     3.44
  I would use this service                        3.96     3.17     3.39
  This service is complex                         1.83     2.51     2.32
  The service is useful                           4.13     3.38     3.59
  This service makes me more efficient            3.98     3.56     3.68
  I find this service easy to use                 4.29     3.19     3.49
LCD screen
  I like this service                             4.42     3.51     3.83
  I would use this service                        4.35     3.50     3.80
  This service is complex                         2.17     2.48     2.37
  The service is useful                           4.18     3.59     3.80
  This service makes me more efficient            4.04     4.03     4.04
  I find this service easy to use                 4.28     2.83     3.33
PDA device
  I like this service                             4.31     3.31     3.57
  I would use this service                        4.12     3.27     3.50
  This service is complex                         2.45     3.13     2.95
  The service is useful                           4.10     3.31     3.52
  This service makes me more efficient            4.06     3.43     3.60
  I find this service easy to use                 3.98     3.43     3.57
General questions
  Avoid technology because unfamiliar             1.75     2.16     2.04
  Difficulty understanding technology             1.85     2.03     1.98
  Confident I can learn                           4.19     4.34     4.29
  Keep up with advances                           4.17     4.02     4.07
  I like to spend as little time as possible      4.34     3.63     3.85
  Bothered by personal information collection     4.10     3.60     3.75
  Concerned with its unauthorized use             4.30     4.66     4.55
  Concerned with its improper use                 4.64     4.59     4.61
RFID
  I like this service                             3.26     3.61     3.50
  I would use this service                        3.34     3.68     3.58

Significance: a p < 0.05; b p < 0.01; c p < 0.001 (t-test).
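The significance levels reported in Tables 2 and 3 come from t-tests comparing the two country samples. A minimal sketch of that comparison, with fabricated ratings since the chapter reports only group means, might look as follows:

    # A minimal sketch of the country comparison behind Table 2: an independent
    # two-sample t-test on 5-point ratings. The arrays below are invented for
    # illustration only.
    from scipy import stats

    greece  = [5, 4, 4, 5, 4, 5, 4, 4]   # hypothetical "I like this service" ratings, Greece
    ireland = [3, 4, 3, 3, 4, 3, 2, 3]   # hypothetical ratings, Ireland

    t, p = stats.ttest_ind(greece, ireland)

    # Map the p-value to the significance codes used in Tables 2 and 3.
    code = "c" if p < 0.001 else "b" if p < 0.01 else "a" if p < 0.05 else "n.s."
    print(f"t = {t:.2f}, p = {p:.4f}, significance: {code}")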
Table 3. Customers' Response According to Uncertainty Avoidance, Institutional Collectivism, In-group Collectivism, Gender Egalitarianism and Humane Orientation.

(1: Strongly disagree/5: Strongly agree.) [Table 3 repeats the items of Table 2 and reports, for each item, the significance of the Greece–Ireland difference under the five GLOBE dimensions on which the two countries diverge: uncertainty avoidance (low: Greece; high: Ireland), institutional collectivism (low: Greece; high: Ireland), in-group collectivism (high: Greece; low: Ireland), gender egalitarianism (high: Greece; low: Ireland), and humane orientation (low: Greece; high: Ireland). Significance: a p < 0.05; b p < 0.01; c p < 0.001 (t-test).]
Uncertainty avoidance affects technology acceptance. Our findings illustrate the extent to which customers in low uncertainty avoidance countries tend to perceive greater ease of use and hence greater usefulness. This leads to a greater willingness to use and a more positive evaluation of the technology and service offered, resulting in overall higher technology acceptance. This is particularly the case in Scenario 2, which relies on the adoption of an LCD screen, where customers in Greece, with low uncertainty avoidance, tend to perceive greater ease of use compared to their counterparts in Ireland, with high uncertainty avoidance. They also perceive significantly greater usefulness. Thus, the data support Propositions 1A and 1B. According to Proposition 1A, customers in low uncertainty avoidance countries are likely to perceive greater ease of use and hence greater usefulness. This should lead to greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in overall higher technology acceptance. By contrast, according to Proposition 1B, customers in countries characterized by high uncertainty avoidance are likely to perceive less ease of use and hence less usefulness. This should lead to less willingness to use and a more negative evaluation of the technology and of the quality of the service offered, thus resulting in overall lower technology acceptance.

Institutional collectivism affects technology acceptance. Our findings illustrate the extent to which customers in countries characterized by low institutional collectivism tend to perceive greater ease of use and hence greater usefulness. This leads to a greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in overall higher technology acceptance. This is particularly the case in Scenario 2, which relies on the adoption of an LCD screen, where customers in Greece, with low institutional collectivism, tend to perceive greater ease of use compared to their counterparts in Ireland, with high institutional collectivism. They also perceive significantly greater usefulness. Thus, the data support Propositions 3A and 3B. According to Proposition 3A, customers in countries characterized by low institutional collectivism are likely to perceive greater ease of use and hence greater usefulness. This should lead to a greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in overall higher technology acceptance. By contrast, according to Proposition 3B, customers in countries characterized by high institutional collectivism are likely to perceive less ease of use and hence less usefulness. This should lead to less willingness to use and a more negative evaluation of the technology and of the quality of the service offered, resulting in overall lower technology acceptance.

In-group collectivism affects technology acceptance. Our findings illustrate the extent to which customers in countries characterized by high in-group collectivism tend to perceive greater ease of use and hence greater usefulness. This leads to a greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in overall higher technology acceptance. This is particularly the case in Scenario 2, which relies on the adoption of an LCD screen,
where customers in Greece, with high in-group collectivism, tend to perceive greater ease of use compared to their counterparts in Ireland, with low in-group collectivism. They also perceive significantly greater usefulness. Thus, the data support Propositions 4A and 4B. According to Proposition 4A, customers in countries characterized by high in-group collectivism are likely to perceive greater ease of use and thus greater usefulness. This should lead to a greater willingness to use and a more positive evaluation of the technology and of the service offered, thus resulting in overall higher technology acceptance. By contrast, according to Proposition 4B, customers in countries characterized by low in-group collectivism are likely to perceive less ease of use and therefore less usefulness. This should lead to less willingness to use and a more negative evaluation of the technology and the quality of the service offered, resulting in overall lower technology acceptance.

Gender egalitarianism affects technology acceptance. Our findings illustrate the extent to which customers in countries characterized by high gender egalitarianism tend to perceive greater ease of use and hence greater usefulness. This leads to a greater willingness to use and a more positive evaluation of the technology and service offered, resulting in overall higher technology acceptance. This is particularly the case in Scenario 2, which relies on the adoption of an LCD screen, where customers in Greece, with high gender egalitarianism, tend to perceive greater ease of use compared to their counterparts in Ireland, with low gender egalitarianism. They also perceive significantly greater usefulness. Thus, the data support Propositions 5A and 5B. According to Proposition 5A, customers in countries characterized by high gender egalitarianism are likely to show a greater willingness to use and a more positive evaluation of the technology and service offered, thus resulting in overall higher technology acceptance. By contrast, according to Proposition 5B, customers in countries characterized by low gender egalitarianism are likely to show less willingness to use and a more negative evaluation of the technology and service offered, resulting in overall lower technology acceptance.

Humane orientation affects technology acceptance. Our findings illustrate the extent to which customers in countries characterized by low humane orientation tend to perceive greater ease of use and hence greater usefulness. This leads to a greater willingness to use and a more positive evaluation of the technology and of the service offered, resulting in overall higher technology acceptance. This is particularly the case in Scenario 2, which relies on the adoption of an LCD screen, where customers in Greece, with low humane orientation, tend to perceive greater ease of use compared to their counterparts in Ireland, with high humane orientation. They also perceive significantly greater usefulness. Thus, the data support our Propositions 9A and 9B. According to Proposition 9A, customers in countries characterized by low humane orientation are likely to show greater willingness to use and a more positive evaluation of the technology and service offered, thus resulting in overall higher technology acceptance. By contrast, according to Proposition 9B, customers in countries characterized by high humane orientation are
likely to show less willingness to use and a more negative evaluation of the technology and of the service offered, resulting in overall lower technology acceptance.

Our findings support the validity of five of the nine propositions (i.e., Propositions 1, 3–5, and 9), whereby uncertainty avoidance, institutional collectivism, in-group collectivism, gender egalitarianism, and humane orientation significantly affect the readiness to adopt, use, and evaluate new service technology. Although the framework of new service technology acceptance previously described appears most applicable in Scenario 2, which relies on the adoption of an LCD screen, and holds across the above five cultural dimensions, some divergent evidence emerges in the case of RFID. As previously discussed, this may be explained by the fact that customers in Greece are significantly more concerned with personal information collection, while customers in Ireland seem more concerned with its unauthorized use.

7. Conclusions, Managerial Implications, and Directions for Future Research

The proposed technology-based service quality framework provides a cross-cultural context in an area where little previous research has been done. Although the framework and its associated propositions require further testing, we believe that it provides a better understanding of TR and SQA from a cross-cultural perspective and can provide valuable insights on how to improve the "ergonomic match" (Ahasan and Imbeau, 2003) between customers and the service offering. Consistent with previous research, our findings contribute to the debate on customers' acceptance of new service technologies by substantiating the argument that technology acceptance is contingent on "cultural affinity" (Phillips et al., 1994). In particular, our findings illustrate that low uncertainty avoidance, low institutional collectivism, high in-group collectivism, high gender egalitarianism, and low humane orientation (i.e., Greece) seem conducive to greater customer acceptance of new service technologies. Although positive evaluations of service technologies are obtained across all three scenarios, the explanatory power of the framework is greatest for the mid-range scenario, which relies on the use of the LCD screen and provides a moderate amount of information vis-à-vis the other two options considered. Surprisingly, the framework fails to explain the greater customer acceptance of RFID in a high uncertainty avoidance, high institutional collectivism, low in-group collectivism, low gender egalitarianism, and high humane orientation society (i.e., Ireland). This apparent contradiction raises the interesting question of whether in Greece the overall positive customer acceptance of dynamic pricing that utilizes an LCD screen might actually be hindered by making customers aware of its reliance on RFID technology. Our findings illustrate the extent to which customers in Greece are significantly more concerned by personal information collection than their counterparts in Ireland. Alternatively, we could also question whether in Ireland the overall
less positive customers’ acceptance of dynamic pricing that utilizes an LCD screen might be actually improved by making customers aware of its reliance on RFID technology. Our findings show for instance that customers in Ireland are significantly less concerned about personal information collection than their counterparts in Greece, although they seem significantly more concerned by its unauthorized use. These different privacy concerns would be consistent with the presence of high in-group collectivism in Greece and high institutional collectivism in Ireland as documented in the literature (Shore et al., 2001). Although the emergence of new service technologies seems to evoke a process of spatial convergence, the results of this study support the argument that some national cultures are more amenable to accepting new service technologies than others. Different dimensions of national culture have facilitating or inhibiting consequences on the customers’ acceptance of new service technologies. Although technology acceptance may be viewed as a vehicle for change, research indicates that national culture is highly resistant to change (Hofstede, 1980). Thus, although new service technologies can be easily changed, the fundamental values that underlie the customers’acceptance of those technologies are very difficult to change. This would suggest a strong need for retailers to adapt their new service technologies to the local national culture to improve “the ergonomic match” and thus ensuring greater customer acceptance. This does not necessarily entail compromising the integrity of their marketing practices. Rather, they should employ new service technologies that can be most effectively implemented in the local culture. While the results have clearly indicated that there is a compelling rationale for taking cultural dimensions into consideration to assess customers’ acceptance of new service technologies, several limitations need to be addressed in future research to strengthen the generalizability of our findings. First, the framework needs to be tested in relation to a range of countries displaying more divergent cultural dimensions. In this study for example, both Greece and Ireland present the same high/low score pattern for uncertainty avoidance, institutional collectivism, and humane orientation. Although the findings provide some indications of the individual impact of cultural dimensions, they do not take into account the interplay between concomitant cultural dimensions. Second, the framework needs to be assessed in relation to a broader range of technologies, which should differ in terms of degree of maturity (i.e., the LCD screen is a more mature technology that is more widely available in stores than PDAs), the degree of technological involvement of the customer (i.e., very low with the electronic shelf-tag, while it is very high for the use of a PDA), and the amount of information associated with the service. Overall, in demonstrating the application of our conceptual framework to the assessment of customers’ readiness to adopt, willingness to use, and evaluation of RFID technology and any consequent influence on customers’ behavior, this study suggests that the cross-cultural framework developed in this study may have broader applicability in the case of the assessment of customers’ response to other discontinuous and revolutionary technologies.
References

Ahasan, R and D Imbeau (2003). Work Study, 53(2/3), 68–75.
Ajzen, I and M Fishbein (1980). Understanding Attitudes and Predicting Social Behavior. Englewood Cliffs, NJ: Prentice-Hall.
Bagozzi, RP, FD Davis and PR Warshaw (1992). Human Relations, 45(7), 660.
Davis, FD, RP Bagozzi and PR Warshaw (1989). Journal of Applied Psychology, 22, 1111.
Dabholkar, PA and R Bagozzi (2008). Journal of the Academy of Marketing Science, 30(3), 184.
Davis, FD (1989). MIS Quarterly, 13, 319.
Davis, FD (1993). International Journal of Man-Machine Studies, 38, 475–477.
Erez, M and PC Earley (1993). Culture, Self-Identity, and Work. New York: Oxford University Press.
Fishbein, M (1980). A theory of reasoned action: Some applications and implications. In Nebraska Symposium on Motivation, p. 65. Lincoln, NE: University of Nebraska Press.
Gronroos, C (1982). European Journal of Marketing, 12(8), 588.
Henderson, R and MJ Divett (2003). International Journal of Computer Studies, 59(3), 383.
Hofstede, G (1980). Culture's Consequences: International Differences in Work-related Values. London: Sage.
House, RJ, PJ Hanges, M Javidan, PW Dorfman and V Gupta (2004). Culture, Leadership and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, CA: Sage.
Janczewski, LJ (1992). Relationships between information technology and competitive advantage in New Zealand businesses. In Proceedings of the 1992 Information Resources Management Association Conference, Charleston, p. 347.
Kedia, BL and RS Bhagat (1988). Academy of Management Review, 13(4), 559.
Lachman, R (1983). Human Relations, 36, 563.
Lin, JC and PL Hsieh (2005). Assessing self-service technology encounters: Development and validation of SSTQUAL scale. Paper presented at the Taiwan Institute of Marketing Science Annual Conference, Taipei, Taiwan.
Lin, JC and P Hsieh (2006). International Journal of Service Industry Management, 17(5), 497.
Lutz, RJ (1991). The role of attitude theory in marketing. In Perspectives in Customer Behavior, Kassarjian, HH and TS Robertson (eds.), Englewood Cliffs, NJ: Prentice-Hall.
Mathieson, K (1991). Information Systems Research, 2, 173.
Matthing, J, P Kristensson, A Gustafsson and A Parasuraman (2006). Journal of Services Marketing, 20(5), 288.
Meuter, ML, AL Ostrom, RI Roundtree and MJ Bitner (2000). Journal of Marketing, 64(3), 50.
Parasuraman, A, V Zeithaml and L Berry (1999). Journal of Marketing, 49(4), 41.
Parasuraman, A (2000). Journal of Service Research, 2(4), 307.
Parasuraman, A and C Colby (2001). Techno-Ready Marketing: How and Why Your Customers Adopt Technology. New York: The Free Press.
Parasuraman, A, VA Zeithaml and A Malhotra (2005). Journal of Service Research, 7(3), 213.
Pare, G and JJ Elam (1995). Behaviour and Information Technology, 14, 215.
Parsons, T and EA Shils (1951). Towards a General Theory of Action. Cambridge, MA: Harvard University Press.
Phillips, LA, R Calantone and M Lee (1994). Journal of Business & Industrial Marketing, 9(2), 16.
Ryan, MJ and EH Bonfield (1975). Journal of Consumer Research, 2, 118.
Shih, Y and K Fang (2004). Internet Research, 14(3), 213.
Shore, B and V Venkatachalam (1995). Journal of Global Information Management, 3(3), 5.
Shore, B, AR Venkatachalam and E Solorzano (2001). Technology in Society, 23, 563.
Straub, D, M Keil and W Brenner (1997). Information & Management, 33, 1.
Tsikriktsis, N (2004). Journal of Service Research, 7(1), 42.
Triandis, HC (1995). Culture: Theoretical and methodological issues. In Handbook of Industrial and Organizational Psychology, Vol. 4, Dunnette, MD and L Hough (eds.), Palo Alto, CA: Consulting Psychologists Press.
Vecchi, A, S O'Riordain and L Brennan (2007). The SMART project: Intelligent integration of supply chain processes and customer services — A retail perspective. In Proceedings of the Symposium on Production, Logistics, and International Operations (SIMPOI) at the Annual Conference of the Production and Operations Management Society (POMS), Rio de Janeiro, Brazil, 8–10 August 2007.
Vecchi, A and L Brennan (2008). Supply chain innovation for short life products: RFID deployment and implementation. In Proceedings of the European Operations Management Association (EUROMA) Annual Conference, Groningen, The Netherlands, 15–18 June 2008.
Biographical Notes Alessandra Vecchi is currently a Research Fellow based in the School of Business and in the International Institute of Integration Studies (IIIS) at Trinity College. She is working on the SMART Project which aims to support intelligent business networking and consumer services based on effective and efficient information/knowledge sharing and collaboration across supply chain partners, capitalising on the fact that products are uniquely identified with the use of the RFID technology. She teaches at the undergraduate and graduate level in the areas of International Business, Economic Sociology and Globalisation Studies. Her research interests span from competitive advantage, value creation and strategic upgrading of industrial clusters to cross-cultural management and supply chain management. Louis Brennan is an Associate Professor in Business Studies in the School of Business Studies at Trinity College, Dublin and a Research Associate of the Institute for International Integration Studies at Trinity College. His teaching, research and consulting interests are in international business, global supply chains, operations strategy and the design of emergent work systems. He has published extensively in his areas of interest. He is the Irish co-ordinator of the International Manufacturing Strategy Survey Network. He has worked at a number of universities in Asia, Europe and the USA.
Aristeidis Theotokis is a PhD student at the Department of Management Science and Technology of Athens University of Economics and Business (AUEB). He is also a research officer in the ELTRUN e-business Research Center. He holds a BSc in Industrial Management and Technology from the University of Piraeus and an MSc in Operational Research from the School of Mathematics of the University of Edinburgh. His doctoral thesis investigates consumer acceptance of technology-based services in retailing based on a service science approach. He has presented his work at several academic conferences and practitioner workshops. He was awarded the best paper award at the European Conference on Information Systems and has published his work in the European Journal of Marketing and the European Journal of Information Systems.
Chapter 19
Operational Efficiency Management Tool Placing Resources in Intangible Assets

CLAUDELINO MARTINS DIAS JUNIOR∗, OSMAR POSSAMAI† and RICARDO GONÇALVES‡

Departamento de Engenharia de Produção, Universidade Federal de Santa Catarina (Brazil) and Departamento de Engenharia Eletrotécnica, Universidade Nova de Lisboa (Portugal)
Wlademiro do Amaral Street, 132, Vila Amaral, Dourados, Mato Grosso do Sul, Brazil, Zip code 79814-022
∗[email protected] or [email protected]
†[email protected]
‡[email protected]
This study demonstrates the importance of considering intangible assets in the internal context of organizations, starting from the premise that these assets contribute to the formation of the structures of a production line that must meet competitiveness requirements.

Keywords: Intangible assets; performance indicators; operational efficiency.
1. Introduction

The search for better levels of utilization of organizational assets within the internal business organization has become imperative, as such utilization can increase the performance of manufacturing units in reaching strategic objectives, provided that the compatibility of those objectives can be shown to improve the acceptance of the products offered and the invoicing margins obtained through greater efficiency in the production of goods and services. In parallel with these organizational objectives, an analysis of manufacturing efficiency is needed; in this manner, we seek to justify intangible investments and show how they can increase operational efficiency. Considering that the strategic objectives of the organization can be monitored by means of performance indicators, we propose to develop a number of specific
Stage 1 (preparation stage): analyzes the product portfolio
  Step 1: defines the levels of product attraction
  Step 2: prioritizes the strategic products
Stage 2: identifies the intangible assets from the SPPs∗
  Step 3: determines the value of the intangible assets related to R&D of the SPPs
  Step 4: identifies and determines the value of the internal intangible assets of the SPPs
  Step 5: calculates the efficiency indexes for the SPPs
Stage 3: establishes the hierarchy of the manufacture objectives
  Step 6: determines the manufacture objectives for the production of the SPPs
  Step 7: establishes the manufacture sector objectives
Stage 4: proposes the PIs related to the IIAs for the manufacture sectors
  Step 8: proposes the indicators related to the IIAs for the manufacture sectors (PIIAAs∗∗)
  Step 9: establishes the degrees of importance of the PIIAAs
  Step 10: calculates the degree of criticality and margins of contribution of the IIAs
Stage 5: establishes the criteria for the application of resources to the IIAs
  Step 11: revises the elements of the critical IIAs
  Step 12: prioritizes the allocation of resources to the elements of the IIAs

∗SPP, strategically prioritized product; ∗∗PIIAA, performance indicator for internal intangible assets.

Figure 1. Operational efficiency management tool placing resources in intangible assets.
indicators for internal intangible assets (IIAs) of the organization. These indicators should permit verification of the impact of these assets on the increase in operational efficiency. The principal results obtained refer to the use of performance indicators related to IIAs in defining the allocation of financial resources to these assets for the improvement of the efficiency indexes of the manufacturing unit.
2. General Management Tool Presentation

The general objective of this study is to propose a management tool for operational efficiency based on the allocation of resources to intangible assets. The stages and steps are represented in Fig. 1, following their logical order of execution. Note that feedback between steps and stages is necessary, as it gives better visibility of the objectives the tool seeks to fulfill. In Stage 1 (preparation stage), the objective is to analyze the portfolio of business products to identify which groups of products bring the major contribution to the business. Stage 2 identifies the intangible assets attached to the products considered strategic, which consequently should be prioritized. In Stage 3, the manufacturing objectives are established and hierarchically organized, taking into consideration the operational efficiency indexes of the strategically prioritized products. Stage 4 proposes performance indicators for organizational activities that are related to intangible assets. Stage 5 establishes the allocation of resources to the intangible assets considered critical.
3. Tool Application Context

The tool was applied in an industrial company in the metal-mechanical production sector. The company is characterized as a world-class leader in its product segment, with around 3500 products and 2500 employees, supplying predominantly the national market, as well as Latin America, and entering the European market in the last few years.
4. Tool Application

For the effective application of the tool, we always explained the objectives of the study and the underlying methodology to each director involved, in the hope that the results would promote a reflection on the contribution that this study could bring to the development of this area of activity and to the company as a whole.
4.1. Stage 1 (Preparation Stage) — Analysis of the Product Portfolio of MassFerro

The company researched maintains a business unit responsible for the development of new product lines that complement its current business range, considering that these product lines have represented a significant part of its turnover over the last 3 years. Therefore, it was proposed to establish a team representative of these product lines, from the perspective of the functional directors, responsible for the functioning of the management tool. This team is composed of the marketing director, the product development director (representing the planning director), the planning and control of production (PCPM) director, the stages director, and the quality control director. The quality control director was included to add knowledge of technical factors that could be related to variations in the market tendency and the market positioning of MassFerro products.

Step 1 — Definition of the levels of product attraction

At an initial meeting of the management tool team, with the presence of the new product development manager representing the director of planning, the group explained that MassFerro has, in recent years, dedicated itself principally to the exploration of external consumer markets by developing new product lines linked to the metal-mechanical area. However, it was noticed that these products do not yet represent an essential part of the company, as these markets have a high level of worldwide competitiveness, while showing the possibility of significant profit in the future.

Step 2 — Prioritization of the strategic products

For the preliminary analysis, three product lines were selected by the management tool team; the line called "Diversive" was chosen because it represents products with life-cycle characteristics present in the other lines maintained by MassFerro, while representing a significant part of the total scope of products and market share. Taking the Diversive line as the basis of the study allowed the analysis of the group of products that constitute its matrix (drivers); this line showed eight products to be evaluated by market positioning. As a result, the strategic products considered for analysis were P200, PMAT, P10, P25, P900, PMAX, PBV15, and P30. It should be remembered that these products have different invoicing but, as a group, they have common functional characteristics within this context. To show the positional dispersion of the products analyzed, the marketing director and the quality director opted for the multiplication of the values of each factor composing the "market tendency" axis of the positioning map, and for a new scale of values to prioritize the classifications (decline: 0–2.99 points; constant: 3.00–6.99 points; ascending: 7.00–10.00 points).
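A small sketch of the classification rule just described, multiplying the two factor scores and banding the product on the 0–10 scale, is given below; the lower bound of the "decline" band is inferred from the "constant" band and should be treated as an assumption.

    # Classify a product's market tendency from two factor scores, as described above.
    # Band boundaries: decline below 3.00, constant 3.00-6.99, ascending 7.00-10.00
    # (the "decline" lower bound is an inference from the other two bands).
    def market_tendency(factor_a, factor_b):
        score = factor_a * factor_b
        if score < 3.00:
            label = "decline"
        elif score < 7.00:
            label = "constant"
        else:
            label = "ascending"
        return score, label

    # e.g., two hypothetical factor scores of 2.6 and 2.0
    print(market_tendency(2.6, 2.0))   # -> (5.2, 'constant')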
Figure 2. Product positioning map of MassFerro — expected profit. (Axes: % profit × market tendencies; plotted products: P30 (5.2%; 10.0), P25 (15.3%; 7.4), P900 (17.6%; 6.8), PMAX (6.3%; 4.5), P200 (15.9%; 3.3), PBV15 (6.4%; 1.4), P10 (15.2%; 1.4), PMAT (18.1%; 0.0).)
Working with the expected and real invoicing values for each product line selected, the adaptation of the point scale on the positioning map (see Fig. 2, expected invoicing, and Fig. 3, real invoicing) did not alter the outcome of the instrument with respect to the initially proposed objective. Stage 2 then seeks to identify the intangible assets present in the context of development and fabrication of the SPPs (P200 and P900) and to calculate their efficiency indexes from the quantitative and qualitative values of these same assets.

4.2. Stage 2 — Identification of the Intangible Assets of the SPPs

By observing the product positioning map (real profit) of MassFerro (see Fig. 3), and considering that the products P200 and P900 (SPPs) represent two of the eight products that sustain the competitiveness of the Diversive line, we proceed to the analysis of the development and production context of MassFerro that contributes to the competitiveness of these same products.

Step 3 — Determination of the value of the intangible assets related to R&D of the SPPs

Considering the expenses for the development and conception of the production lines, this value is seen as a clearer representation of the importance of intangible assets
Figure 3. Product positioning map of MassFerro — real profit. (Axes: % profit × market tendencies; plotted products: P30 (−6.2%; 10.0), P25 (−18.1%; 7.4), P900 (10.6%; 6.8), PMAX (−5.9%; 4.5), P200 (11.4%; 3.3), PBV15 (−7.6%; 1.4), P10 (3.8%; 1.4), PMAT (7.9%; 0.0).)
and of the qualitative value they aggregate in the internal manufacturing context of the company and, consequently, in the remaining products that compose the Diversive range. It was decided to map the quantitative variable, in this case represented by the investments in R&D, and to calculate the aggregate value of the SPPs by analysis. The investments were R$ 168,000.00 for P200 and R$ 90,000.00 for P900; the actual commercialization prices and produced units are R$ 3,296.24 (4854 units) for P200 and R$ 9,835.00 (519 units) for P900. The total production expenses per unit were about R$ 1,130.80 and R$ 2,730.30, respectively, and the time taken for development and commercialization was about 2 years for P200 and 7 years for P900. The calculation of the VPL (net present value) of the investment and of the value obtained during the period of commercialization, resulting in the final VPL related to the SPPs, is shown in Table 1.
Table 1. VPLs of Investment, Receipt of VPL, and Final (Traditional) VPL of P200 and P900.

           VPL investment   Receipt of VPL    VPL final (traditional)
VPL P200   157,435.50       16,077,141.72     15,919,706.22
VPL P900   790,745.61       12,750,178.72     11,959,433.11
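A minimal sketch of the underlying VPL (net present value) arithmetic is given below: the final "traditional" VPL is the present value of the commercialization receipts minus the present value of the R&D investment, discounted at the 11.25% rate used in the chapter. The yearly cash-flow vectors shown are illustrative, since the chapter reports only the resulting VPLs.

    # Minimal VPL (net present value) sketch, discounting at the chapter's 11.25% rate.
    def vpl(cash_flows, rate=0.1125):
        """Present value of cash_flows[t] received at the end of year t+1."""
        return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

    investment_p200 = [168_000.00]                  # R&D outlay for P200 (chapter figure)
    receipts_p200   = [8_500_000.00, 9_500_000.00]  # hypothetical yearly receipts, 2-year cycle

    vpl_final = vpl(receipts_p200) - vpl(investment_p200)
    print(f"VPL final (traditional): R$ {vpl_final:,.2f}")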
Faced with these divergences in the development and commercialization horizons of each of the products, one could seek to equalize the conditions to equal time cycles; however, that would directly influence the calculation of the VPL and, consequently, the estimated values of the IIAs related to the manufacturing of these products. The calculation of the final "traditional" VPL of the SPPs completes the quantitative perspective of the value of the organizational assets related to R&D; the qualitative value of the SPPs must therefore be examined next.

Step 4 — Identification and determination of the value of the IIAs

Step 3 determined the quantitative value of the organizational assets associated with the development of the production processes of new products. This step now seeks to determine which IIAs are present in the manufacturing context of MassFerro, by means of the collective perception of the teams involved in Step 3. The driving team of the management tool, helped by the perception of the project development manager and the production manager, defined the qualitative variables that represent the IIAs of the manufacture related to P200 and P900 of the Diversive range, as follows:

• Labor integrated to the product
• Work process development
• Prime material availability
• Supplier confidence
• Alternative contracts of supply
• Ease of product construction
We now move on to the representative percentages of the IIAs described, to confirm the qualitative value these assets have in relation to P200 and P900 in the manufacturing context, which in this case is represented by the Geske-based VPL calculation net of the VPL of the investment. For the calculations with the Geske formula, a volatility (σ) of 0.3 was used, and the critical value of the project (Fc) was estimated at approximately 76% of the actual cash flow (F) destined to the commercialization of each product. The calculation also uses the univariate (N) and bivariate (M) cumulative normal distribution functions; the mean and standard deviation used were, respectively, 0 and 1 (the standard normal distribution). The maturities of the first option on the R&D of P200 and P900, informed by the director of product development, were, respectively, 0.5 year and 1 year. The risk discount rate (r) adopted was the same for the traditional VPL and the Geske VPL calculations, following the reference rate practiced by the Central Bank of Brazil, namely 11.25% (in March 2007).
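The chapter does not reproduce the Geske formula itself. The sketch below implements the standard Geske (1979) compound-call valuation, which prices an option (the commercialization decision) on an option (the R&D project), assuming the chapter applies the model in this standard form; the numeric arguments in the example call are illustrative only.

    # Geske (1979) compound-call sketch: a call (maturity t1, strike K1) on a call
    # (maturity t2, strike K2) on an asset S. S_crit is the asset value at t1 above
    # which the first option is exercised (the chapter fixes it at ~76% of cash flow).
    from math import exp, log, sqrt
    from scipy.stats import multivariate_normal, norm

    def geske_compound_call(S, K1, K2, t1, t2, r, sigma, S_crit):
        rho = sqrt(t1 / t2)
        a1 = (log(S / S_crit) + (r + 0.5 * sigma**2) * t1) / (sigma * sqrt(t1))
        a2 = a1 - sigma * sqrt(t1)
        b1 = (log(S / K2) + (r + 0.5 * sigma**2) * t2) / (sigma * sqrt(t2))
        b2 = b1 - sigma * sqrt(t2)
        biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
        M = lambda x, y: biv.cdf([x, y])  # bivariate standard normal CDF
        return (S * M(a1, b1)
                - K2 * exp(-r * t2) * M(a2, b2)
                - K1 * exp(-r * t1) * norm.cdf(a2))

    # Illustrative call with the chapter's rate, volatility, and first-option maturity.
    print(geske_compound_call(S=1.0, K1=0.1, K2=0.5, t1=0.5, t2=2.0,
                              r=0.1125, sigma=0.3, S_crit=0.76))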
Table 2. Comparison Between the Geske VPL and the Traditional VPL of P200 and P900.

VPL of SPPs     P200∗            P900∗
Traditional     15,919,706.22    11,959,433.11
Geske           15,175,845.45    11,035,370.63

∗Monetary values are in R$ (reais).
Table 3. Description of the Participation Percentage of the Items of Fabrication of the SPPs.

Items of fabrication of the SPPs (P200 and P900)   P200 (%)   P900 (%)
Prime material                                      74.90      83.88
Labor group                                         25.10      16.12
  Engineering process and quality                    9.89       6.63
  Direct labor                                       5.79       3.77
  Depreciation                                       2.77       1.79
  General costs of fabrication                       1.14       0.67
  Staff                                              0.99       0.54
  Electric energy                                    0.74       0.38
  External labor                                     3.78       2.34
The calculated value of the IIAs related to P200 and P900, represented by the Geske VPL, is shown in Table 2. We can now project the participation of the intangible assets in two large groups, "prime material" and "labor" (see Table 3). It is estimated that P200 and P900 represent R$ 3,809,137.20 and R$ 1,778,901.75, respectively, in economic resources represented by intangible assets, that is, the "labor" share of the Geske VPL (25.10% of R$ 15,175,845.45 for P200 and 16.12% of R$ 11,035,370.63 for P900). This step thus defines which IIAs are present in the internal context of manufacturing, determines the economic value of these assets as expressed by the Geske VPL and, consequently, attributes, through the vision of the managers involved, an actual value to the IIAs related to the analyzed products. The labor group characterizes the orientation of the investments of MassFerro; it was represented proportionally by subgroups, giving a dimension of the amount invested in each item.

Step 5 — Calculation of the efficiency indexes for the SPPs

This step builds on the calculation of both the quantitative and the qualitative organizational values directly related to the intangible assets present in the product development context of MassFerro. However, knowing that the SPPs (P200 and P900)
belong to the same portfolio of products and are attached to similar and/or complementary production structures, as a function of the characteristics of the other products, these two products are distinguished by different cost structures. Equally, the comprehension of the sections and/or groups of production activities (departmentalization) in MassFerro obeys two different perspectives: one concerning the structure of the production processes, the other concerning the appropriation of the costs associated with the manufactured products. In this way, we calculate the sectorial efficiency indexes, taking into consideration the tangible and intangible value of each SPP under analysis, with base commercialization prices of R$ 3,296.24 for P200 and R$ 9,235.00 for P900, and redefining the percentage participation of the operations appropriate to each sector. The tangible cost estimated for P200 and P900 is around R$ 1,130.43 and R$ 3,036.05, respectively, and the quantities of tangible resources employed were represented by the annual production of P200 (4854 units) and of P900 (519 units). It should be noted that the percentages found for the manufacturing efficiency of each SPP in a given sector, shown in Table 4, follow a re-reading by the PCPM management of the respective production operations and of the production time programmed for each operation. Meanwhile, it should be borne in mind that the block manufacture sector (S2) functions most of the time as an appendix of the manufacturing sector (S1); even when analyzed individually alongside the other sectors of MassFerro, the operational efficiency indexes found remain as close as possible to the real production situation, given the mobilization of resources necessary for an effective calculation of sectorial efficiency considering the tangible and intangible values employed in the fabrication of the SPPs. Stage 3 then addresses the importance of each of these assets for the organizational objectives.
Table 4. Efficiency Indexes of the Manufacture Sectors of the Company for Each SPP.

Sector of manufacture     P200   P900
Build (S1)                0.70   0.20
Building blocks (S2)      0.17   0.02
Reservatory (S3)          0.99   0.45
Logistics (S4)            0.92   0.33
4.3. Stage 3 — Establishing the Hierarchy of the Sectorial Objectives of Manufacture

This stage builds on the operational efficiency indexes calculated for each sector involved in the development and fabrication of the products. It determines the internal objectives of the manufacturing sectors of MassFerro and proposes a hierarchy for them in the fabrication of the products P200 and P900, as a format for improving the efficiency indexes obtained for the production processes of these SPPs.
Step 6 — Determination of the manufacture objectives for the production of the SPPs

In Stage 2 (Step 4), we determined the economic values referring to the participation of the IIAs in the production of P200 and P900. Considering, however, that this class of assets is not solely responsible for the creation of value in the internal manufacturing context, we now look to determine the existence of other internal variables (described as manufacture objectives) that could determine an improvement of the operational efficiency indexes obtained by the sectors of manufacture (S1), block manufacture (S2), reserves (S3), and logistics (S4) in the manufacturing of the products P200 and P900. It should be reiterated that the researched company seeks to reach its strategic manufacturing objectives, including the results established for these same contexts, through performance indicators both internal and external, defined by the direction and perceived by the planning manager and the quality control manager. Table 5 proposes a designation and description of the objectives commonly acknowledged in the internal manufacturing context of MassFerro, as a starting point to orient production management.
Table 5. Description of the Objectives of Manufacture of MassFerro.

Designation   Objective of manufacture (MO)
MO1           Modernize the plant
MO2           Improve production
MO3           Improve the delivery times
MO4           Train the labor force
MO5           Computerize production
MO6           Develop the business partners
MO7           Contract personnel
MO8           Increase the stock of finished product
MO9           Decrease the intermediate stock
MO10          Increase the production volume
MO11          Increase performance
MO12          Decrease the set-up time
Production management is responsible for each sector's description and its priorities, a fact made possible by the number of operatives that these sectors aggregate in the production context of the Diversive line of the company. Note that the manufacture objective "develop the business partners" is transformed into a sector objective (SO) on two occasions, for P200 and for P900, and that the performance indicators related uniquely to the logistics sector will be targeted to reach the objective (SO2;5). The hierarchically ordered internal MOs are described in Table 6; the perceptions of the production manager and of those responsible for the sectors define the level of priority of these objectives, attributing a weight (4th column) and a hierarchical order (5th column) to them. These objectives are transformed into SOs in agreement with the sectorial efficiency indexes obtained in Step 5 for each of the SPPs analyzed. The remaining objectives of manufacture (MO1, MO2, MO5, MO7, MO8, MO11, and MO12) were not treated as strategic at the present time in the perception of the production manager and sector colleagues, because they are already contemplated, in parallel form, in the implantation of a production logic based on "lean manufacturing."

Step 7 — Establishing the sector objectives of manufacture

The objective of this step is to sectorize the MOs of MassFerro, considering that these objectives constitute internal variables linked to the improvement of the operational efficiency indexes observed in Step 6. In this way, the driving team of the management tool, guided by the perception of the production manager, proposed to transpose the MOs to the respective (operational) sectors related to the fabrication of the SPPs (P200 and P900), these being manufacture (S1), block manufacture (S2), reserves (S3), and logistics (S4).
Table 6. Determining the Hierarchical Manufacture Objectives of MassFerro.

SPP    MO     Description of the manufacture objective (MO)   Priority (weight)   Hierarchy
P200   MO3    Decreases the delivery times                    9                   3rd
P200   MO4    Qualifies the work force                        6                   6th
P200   MO6    Develops the business partners                  9                   4th
P200   MO9    Decreases the intermediate stock                9                   2nd
P900   MO6    Develops the business partners                  8                   5th
P900   MO10   Increases the production volume                 10                  1st
Table 7. Establishing the Sector Objectives of Manufacture of MassFerro.

SPP    Manufacture objectives (MOs)                Sector of manufacture (operations)   Manufacture sector objectives
P200   • Decrease the delivery times (MO3)         Build (S1)                           SO6
       • Qualify the work force (MO6)              Building of blocks (S2)              —
       • Develop the business partners (MO4)       Reservatories (S3)                   —
                                                   Logistics (S4)                       SO3, SO4
P900   • Decrease the intermediate stocks (MO2)    Build (S1)                           SO1
       • Develop the business partners (MO5)       Building blocks (S2)                 —
       • Increase the volume of production (MO1)   Reservatories (S3)                   —
                                                   Logistics (S4)                       SO2, SO5
Each of the MOs determined and hierarchically organized in Step 6 and transformed into sectorial objectives (SOs) keeps a relation with the improvement of the actual efficiency indexes obtained in Step 5 for each SPP (see Table 7). Note how the SOs determined make evident the importance of the participation of the manufacture and logistics sectors in the pursuit of the objectives of manufacture.

4.4. Stage 4 — Proposition of the PIs Related to the IIAs for the Manufacture Sectors

The objective of this stage is to propose indicators related to the IIAs responsible for the dynamics of value formation, and to define the importance of each of these indicators within the manufacturing context, starting from their verification against the sector objectives defined in Step 7.

Step 8 — Proposition of the PIs related to the IIAs for the manufacture sectors

Taking as an initial reference the SOs determined in Step 7, the present step establishes a relationship between these objectives and the PIs usually utilized to measure the performance of the production activity. This relationship was obtained from the individual perceptions of the planning director and the production manager, owing to the temporary unavailability of the integrated team conducting the management tool. In this way, we describe in Table 8 the PIs of manufacture that relate to the objectives described in Step 7. It should be stated that not all the PIs proposed in Table 8 are at the present moment consolidated within the manufacturing management unit of MassFerro.
Table 8. Identification of the Performance Indicators Related to the IIAs (PIIAAs).

SPP    Sector objective                        Performance indicator                          Related to IIAs?   PIIAA
P200   SO1 increases the volume production     Index of supply quality                        No                 —
                                               Participation of new production profit         Yes                Participation of new production profit
       SO2 decreases the delivery times        Lead time in dealing with complaints           No                 —
                                               Index of rework                                Yes                Index of rework
       SO2;5 develops the business partners    Non-quality costs                              Yes                Non-quality costs
                                               Generated residuals                            No                 —
       SO6 qualifies the work force            Degree of poly-validity and poly-competency    Yes                Degree of poly-validity and poly-competency
                                               Productivity                                   Yes                Productivity
P900   SO4 decreases the intermediate stocks   Attend build program                           No                 —
                                               Stock turnover                                 Yes                Stock turnover
However, once these PIs are included in a monitoring logic aimed at reaching the sector objectives, it is hoped that they can serve as reference PIIAAs, considering their relationship with the IIAs indicated in Step 4.

Step 9 — Establishing the degree of importance of the PIIAAs

This step determines the degree of importance of each of the objectives of manufacture and, likewise, defines the degree of importance of the PIIAAs to be reached by each of the SOs determined in Step 7. Similarly, it defines the degree of influence of each IIA over the PIIAAs determined in Step 8, in order to verify, intuitively, their influence in each manufacturing sector. To this end, a relationship matrix between the sectorial objectives of manufacture and the performance indicators for the IIAs is described (see Table 9). From Table 9, the PCPM manager can determine the degree of influence of the IIAs in reaching the objectives of manufacture and establish degrees of influence for each indicator related to these assets.
Table 9. Relationship Matrix Between the SOs and the PIIAAs of MassFerro.

(Cell entries: degree of relationship, with the importance-weighted value in parentheses.)

Sector objective (SPP)                       PIIAA                                         Importance   Build     Building blocks   Reservatories   Logistics   Total   Relative weight (%)
SO1 increases the volume production (P200)   Participation of new product profit           3            1 (3)     0 (0)             3 (9)           6 (18)      30      4.90
SO2 decreases the delivery times             Index of rework                               6            3 (18)    3 (18)            6 (36)          3 (18)      90      14.71
SO2;5 develops the business partners         Non-quality costs                             3            1 (3)     1 (3)             3 (9)           3 (9)       24      3.92
SO6 qualifies the work force                 Degree of poly-validity and poly-competency   9            9 (81)    3 (27)            1 (9)           3 (27)      144     23.53
                                             Productivity                                  9            9 (81)    6 (54)            3 (27)          3 (27)      189     30.88
SO4 decreases the intermediate stock (P900)  Stock turnover                                9            3 (27)    3 (27)            3 (27)          6 (54)      135     22.06
Sector totals (weighted)                                                                                213       129               117             153         612     100
Relative weight of sector (%)                                                                           34.80     21.08             19.12           25.00
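The arithmetic behind Table 9 can be made explicit: each cell multiplies the strength of the SO-sector relationship by the importance of the sector objective, and the row and column totals over the grand total (612) yield the relative weights. The sketch below reproduces the table's totals; the strength and importance values are transcribed from Table 9 as reconstructed above.

    # Relationship-matrix arithmetic of Table 9: cell = importance x relationship strength;
    # relative weights are row/column totals divided by the grand total.
    sectors = ["Build", "Building blocks", "Reservatories", "Logistics"]
    rows = [  # (PIIAA, importance of the sector objective, relationship strengths per sector)
        ("Participation of new product profit", 3, [1, 0, 3, 6]),
        ("Index of rework",                     6, [3, 3, 6, 3]),
        ("Non-quality costs",                   3, [1, 1, 3, 3]),
        ("Poly-validity and poly-competency",   9, [9, 3, 1, 3]),
        ("Productivity",                        9, [9, 6, 3, 3]),
        ("Stock turnover",                      9, [3, 3, 3, 6]),
    ]

    weighted = [[imp * s for s in strengths] for _, imp, strengths in rows]
    grand = sum(map(sum, weighted))  # 612, as in Table 9

    for (name, _, _), row in zip(rows, weighted):
        print(f"{name:40s} total {sum(row):3d}  weight {100 * sum(row) / grand:5.2f}%")
    for j, sector in enumerate(sectors):
        col = sum(r[j] for r in weighted)
        print(f"{sector:40s} total {col:3d}  weight {100 * col / grand:5.2f}%")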
We can obtain the level of importance of each sector indicator, which directly contributes to reaching the MOs established and hierarchically ordered in Step 6. Each manufacturing sector presents a different level of involvement with the PIIAAs and with the consequent reaching of the MOs (transformed into SOs in Step 7): manufacture (34.80%), logistics (25.00%), block manufacture (21.08%), and reserves (19.12%). The analysis of the PIIAAs, extended to all members of the team conducting the tool by the PCPM manager, suggested that the greatest significance in this context is related to P900, the analyzed SPP that represents the largest degree of profitability for MassFerro. The degree of poly-validity and poly-competency (23.53%), productivity (30.88%), and stock turnover (22.06%) demonstrate the level of importance of each IIA in the formation of each PIIAA, defining its sectorial degree of influence (GLIIA). Considering the percentage contribution presented by each PIIAA described in Table 9, we define the weight of these indicators, individually, in each sector of manufacture considered (manufacture, block manufacture, reserves, logistics), and thereby determine relative weights and levels of contribution for each sector. The largest contribution is that of "labor integrated to the product" (weight of 17.12 points). The IIAs "development of the work process," "availability of prime material," and "confidence of supply" have very similar contributions, with weights close to one another. The contribution of "ease of product manufacture" comes next, with a weight close to 12 points, and "creation of alternative contracts of supply" has the lowest contribution to the formation of the GLIIAs, with a weight of 7.57 points. The GLIIAs follow the tendency defined by these weights, staying around 21.45% for the IIA "labor integrated to the product," 18.13% for "development of the work process," 18.05% for "availability of prime material," 17.96% for "confidence of supply," 9.49% for "creation of alternative contracts of supply," and 14.92% for "ease of product manufacture." The PIIAAs that receive the largest contribution from the IIAs are present in the sectors of manufacture (productivity/GLIIA — 13.93%), logistics (stock turnover/GLIIA — 12.43%; productivity/GLIIA — 10.45%), and block manufacture (productivity/GLIIA — 9.28%).

Step 10 — Calculation of the degree of criticality and of the margins of contribution of the IIAs

The objective here is to define which IIAs have higher levels of criticality and, consequently, a more significant relation with the contribution to the sectors that compose the manufacturing unit; this step determines which PIIAAs present marginal contributions relevant to establishing a degree of criticality of the IIAs (see Table 10). This analysis was again executed by delegation to the PCPM team, controlled and coordinated by the director of PCPM.
Table 10. Calculated Critical Degrees of the IIAs of MassFerro (per sector: assets total / critical degree of the IIA–GLIIA).

IIAs of MassFerro                          GLIIA (%)   Build (S1)         Building blocks (S2)   Reservatories (S3)   Logistics (S4)
Integrated product labor                   21.45       7.1176 / 0.2544    3.6029 / 0.1288        2.0588 / 0.0883      4.2941 / 0.1842
Development of the work process            18.13       4.3824 / 0.2350    2.4853 / 0.1066        2.4559 / 0.1054      5.1029 / 0.2189
Availability of prime material             18.05       4.2353 / 0.1514    4.0000 / 0.1716        1.6176 / 0.0867      4.5588 / 0.1630
Supply confidence                          17.96       3.8529 / 0.1653    3.3235 / 0.2376        2.2941 / 0.1230      4.7353 / 0.1693
Creation of alternative supply contracts    9.49       5.8529 / 0.6277    0.4118 / 0.0442        0.2647 / 0.0284      6.4412 / 0.1430
Ease of product manufacture                14.92       5.8529 / 0.1746    3.2647 / 0.0812        2.0000 / 0.0746      0.6618 / 0.0987
With the degree of total influence of each IIA in the manufacturing context of MassFerro determined (still Step 9), and considering the PIIAA assets with values above 0.1, a degree of criticality was obtained for each of the IIAs by department. As shown in Table 10, the IIAs with the most expressive and uniform contributions across the unit of manufacture are "development of the work process" and "confidence of supply". Suggesting again a collective vision for all the responsible sectors of the PCPM of MassFerro, the IIAs were classified as "critical", "more critical", and "less critical" (see Table 11) in each of the manufacturing sectors, providing the basis for the establishment of criteria for the application of resources to these assets in Stage 5. It should be noted that the situational classification derives from the relative weight of each IIA in the analysis of Table 10, where the degrees of contribution, both total and marginal (by sector), are visualized; furthermore, the less critical IIAs, with levels below 0.1, were disregarded. It can also be seen that the sectors (build and building blocks) where the IIAs "labor integrated to the product", "confidence of supply", and "development of the work process" were determined as "more critical" by the PCPM of MassFerro present the lowest operational efficiency indexes, presupposing a direct relation between these assets and the improvement of those same indexes. Table 11 gives a detailed view of the contribution hierarchy of each IIA in the manufacturing context of MassFerro (order of priority, second column), by means of the degree of contribution (GLIIA, third column) and of the critical situation (situation of the IIA, fifth column), guiding the allocation of financial resources to the elements that compose these assets.
4.5. Stage 5 — Establishment of Criteria for the Application of Resources in the Critical IIAs
As the basis for the analysis of investments in IIAs and, consequently, for the rationalization and utilization of the other organizational assets, this stage determines priorities for the allocation of the financial resources destined to create the elements of these assets.
Step 11 — Definition of the elements of the critical IIAs
This step's objective is the mapping of the elements of the critical IIAs, starting from the determination of the relative weight of these elements in each manufacturing sector and in a global format, obtaining a representative percentage per sector for the human resources, processes, organizational structure, and environmental factors responsible for the formation of these organizational assets.
Table 11. Priority Order of the IIAs of MassFerro.

IIAs of MassFerro                               Order of   GLIIA       IIA (sector)      Situation of    Situation weight
                                                priority   p/sector                      the IIA         of the IIA
Creation of contracts for alternative suppliers  1         0.6277      Build             Critical        6
Prime material availability                      2         0.4283      Logistics         Critical        6
Integrated product labor                         3         0.2544      Build             Most critical   9
Prime material availability                      4         0.2528      Building blocks   Critical        6
Supplier confidence                              5         0.2376      Building blocks   Most critical   9
Develop work process                             6         0.2354      Building blocks   Critical        6
Develop work process                             7         0.2350      Build             Most critical   9
Develop work process                             8         0.2189      Logistics         Critical        6
Integrated product labor                         9         0.1842      Logistics         Critical        6
Ease of build                                   10         0.1746      Build             Critical        6
Supplier confidence                             11         0.1693      Logistics         Critical        6
Supplier confidence                             12         0.1653      Build             Critical        6
Prime material availability                     13         0.1514      Build             Critical        6
Creation of contracts for alternative suppliers 14         0.1430      Logistics         Critical        6
Integrated product labor                        15         0.1288      Building blocks   Most critical   9
Supplier confidence                             16         0.1230      Reservatories     Critical        6
Develop work process                            17         0.1054      Reservatories     Critical        6
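The priority order of Table 11 follows directly from the sectoral critical degrees of Table 10 once the sub-0.1 entries are discarded. Below is a rough sketch of that ranking step, over a subset of the values; the "critical" versus "most critical" labels themselves were a judgment of the PCPM team and are not computed here.

```python
# A rough sketch of the Step 10 ranking behind Table 11: flatten the
# (IIA, sector, critical degree) triples of Table 10, drop the "less critical"
# entries below the 0.1 cut-off mentioned in the text, and rank the remainder
# by descending degree. Only a subset of Table 10 is shown.
critical_degrees = {
    ("Creation of alternative supply contracts", "Build"): 0.6277,
    ("Integrated product labor", "Build"): 0.2544,
    ("Supply confidence", "Building blocks"): 0.2376,
    ("Development of the work process", "Build"): 0.2350,
    ("Development of the work process", "Logistics"): 0.2189,
    ("Integrated product labor", "Reservatories"): 0.0883,  # < 0.1, disregarded
}

CUTOFF = 0.1  # degrees below this level are "less critical"
ranked = sorted(
    ((iia, sector, d) for (iia, sector), d in critical_degrees.items() if d >= CUTOFF),
    key=lambda triple: triple[2],
    reverse=True,
)
for priority, (iia, sector, degree) in enumerate(ranked, start=1):
    print(f"{priority}. {iia} ({sector}): {degree:.4f}")
```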
It should be remembered that, in the building block sector, the reserves sector, and the logistics sector, the IIAs with GLIIAs below 0.1 are still considered at criticality level 3 ("less critical") in the calculation of the relative weight of the total elements that compose each IIA. In the logistics sector, there is a significant weight for the IIAs "labor integrated to the product" and "development of the work process", pointing to the need for a revision of their management elements. It is understood that the most representative and uniformly present elements in the manufacturing of MassFerro are the processes and the environmental factors. Again, the importance of the percentage participation of the organizational structure and of the environmental factors for the logistics sector (both with a 31.18% participation) needs to be restated.
Step 12 — Prioritization of the allocation of resources to the elements forming the critical IIAs
This step defines the priorities for the allocation of resources to the elements of the IIAs analyzed by sector in the context of the internal manufacture of the company, using as input the percentage participation of each element composing the assets, as obtained in Step 11. By consensus, the team conducting the work tool decided initially to work only with the allocation of resources to the IIAs considered "most critical", as determined in Step 10 and considered relevant to the improvement of the efficiency indexes encountered in the build and building block sectors. It also dismissed the necessity of investments in the elements that compose the IIAs of the logistics sector, whose participation of these assets was accounted for by the measure of individual contributions. A priority matrix for the allocation of resources to the "most critical" IIAs was then built; the calculation of the margins of contribution (MC) of each element forming the IIAs of the build sector gave:
• 1st — IIA "labor integrated to the product" — the first "most critical" IIA, with the largest MC = 31.45%, the most influential elements for this IIA being:
– Human resources — MC = 20.61%
– Processes — MC = 13.74%
– Organizational structure — MC = 6.87%
• 2nd — IIA "development of the work process" — the second "most critical" IIA, with MC = 19.36%, the most influential elements for this IIA being:
– Organizational structure and processes — MC = 12.68%
– Human resources — MC = 6.34%
– Environmental factors — MC = 2.11%
In this way, the priorities of investment made available in Table 12 were defined, still considering the total contribution of the elements that form the IIAs "labor integrated to the product" and "development of the work process".
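A rough sketch of such a priority matrix follows, assuming resources are ranked by the reported margins of contribution; the record type and field names are illustrative, with the MC values taken from the IIA "labor integrated to the product" above.

```python
# A rough sketch of a Step 12 priority matrix: the element groups of a "most
# critical" IIA are ranked by their margin of contribution (MC). The data
# structure is illustrative; the values are those reported for the IIA
# "labor integrated to the product" in the build sector.
from dataclasses import dataclass

@dataclass
class ElementContribution:
    element: str       # element group forming the IIA
    mc_percent: float  # margin of contribution of the element group

labor_integrated = [
    ElementContribution("Human resources", 20.61),
    ElementContribution("Processes", 13.74),
    ElementContribution("Organizational structure", 6.87),
]

# Resources are allocated to the element groups in decreasing order of MC.
by_priority = sorted(labor_integrated, key=lambda e: e.mc_percent, reverse=True)
for rank, e in enumerate(by_priority, start=1):
    print(f"{rank}. {e.element}: MC = {e.mc_percent}%")
```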
Table 12. Matrix of Priorities for the Allocation of Resources in IIAs at MassFerro — Build Sector.

Build sector — IIAs "most critical": "labor integrated to the product" and "development of the work process"

Human resources
Element                               Current situation                 Goal/Limit                                      Investment                                                     Priority
Absenteeism                           Reasonable (1.4%)                 1.3%/Limit (1.6%)                               Yes/2% of liquid profit (profit participation program)         3rd
Number of work accidents              Worst (6 per year)                Zero/Limit (1 per year)                         Yes/R$100,000.00 per year (employ 1 work safety engineer)      1st
Inclusion in work training programs   Reasonable (60 h/year)            80 h/year/man/Limit                             Yes/R$80,000.00 (90 h/year)                                    2nd

Organizational structure
Increase production capacity          Reasonable (3 production teams)   4 production teams/Limit (6 production teams)   Yes/R$4,000,000.00                                             4th
Supplier qualification                No established program            Program to be implanted in the future           Yes/R$600,000.00                                               5th

(Continued)
Table 12. (Continued)

Build sector — "labor integrated to the product" and "development of the work process"

Processes
Element                       Current situation     Goal/Limit                  Investment                                                                                       Priority
Build program attendance      Worst (70%)           Higher than 95%/Limit       Yes/R$200,000.00 (internal logistics program)                                                    6th
Index of machine occupation   Reasonable (67%)      70%/Limit (85%)             Yes/R$200,000.00                                                                                 8th

Environmental factors
Consumed water                Reasonable (30.8 L)   30 L/Limit (35 L)           Yes/R$1,000,000.00 (increase and reform of the treatment station)                                9th
Consumed electric energy      Worst (40 kWh)        28.5 kWh/Limit (32 kWh)     Yes/R$600,000.00 (sensibilization programs; study of the restructuring and distribution of energy)  7th
Metal residues generated      Good (0.5 kg)         1 kg/day/Limit (1.2 kg)     No                                                                                               —
Table 12 clearly demonstrates the necessity of investments in the "most critical" IIAs, "labor integrated to the product" and "development of the work process", for the build sector of MassFerro, and also suggests a hierarchy of these same investments according to the perception of the team conducting the work tool. Note that some of the proposed investments also support the necessity of investments in the elements that compose the "most critical" IIAs of the building block sector. Taking as a basis the MC percentages of the "most critical" IIAs for the building block sector, the IIA "development of the work process" was the first "most critical" IIA, with the largest MC = 31.47%, its most influential elements being the environmental factors (MC = 12.83%) and the human resources and organizational structure/processes (MC = 8.45%); for the second "most critical" IIA, "confidence of supply", the organizational structure and processes elements obtained MC = 2.13% and the human resources elements MC = zero. The total MCs of each of the elements forming the IIAs "development of the work process" and "confidence of supply" were, respectively, as follows:
• Environmental factors — MC = 30.43%
• Processes — MC = 29.13%
• Organizational structure — MC = 21.80%
• Human resources — MC = 18.64%
In this way, the team conducting the work tool established the priorities of investment for the building block sector considering the MCs of the elements forming the IIAs in that sector, following the procedure established before for the build sector (see Table 13).
The conclusions of the analyses and the information contained in Tables 12 and 13 will be prepared as a management summary by the team conducting the management tool, with the objective of orienting the directors and stockholders of MassFerro on the importance of the IIAs for achieving better levels of operational efficiency, which can equally increase the competitiveness of the portfolio of products represented by the diverse P200 and P900 (SPPs) and, by extension, of the products of the diverse lines and the other composed products.
5. Conclusions
In view of the need to overcome the unavailability of some data for this application, it is suggested that the profitability variable be considered together with the perspective of profit established by the product positioning map of MassFerro referred to above, for better visibility over the maintenance of the actual products and over the development of new product lines. Other possibilities concern the determination of the sector efficiency indexes of the manufacture of MassFerro, presupposing that this cost encompasses not only the material value lost, but also all the intangible external assets (brand image, etc.) and the associated products.
Table 13. Matrix of Priorities for the Allocation of Resources in IIAs at MassFerro — Building Block Sector.

Building block sector — IIAs "most critical": "development of the work process" and "confidence of supply"

Organizational structure
Element                  Current situation    Goal/Limit                                 Investment                               Priority
Degree of automation     Worst (50%)          70%/Limit (85%)                            Yes/5% of profit (R$600,000.00)          1st
Build program            Reasonable (87%)     Larger than 95%/Limit (larger than 90%)    No                                       —

Environmental factors
Paper and wood residues  Worst (6,000 kg/month)  3,000 kg/month/Limit (1,000 kg/month)   Yes/R$100,000.00 (develop commercial partners)  2nd
Water consumption        Good (28 L)             33 L/Limit (38 L)                       No                                       —
Electric consumption     Reasonable (32.7 kW)    25.3 kW/Limit (32 kW)                   Include in the build sector              —

Processes
Qualification of suppliers  No program           Program to be implanted                 Yes/R$300,000.00                         3rd

Human resources
Number of work accidents                       Worst (1.7%)              1.3%/Limit (1.6%)         Include in the build sector investment                          —
Number of work accidents with sickness leave   Reasonable (2 per year)   Zero/Limit (1 per year)   Yes/R$10,000.00 per year (contract a work safety technician)    4th
Absenteeism                                    Good (1%)                 1.3%/Limit (1.6%)         No                                                               —
With the application of the management tool, the influence that the IIAs exercise on the rationalization and use of the other organizational assets, essentially employed in the context of the production of wealth, becomes more perceptible. It demonstrates that an improvement of the operational efficiency indexes is related to the correct utilization of the elements that compose these assets, concluding that investment in the elements forming these assets is justified to the extent that we can identify them and determine their contribution value to the organization, as demonstrated.
After the application of the management tool, it can be observed that it represents an instrument capable of justifying a business orientation toward the R&D of new products, in order to maintain the competitiveness of the portfolio and, ultimately, to analyze a return of economic/financial significance. One of the principal purposes of the management tool consists in allocating relatively small financial resources to the elements forming the intangible internal assets, compared with the economic value that these same assets represent. One of the critical points of the management tool is that it works from the calculation of relative sector operational efficiency indexes, defining the logic of the production context analyzed; this orients it toward total operational efficiency indexes such as human productivity and machine-time utilization. It is therefore credible and necessary that the efficiency indexes orient the sector objectives established for the manufacturer. The work tool focuses on products within an approach of knowledge and valorization of the IIAs.
The most significant difficulty encountered was the approval of the specific performance indicators for the product analysis. These difficulties were overcome by way of internal strategic planning, in the implantation phase, which can orient the management of the IIAs. Similarly, the other difficulty encountered was the limited availability of time on the part of the team conducting the management tool to formulate collective perceptions in the steps and stages where this was necessary, eventually causing delays but not compromising the results.
Acknowledgement
My thanks to Professor António Caetano Monteiro at the University of Minho (Mechanical Engineering Department), Guimarães, Portugal.
Biographical Notes
Claudelino Martins Dias Junior graduated in Business Management from URCAMP (Universidade da Região da Campanha), Brazil (1994). He specialized in Management of Enterprise Resources at UFSM (Universidade Federal de Santa Maria), Brazil (1995/06). He received his Master's in Production Engineering from UFSC (Universidade Federal de Santa Catarina), Brazil (2003), and his PhD in Production Engineering from UFSC, Brazil (2008), with a co-supervised period in Electrotechnical Engineering at UNL (Universidade Nova de Lisboa), Portugal (2006/07). He has experience in Production Engineering, focusing on Product Engineering and acting on the following subjects: intangible assets, value, quality, consumers, and services.
Osmar Possamai received his graduation in Mechanical Engineering from the Universidade Federal de Santa Maria, Brazil (1982), his Master's in Mechanical Engineering from the Universidade Federal de Santa Catarina, Brazil (1985), and his PhD in Génie Mécanique from the Université de Technologie de Compiègne, France (1990). He has experience in Production Engineering and Precision Engineering, focusing on Product Engineering.
Ricardo Jardim-Goncalves holds a PhD in Industrial Information Systems from the New University of Lisbon. He is an Auxiliary Professor at the New University of Lisbon, Faculty of Sciences and Technology, and a Senior Researcher at the UNINOVA institute. He graduated in Computer Science, with an MSc in Operational Research and Systems Engineering. His research activities have focused on standard-based intelligent integration frameworks for interoperability, covering architectures, methodologies and toolkits to support the improved development, harmonization and implementation of standards for data exchange in industry, from design to e-business. He has been a technical international project leader for more than 10 years, with more than 50 papers published in conferences, journals and books. He is now a project leader in ISO TC184/SC4.
Chapter 20
Interactive Technology Maps for Strategic Planning and Research Directions Based on Textual and Citation Analysis of Patents
ELISABETTA SANI∗, EMANUELE RUFFALDI† and MASSIMO BERGAMASCO‡
Piazza Martiri Libertà 33, 56100 Pisa, Italy
∗[email protected]
†[email protected]
‡[email protected]
Innovation management and research direction planning are challenged by the understanding of the technology space in which a research entity is placed. In order to succeed in the market, a given institution has to understand the relationships of its patent portfolio with respect to its competitors. Patent databases provide useful information for exploring a given technology area, but only when such details are properly analyzed can they provide fruitful insights. The approach of technology trajectories can be extended toward a more intuitive technology map. This work presents an interactive approach to the exploration of technology space based on patents. Patents are analyzed using citation graphs, integrated with textual analysis. The technology map presents the technology topics on a landscape displaying the patent density and the main actors in the area. The map is interactively displayed, allowing the user to focus on specific patents or companies. Finally, a specific case study is provided, presenting the application of this approach in the area of medical and virtual reality devices. Keywords: Research directions; patents; technology trajectory; technology landscape; citation analysis; visualization.
1. Introduction Innovation is seen as universally good and, in general terms, there are many types of innovation. It can be considered as a technology, product or process and classified as incremental or radical (Daft and Becker, 1978), architectural or modular (Magnusson et al., 2003), continuous or discontinuous (Corso and Pellegrini, 2007), creative or destructive (Thomas, 2007), sustaining or disruptive (Christensen, 1997). ‡ PERCRO Laboratory, Scuola Superiore Sant’Anna.
A few recent reports addressing the importance of technology strategy in predicting the innovation and success of organizations (Hayes and Abernathy, 2007; Charnes et al., 1978; Ettlie, 1983) contribute to a better definition of the context. The kind of innovation this chapter focuses on is research innovation, which is the most complex. Research innovation is indeed fundamentally long term and uncertain in terms of technology, education and market risks. On the other hand, it is well known that the new product development (NPD) process is a time-consuming and costly component. At the concept phase, there is an infinite amount of execution risk in the R&D process: it is often unclear what exactly is being worked on, and there is no definition of how the proposed technology application can impact the market, since in many cases the market does not even exist. Nowadays, for a firm to lead the market and take advantage of its positioning, it has to exploit the same information about technology that is shared among the other competitors. Both researchers and entrepreneurs are challenged to become "architects of information", able to see problems and integrate solutions from multiple perspectives. The mission for research innovation is to create technology that is going to reinvent prior businesses and create new ones. Private agents standing out of the market and profit-seeking parties allocate specific resources for exploring and developing new products or technologies that they believe are promising. If they succeed in doing so, their production costs, market position, competitors, and clients will be affected by these innovative techniques. In some cases, things go negatively and the consequences may be severe. From a business point of view, the R&D process is a risky and expensive activity that may or may not provide exploitable results; it is undertaken especially when it is not strategically wise to achieve the same technology with different policies (licensing, cross-licensing, etc.). Consequently, the development and use of specific tools that can manage the information related to the research context in which a firm is placed is of great interest. Given this, there is a lot of sense in considering information technology as an enabling tool for improving the effectiveness of management operations and strategic decisions. After all, what is information? The term information is used very broadly. Essentially, anything that can be digitized or encoded as a stream of bits can be considered as information. As a matter of definition, information means the knowledge derived from study and experience. It is the result of processing, gathering, manipulating, and organizing data in a way that improves the knowledge of receivers. Basically, a piece of information reduces uncertainty and may change the observer's context and point of view. Specifically, what types of business information are strategically relevant to entrepreneurs? It is interesting to focus on the value of information to different consumers. For most businesses, there is a huge variety of requirements for information. Senior
managers, for instance, need information to help with their business planning. Middle management instead needs more detailed information to help them monitor and control all business activities. Finally, employees who are in charge of operational tasks need information to help them carry out their duties. One could easily say that, as a result, a business tends to have several information systems operating at the same time. This chapter is mainly focused on describing tools useful to top management, which is in charge of strategic planning. As a matter of fact, business information comes from many sources, both internal and external to a firm. The main challenge for entrepreneurs is to capture and use information that is pertinent and reliable, avoiding time lost looking for data that do not provide added value. The ability to exploit external information represents a critical component of innovative capabilities. Such ability demands basic skills and global language sharing (Pan and Leidner, 2003), and should also include knowledge of the most recent scientific or technological developments in a given field (Cohen et al., 1987). This knowledge confers the capability to recognize the value of new information, to appropriate it, and to make use of it for commercial purposes (Cohen and Levinthal, 2000). Based on what has been highlighted up to now, it is worth mentioning the key role played by business information in the decision-making process. Given that it is difficult to predict the future success and impact of potential projects, and also that the decision-making process is a multi-stage team process involving a group of decision makers (Schmidt and Freeland, 1992), the task of managing decisions can become very complicated (Tian et al., 2005). This may occur especially when decision makers do not share the same strategies (Ghasemzadeh and Archer, 2000; Henriksen and Traynor, 1999), or even the same main goals. Over the past decades, a great number of decision models and methods have been developed and adopted to help organizations make appropriate decisions in R&D project selection. Different models, mainly related to mathematical programming and optimization rather than focused on decision analysis or economic models, have been widely implemented. However, although several studies prove interesting results, recent analysis underscores that decision models are not easily adopted by entrepreneurs, even if they are customized. Nowadays, the concern is not information access but information overload. Not all the available information is useful and needs to be taken into account. The real value of information comes in placing, filtering, and communicating what is useful with respect to the entrepreneur's interest. What is strategically relevant is the possibility to manipulate reliable information, not the total amount of information available. In a competitive market, it is expected that the main actors will aim to keep their new research topics unknown; this should allow them to assault the related niche alone, or at least limit the number of potential competitors. New research
paths, as well as new business ideas, have been passed over in silence by their respective innovators. Indeed, although information is placed everywhere in the free market, there are attempts to maintain its hidden status so as to prevent direct competitors from gaining knowledge about it. Some form of "privatization" of information helps to ensure its production. For example, the US Constitution grants Congress the right "to promote the progress of science and useful arts, by securing, for limited times, to authors and inventors, the exclusive right to their respective writings and discoveries" (Shapiro and Varian, 1999). The grant of rights to intellectual property through patents, copyright, and trademarks, although legally recognized, does not award complete power to control information. The analysis and assessment of progress in technological innovation through patents is a field that has captured the interest of many researchers. Historically, patent information analysis has been performed as part of the patent publication process, or through the planning and preparation for patent litigations, but recently patent information has been more commonly used to identify the major changes and trends of a particular technological field. This work focuses on technology innovation management and research direction planning as an interesting tool for a strategic understanding of the technology space of given entities. The focus is placed on patents, which represent interesting containers of information. The following paragraph addresses a brief literature review of the state-of-the-art related to patent analysis, and the recent advancements of such analysis for the decision-making process. Next follows an overview of technology trajectories and technology maps, describing in detail the methodology adopted in this work for analyzing patent data. The subsequent paragraph addresses the interactive visualization of technology maps, which can be used for understanding the technology landscape and taking decisions on future directions. Such a map can be visualized not only by using a normal display screen, but also by taking advantage of virtual environment technology for improving the understanding and interpretation of the data. For providing a concrete analysis, the work proposes a case study detailing the application of the road-mapping process on specific fields of technology, mainly related to medical and virtual reality devices. 2. State-of-the-Art In the context of fast-paced technologies and increasing competition in the research market, the strategic management of intellectual property and the integrated selection of research directions are complex aspects to be considered for a winning strategy.
One of the main results of industrial research is the patent, the official and formal instrument of innovation protection. In this context, the patent portfolio of a company represents a key asset, especially for high-technology companies, and it can be adopted for several business goals, such as keeping market position, supporting profitable agreements, and protecting current as well as future products (Carson et al., 2004). In general terms, the patent portfolio can be considered the most effective mechanism adopted by firms for protecting profits and securing financing (Cohen et al., 2000; Hall, 2004). As a matter of definition, a patent can be defined as a temporary legal monopoly granted to inventors for the commercial use of an invention (Hall et al., 2000). The conventional wisdom among economists, intellectual property lawyers, public officials, and many lay people is that a patent does not give one a monopoly in the sense of having unrestricted rights to practice the protected invention. The patent gives one just the right to stop others from making, using, offering for sale, or selling the invention (McFetridge and Rafiquzzaman, 1986), that is, the right to prevent violations of its exclusionary rights. By creating barriers to entry for competitors and maintaining an exclusive market space (i.e., patent blocking), an effective patent portfolio can help to discourage competition in the market place (Harrison and Rivette, 1998; Lin et al., 2006). The substantial benefits deriving from a patent portfolio, however, may also include the revenues obtained through technology sharing and patent license agreements. Once it has been granted, the patent document contains a wealth of detailed information, all represented in standardized form (Meyer et al., 2003). It contains the technological antecedents of the invention, identified as references and citations to previous patents and published material, identifying the prior art and delimiting the property right that the patent represents. The information contained in a patent document has more detail concerning technology than any other scientific or technical publication (Hong, 2006; Macdonald, 1998). According to a recent study, some 70% of the information disclosed in patents is never found anywhere else (esp@cenet, 2005; van Dulken, 1999). Considering the huge quantity of information stored in patents, it is worthwhile to start thinking about the use of such information for a better understanding of a given technological space. To come up with competitive new products and processes, industries need to have access to detailed information on the technological innovations they have to compete with. One major method of obtaining such information is the analysis of patent data. Since such data might signal the kinds of products and processes that foreign or domestic companies are planning to introduce, an analysis of patent data can provide firms with information that helps them with their strategic planning efforts (Abraham and Moitra, 2001). With the arrival of improved-quality data and easier access to it, patent and patent citation analysis are topics that have been receiving an increasing level of attention (Michel and Bettels, 2001); patent citation analysis is a recent development which makes use of bibliometric techniques to analyze patent citation information (Karki, 1997).
The tools for managing intellectual property information have improved, in both qualitative and quantitative aspects. These tools allow one to retrieve patents and search among the relationships of patents, but they do not actually provide a detailed content-based and cross-referenced search. The opportunity offered by more intelligent text analysis algorithms, integrated with domain knowledge bases, allows for the creation of new tools for the analysis of IP material (Camus and Brancaleon, 2003). These new tools, based on citation networks and semantic analysis, allow the discovery of previously hidden relationships between patents, as has similarly been possible in bibliometrics (Janssens, 2007). Some analysis tools are already available for helping to structure the clear placement of a company with respect to the outside market, as well as for identifying future directions, based on the information sourced from public databases of patents (Jenkins and Floyd, 2001; Trippe, 2003). Nevertheless, there is space for more interesting analysis. As pointed out by other researchers, since inventors probably limit their patent applications to their more successful inventions, patents presumably represent only the highest peaks of conceivable and potential landscapes (Fleming and Sorenson, 2001). Since the number of citations a patent receives is strictly and positively correlated with its technological relevance (Hall et al., 2000; Albert et al., 1991), and may also be tightly connected with its social value, representing a measure of inventive usefulness across heterogeneous technology fields (Fleming et al., 2004), the work performed takes this into consideration. Indeed, the proposed analysis, which focuses on patents and patent citations as a measurement of the current trends of a specific technological scientific innovation, is useful also for analyzing specific technology niches (Sani et al., 2007). Exhaustive surveys are mainly focused on the adoption of innovative technologies and the related economic impact, without identifying the primary causes of such innovation (Rosenberg, 1982) or identifying critical and detailed paths of analysis. Nevertheless, recent studies try to verify the theory of invention (Fleming and Sorenson, 2001) following Kauffman's work (Kauffman, 1993) based on evolutionary biology. Kauffman focused his analysis on the role of complexity in adaptive systems, assuming the existence of a landscape in which superior organisms can seek superior levels of biological fitness. This metaphor means that the stronger an innovative technology is on a market, the higher the likelihood that it will establish fruitful collaborations and agreements with third parties. Along these lines, specific studies argue that technologies follow a sort of "technological life-cycle" (Abernathy and Utterback, 1978) and further conclude that technologies go through different periods of equilibrium broken by unsteady ones (Gould and Eldredge, 1977). There is a great interest for companies in understanding the possible directions of research and in evaluating the different solutions based on the research results obtained by competitors and public research institutions. These different solutions live in a technology map that is a conceptual extension of technological trajectories (Dosi, 1982) and knowledge domains (Schmoch and Grupp, 1992).
From a definitional point of view, a technological trajectory can be defined as one of the various paths that a technological area has followed after the related technological trade-offs have been evaluated (Saviotti and Metcalfe, 1984). The concept of technology trajectories (Dosi, 1982) formalizes and presents the major lines of the evolution of technologies; a technology trajectory can indeed be represented as the path that, among the various possible paths, has been successful in a given area along time. This type of analysis has been successfully applied to a variety of fields, and it has been primarily based on the use of patent citations (Helo, 2003; Verspagen, 2007; Martinelli et al., 2008). Knowledge domains, on the other hand, consist of a visualization of specific topics associated with a set of documents, be they patents, publications or reports, for instance (Caldwell et al., 2005). The insights provided by technology trajectories are extremely useful for understanding the general trend of a technological field, but for a deeper analysis it is necessary to use more advanced types of visualizations. Specifically, technology maps present the major relationships between technologies and the positions of the different competitors in this landscape (Yeap et al., 2003). The proposed work bases its innovation management approach on the creation of three-dimensional technology maps that allow one to understand the position of a company with respect to the current trends of technologies, as they can be obtained from patent information.
3. Methodology
The importance of patent analysis, and the recent advancements of such analysis for decision making, have been discussed above. In this section, a methodology of data analysis focused on patents is presented, which allows the user to generate interactive visualizations of technology maps that can be used for understanding the technology landscape and taking decisions on future directions. The process, from the basic data to the visualization, is presented by introducing a patent pipeline that organizes the data analysis in phases. It is then discussed how patent citations can be introduced in the pipeline for extracting important information and relationships. The citations are then complemented by textual analysis of the patents' text. Finally, the visualization aspect is addressed, including specific issues related to the management of large graphs and the enhancement of such plots with interaction. Before discussing the analysis, it is important to present the information associated with a specific patent p ∈ P (P being the patent set), distinguishing between textual entities and structural entities:
• pN is the unique identifier of the patent, made of two letters for the nationality followed by the identification number (e.g., US123876);
• pPRI is the priority year of the patent, that is, the year in which the first version of the patent was submitted;
• pPUB is the publication year of the patent, that is, when it was made publicly available;
• pIPC is the list of IPC (International Patent Classification) categories, which provide a hierarchical system of language-independent symbols for the classification of patents and utility models according to the different areas of technology to which they pertain;
• pREF is the list of patents referenced by pi as citations;
• pINV is the list of inventors;
• pASS is the list of assignees;
• pT is the title of the patent;
• pABS is the abstract of the patent;
• pDES is the description of the patent; and
• pCLAM is the full text of the patent's claims.
The patent analysis pipeline is a sequence of operations adopted for moving from the information available in public databases to something that can be useful for obtaining insights. The first phase of the pipeline is the construction of the patent set P. This set contains the patents that will be analyzed in the following phases, and it is the focus of the current research. There are several approaches for constructing such sets; sometimes, it is feasible to start from a group of known patents in a given area and perform a broad search using information associated with these patents, like categories or specific keywords. Other times, the search starts with keywords inside a given IPC category. In other cases, the objective is to start from a group of well-known companies in a given area. Finally, from the above criteria, it is possible to extend the patent set using the cited patents, moving backward in time. Apart from the selection of the initial starting point, the above operations are performed mostly automatically using the available databases. In particular, the presented work makes use of the USPTO database (the Department of Commerce's United States Patent and Trademark Office) and the EPO database (European Patent Office). The searches need to be performed with specialized crawling tools that automatically issue the queries and analyze the information. Once the patent set has been obtained, it is important to normalize the contained data and to aggregate duplicate entries; in many cases, the same patent is found in multiple versions or databases, or the broad search returns patents belonging to the same family. After that, it is assumed that the patent set remains unmodified, and for this reason, every patent p is associated with an index in the patent set and simply referred to as pI. In information retrieval terminology, the patent is equivalent to a document and the patent set is the database P = (p1, p2, p3, . . . , pN).
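A rough sketch of such a patent record and of the de-duplication step follows, assuming a Python representation; the field and function names are illustrative, not part of the original pipeline.

```python
# A rough sketch of the patent record described above and of the
# de-duplication step of the pipeline. Field and function names are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patent:
    number: str                                          # pN, e.g. "US123876"
    priority_year: int = 0                               # pPRI
    publication_year: int = 0                            # pPUB
    ipc: List[str] = field(default_factory=list)         # pIPC categories
    references: List[str] = field(default_factory=list)  # pREF cited patents
    inventors: List[str] = field(default_factory=list)   # pINV
    assignees: List[str] = field(default_factory=list)   # pASS
    title: str = ""                                      # pT
    abstract: str = ""                                   # pABS
    description: str = ""                                # pDES
    claims: str = ""                                     # pCLAM

def deduplicate(patents: List[Patent]) -> List[Patent]:
    """Aggregate duplicate entries by patent number (family merging omitted)."""
    unique = {}
    for p in patents:
        unique.setdefault(p.number, p)
    return list(unique.values())
```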
The patent set obtained at this stage can be analyzed using two different approaches. The first is based on the textual information, with some help from the structural information, using standard techniques from data mining and information retrieval. The second instead uses only the citations, with the objective of understanding the importance of patents and the relationships between them. The two approaches are complementary: the former does not provide a clear understanding of the importance of a given patent, while the latter provides more on the importance of being cited, without specific details on the patent's topic. Both approaches will be presented, starting from the citations, followed by a discussion on how it is possible to integrate the information derived from them. Figure 1 shows the major characteristics of the two approaches.
3.1. Citation Analysis
The citation information is extremely valuable for understanding the quality of patents, and it is fundamental when there is no information available regarding the licensing outcomes of a patent. Citations have a strong economic motivation, since they allow one to explicitly place the patent with respect to prior knowledge, reducing the possibility of litigation.
Figure 1. Structure of the proposed approach with textual and citation-based analysis. The textual analysis uses latent semantic analysis for the identification of term-based clusters. The citation analysis, on the other hand, takes advantage of the relationships among patents for extracting trends.
since they allow one to explicitly place the patent with respect to prior knowledge, reducing the possibility of litigation. These citations are not only selected by the inventors during the drafting process, but are also suggested by the examiners during the evaluation and granting process for positioning the patent with respect to other patents. Citations in patents play a different role from those in scientific publications, in that they are important for presenting the novelty of the work (Eysenbach, 2006) and, at the same time, allow the reader to understand the research context. The result of the citations between patents is a network that can be used for understanding the flow of information along time. The natural structure for representing such a network is the directed graph; in particular, it is common to show the graph in terms of flow of information, that is, with the arrows reversed with respect to the citation graph. The graph GC = (P, EC) is a directed unweighted graph, with patent set P and a set of edges EC of information flow: {p1, p2} ∈ EC when p2 cites p1. This graph is unweighted because citations carry no associated information, although it would be possible to weight the references by the difference between the priority years pPRI of the two patents. In theory, this difference is always positive or zero, making cycles impossible; that is, the citation graph is a Directed Acyclic Graph. In reality, patent information is not normalized and it is possible to find citations to the future, introducing cycles. A typical representation for graphs is the adjacency matrix, a square matrix A of elements aij, where aij = 1 when information flows from patent i to patent j; given the relationships among patents, it is quite sparse. This matrix is built using the citation lists pREF. In these lists, there could be some patents not present in the patent set; this may be because some patents are too old with respect to the search criteria or, in some cases, too old to be represented in digital form. The graph structure allows one to perform a variety of analyses taken from graph theory. In particular, it is possible to identify the independent components gj of the graph, that is, separate patent subsets that have no relationship to each other. The algorithms discussed in this section will be applied to single components of the graph, discarding, in addition, isolated patents. Once the graph is displayed, it is possible to visualize both the major areas of aggregation and the major directions. Such a visualization exploits something that humans are very good at, combining pattern recognition capabilities and multiscale evaluation. The focus of the proposed research is the identification of the major areas of a technology map; for a complete interpretation, a tool for understanding the major lines in the graph could be useful. As a matter of nomenclature, a trajectory is a sequence of patents connected together over the graph to form a path of information τ = p1 p2 ··· pk. A technology trajectory (Dosi, 1982) is such a sequence of patents that is important for describing the evolution of the technology along time.
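A minimal sketch of the construction of the information-flow graph, reusing the Patent records of the previous sketch; the networkx library is an assumption of convenience here, since the chapter's own implementation relies on MATLAB and C++.

```python
import networkx as nx

def information_flow_graph(patents):
    """Build GC: an edge (p1, p2) means information flows from p1 to p2,
    i.e., p2 cites p1 (arrows reversed with respect to the citation graph)."""
    numbers = {p.number for p in patents}
    G = nx.DiGraph()
    G.add_nodes_from(numbers)
    for p in patents:
        for cited in p.refs:
            if cited in numbers:        # citations leaving the patent set are ignored
                G.add_edge(cited, p.number)
    return G

# Non-normalized data can contain "citations to the future", introducing cycles:
# assert nx.is_directed_acyclic_graph(information_flow_graph(P))
```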
One of the main objectives of this analysis is the identification of the main trajectories, that is, the most important flows of information in the citation graph. This approach was initially introduced by Verspagen (2007) with an application to fuel cell research. A detailed description of the algorithm follows, since it is the foundation of the presented approach. This algorithm differs from techniques that weight the nodes of a graph based on the number of citations, like PageRank (Page et al., 1998), because it focuses on the identification of trajectories of citations from sources to targets, while the others focus on the importance of single nodes. PageRank has recently been applied to bibliometric analysis (Fiala et al., 2008), taking into consideration both citations and co-authorship in publications. The main trajectory in the patent space is obtained by first computing the importance of the edges connecting two nodes in the graph. The Search Path Link Count (SPLC) (Hummon and Doreian, 1989) is one method for computing such a value, and corresponds to the identification of the most heavily traveled roads in a city. The SPLC algorithm starts from two special subsets of the patent set, the sources and the targets. A patent is considered a source if information only flows out of it, that is, a patent that introduces a new type of technology: {p ∈ P : ¬∃q ∈ P such that {q, p} ∈ EC}. In the adjacency matrix A, a patent pi is a source if aji = 0 for every patent pj. On the other side, a target patent only receives information, corresponding to the fact that it is not cited by other patents in the patent set: {p ∈ P : ¬∃q ∈ P such that {p, q} ∈ EC}. A patent can be a target if its technology has not been investigated later, or if it lies on the time border of the patent set and may still receive citations from new patents. The SPLC is based on the idea of computing a weight for every edge in the graph: the weight of an edge is increased by one for every path connecting a source with a target that passes through it. This all-paths operation is different from the single-pair shortest path problem, because it tries to identify the importance of the connections. The result is the directed weighted graph GS = (P, ES) in which the edges ES are weighted by the link counts. Once the SPLC has been computed, it is possible to search for the main paths connecting sources with targets; in particular, the maximum value among the edges leaving a node will be used. In the algorithm below, the SPLC matrix is indicated as e_ij, while the maximal outgoing edge value for every node is e_i^m = max_j(e_ij). The algorithm for the computation of the main trajectory is the following, and the result is a list of trajectories passing through the most important nodes (a minimal sketch of the computation follows the list):

(1) select the sources Sx that have the maximum e_i^m edge value, and create initial trajectories with Sx;
(2) for every node j in Sx, select the descendant nodes Dj whose edges have maximum value;
(3) extend the trajectories with the nodes Dj, possibly bifurcating the trajectory;
(4) continue from point 2, using all the Dj as the new Sx.
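The sketch below is one compact reading of the SPLC weighting and of the greedy trajectory extraction; it assumes an acyclic graph and, for brevity, ignores ties and the resulting bifurcations. It is an interpretation of the description above, not the authors' C++ implementation.

```python
import networkx as nx

def splc_weights(G):
    """Weight every edge by the number of source-to-target paths through it.
    For an edge (u, v) this is f(u) * b(v), where f counts the paths from
    any source down to u and b counts the paths from v down to any target."""
    order = list(nx.topological_sort(G))
    f = {v: 1 if G.in_degree(v) == 0 else 0 for v in order}
    for v in order:
        for u in G.predecessors(v):
            f[v] += f[u]
    b = {v: 1 if G.out_degree(v) == 0 else 0 for v in order}
    for v in reversed(order):
        for w in G.successors(v):
            b[v] += b[w]
    return {(u, v): f[u] * b[v] for u, v in G.edges}

def main_trajectory(G, e):
    """Follow maximum-weight edges starting from the best source node."""
    def out_max(v):
        return max((e[v, w] for w in G.successors(v)), default=0)
    v = max((n for n in G if G.in_degree(n) == 0), key=out_max)
    path = [v]
    while G.out_degree(v) > 0:
        v = max(G.successors(v), key=lambda w: e[path[-1], w])
        path.append(v)
    return path
```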
Among all the main trajectories, the top one is the longest, and corresponds to the most important connection between sources and targets. The top trajectory corresponds to the structural elements of the selected patent set; the other patents can then be put in relationship with the top trajectory. In particular, this relationship is expressed as the distance of a given patent from the trajectory. This distance can be computed by starting from the nodes in the trajectory and extending outward by following the inverse knowledge flow along the citation graph. A node at a large distance corresponds to something that provided grounding work in the area, although it does not lie in the major flow of information. This operation will probably leave out many nodes, because only the citation graph is followed. An example of the application of this algorithm is presented here, while a more detailed analysis will be performed in the last section of this chapter. In particular, a set of patents from the area of rehabilitation is considered. For this example, a patent set was obtained from the USPTO and esp@cenet databases by querying the IPC category A61 for all years, and then extending the search using the citations of the patents from the first group. In this way, the original set of 622 patents is extended to 10,463 elements. In this set, there are 100 isolated nodes, while the number of independent components is 38, of which the two biggest contain 10,335 and 22 patents, respectively. The current discussion refers only to the biggest connected component. Finally, in this component, there are 3892 sources and 290 targets. It is important to note that the SPLC algorithm can be applied to the whole graph, and the trajectory identification is then limited to the sources in the selected component. The first three main trajectories that can be extracted from the above patent set, after the application of SPLC, are interesting. The first is composed of 13 elements, all related to implantable hearing aids, ranging from 1973 to 2001, while the second is characterized by 13 patents in the area of electro-surgical devices, from 1975 to 2000. Finally, the third main trajectory comprises 14 patents and is mainly related to knee braces and knee joints, from 1971 to 2002. Table 1 shows the patents in the first trajectory. It is interesting to compare the results of the main trajectory identification based on SPLC with the application of the PageRank algorithm for the identification of the most relevant patents. The PageRank algorithm (Page et al., 1998) computes the relative weight of a node in a graph based on the number of edges pointing to the node itself, with specific application to web page ranking. In particular, the result of PageRank is the probability of a user arriving at a given page by following links randomly. As previously discussed, a patent citation graph is a Directed Acyclic Graph (DAG), that is, it has no cycles and no back references, while cycles are very frequent in web links, which introduces a cyclic dependency in the estimation of the importance of a given node.
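For the comparison, PageRank can be obtained directly from the citation graph, that is, the information-flow graph with the edges reversed; a sketch with networkx, reusing the graph G of the earlier sketches (the damping factor 0.85 is the customary default, not a value stated in the chapter):

```python
import networkx as nx

pr = nx.pagerank(G.reverse(), alpha=0.85)           # node weights on the citation graph
top = sorted(pr, key=pr.get, reverse=True)[:20]     # candidate starting points for trajectories
```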
Table 1. Patents in the First Trajectory from the A61-Rehabilitation Patent Set.

Patent Number | Title | Year
US3764748 | Implanted hearing aids | 1973
US3870832 | Implantable electromagnetic hearing aid | 1975
US4352960 | Magnetic transcutaneous mount for external device of an associate | 1982
US4606329 | Implantable electromagnetic middle-ear bone-conduction hearing aid device | 1986
US4817607 | Magnetic ossicular replacement prosthesis | 1989
US4936305 | Shielded magnetic assembly for use with a hearing aid | 1990
US5163957 | Ossicular prosthesis for mounting magnet | 1992
US5259032 | Contact transducer assembly for hearing devices | 1993
US5554096 | Implantable electromagnetic hearing transducer | 1996
US5772575 | Implantable hearing aid | 1998
US5941814 | Arrangement for adjusting and fixing the relative position of two components of an active or passive hearing implant | 1999
US6077215 | Method for coupling an electromechanical transducer of an implantable hearing aid or tinnitus masker to a middle ear ossicle | 2000
EP1246503 | Completely implantable hearing system | 2001
The main trajectories identified with the SPLC can be evaluated in terms of the node weights computed by PageRank, applied to the citation graph rather than the information flow graph. Every source node of the trajectories has the maximum weight with respect to the other nodes in its trajectory. This is explained by the fact that, in a DAG, the PageRank of a node accumulates the results of the citing nodes. It is possible to use these weights instead of the SPLC for computing the main trajectories. The algorithm is similar to the previous one: all the paths are selected starting from the nodes with the highest weights and following the connected nodes with the highest weight. When applied to the example dataset introduced above, the resulting trajectories are quite similar to the ones reported by the other algorithm. In particular, there are some correspondences between the trajectories: the second PageRank trajectory has the same starting point and topic as the third SPLC trajectory, and the same happens with the fourth PageRank trajectory and the first SPLC trajectory. The analysis of the properties of the trajectories should be completed with some considerations of their time evolution. The first aspect to be covered is the distribution of a given trajectory over the time span of the patent set, analyzing in particular the time distance between patents in the trajectory and their relative importance. A way to measure the importance of a given patent in the trajectory is to use the importance, expressed as SPLC, of the flows of information coming out of the node, excluding the flow toward the next node in the trajectory: if the SPLC values are collected in a matrix S and p_i is the i-th patent of the trajectory, then the weight of the patent is w_i = Σ_{j ≠ p_{i+1}} S_{p_i, j}.
[Figure 2 plot: timespan of the main paths; x-axis: years in the patent set (1970–2005); y-axis: main path index (0–11).]
Figure 2. Evolution along time of the trajectories, shown in alternating color from the most important (lower) to the others.
Figure 2 shows the timespan of the first five main paths, in which each dot corresponds to a patent, with a size proportional to its importance relative to the others. Some of the trajectories show a slow-start behavior, with long intervals between the initial patents and a concentration of patents in later years. Other trajectories show a more uniform distribution of patents, indicating a continuously improving trend. The other time element is related to the capacity of this tool to provide insights about future trends. In particular, it is interesting to verify whether the trajectories identified in a given period are still relevant in later years. The proposed approach applies the algorithms discussed above to the first half of the patent set time span, for example up to 1975, and computes the trajectories on the reduced graph. Each trajectory is scored in terms of its relative importance with respect to the whole graph using its weight. Then the graph is expanded in fixed steps, for example five years at a time, and the SPLC (or PageRank) is computed again, with the objective of comparing the new trajectories with the original ones. In this way it is possible to show how the importance of a given trajectory varies along time, and whether the estimated trajectories are still valid in the final phase.
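The rolling evaluation just described might be sketched as follows, assuming every node carries its priority year as an attribute pri and reusing the splc_weights function sketched earlier; the start year and the step are the example values mentioned above.

```python
def trajectories_over_time(G, start=1975, step=5, end=2005):
    """Recompute edge weights on growing time slices of the graph."""
    for year in range(start, end + 1, step):
        nodes = [v for v, d in G.nodes(data=True) if d["pri"] <= year]
        H = G.subgraph(nodes)
        yield year, splc_weights(H)   # compare the resulting trajectories across slices
```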
From the above discussion, two important aspects remain to be addressed. The first is how to understand the relationships between different trajectories; the second is how to compare the results of the SPLC and PageRank approaches. The following section discusses the layout and mapping of the patents, which provide insights into the relationships among the trajectories.
3.2. Citation Graph Visualization

The information about the specific trajectories available in a patent set is interesting, but not that easy to understand. This section discusses the display of the patent citation graph and describes in detail the role of the main trajectories. In this phase of the analysis, the focus is on the information flow related to citations, and given its graph representation, the network is presented using graph layout techniques. There are several graph layout approaches, varying in computing requirements, support for dynamic layout, and human perceptual aspects (Purchase et al., 1995). Most of the approaches use an iterative process that computes a set of forces between the nodes and changes the positions of the nodes based on some energy minimization principle. In this work, Force Directed Placement (Fruchterman and Reingold, 1991) has been selected because it provides a good trade-off between layout quality and speed, and because it can be easily extended to an online layout (Frishman and Tal, 2008). In the specific case of graph layout for patents, the objective is to highlight the main trajectories and the associated nodes. The visualization can be performed in two ways: in the first, the whole citation graph is laid out and then only the main trajectory is plotted, optionally displaying the first two levels of nodes connected to the main trajectory. In the second, the layout starts from the nodes in the main trajectory; in later iterations of the layout, nodes at distances smaller than three are added, while the positions of the main nodes are pinned, reducing their movements. The result of this operation is shown in Fig. 3, presenting the main trajectory in red and the connections of the associated nodes in blue. The graph layout can also be used for visually comparing the results of the SPLC and PageRank applications; in particular, Fig. 4 shows the SPLC trajectories in red and the PageRank trajectories in green. There are two clusters of patents that are covered in similar ways by the two algorithms, while other areas are distinct. The time component can be used in this type of visualization for showing the evolution of a given area along time, displaying the patents in intervals of five years and how the trajectories have different timing behaviors. The citation analysis allows us to identify the major blocks and to put patents in relationship, but it is still necessary to provide a mapping, that is, to associate meaningful names to the various regions of patents. The following section will introduce the general problem of textual analysis with patents and its application to mapping.
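networkx's spring_layout implements the Fruchterman–Reingold placement and supports pinning through its fixed argument, so the second visualization mode can be approximated as below, reusing the helpers sketched in Sec. 3.1; this plain sketch does not reproduce the GPU-accelerated implementation described later in the chapter.

```python
import networkx as nx

# Global layout, then a local layout around the main trajectory with pinned nodes.
pos = nx.spring_layout(G, iterations=50, seed=0)
pinned = main_trajectory(G, splc_weights(G))
undirected = G.to_undirected()
near = {v for u in pinned
        for v in nx.single_source_shortest_path_length(undirected, u, cutoff=2)}
pos = nx.spring_layout(G.subgraph(near),
                       pos={v: pos[v] for v in pinned}, fixed=pinned, seed=0)
```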
[Figure 3 plot: main trajectory (grey) and associated patents (black); axes: Layout X, Layout Y.]

Figure 3. Detail of the layout of the patents, with a specific trajectory displayed in grey.
Figure 4. Visualization of the two different types of trajectories as computed by SPLC (grey 1) and by PageRank (grey 2).
3.3. Textual Analysis

The citation-based analysis discussed in the previous section is useful for the identification of the relationships between patents and their evolution in time. It is not yet capable, however, of providing a patent mapping, that is, of associating labels with each trajectory. This section discusses the data mining process applied to the patent set for extracting the major terms and the associated clusters of patents. The information retrieval process is based on a standard sequence of operations that are integrated with the previous analysis pipeline. In particular, every patent p_i is a document d_i characterized by a text t_i extracted from the abstract, description, and claims sections of the patent document. Each word in the document is first stemmed, that is, reduced to its root form; then terms are removed using a stop list, or optionally positively filtered using a set of previously selected terms. The objective of this step is to transform the stream of words s_ij in each document into a new, reduced stream of words s*_ij. Latent Semantic Analysis (LSA), also called Latent Semantic Indexing (LSI) (Papadimitriou et al., 2000; Deerwester et al., 1990; Landauer et al., 1998), approaches textual analysis by transforming each document into a feature vector whose size equals the number of distinct words in the whole set. The feature vectors are collected together in a matrix that is later transformed to highlight the groups of words best able to structure the documents. In particular, a document set of N elements and a total number of M words gives a term-document matrix of M×N elements, in which each element w_ij describes the weight of word i in document j. There are several ways of selecting the weight, depending on the chosen algorithm (Berry and Browne, 2005), and they can be structured as the product of three components, a local one, a global one, and a normalization factor:

w_ij = L_ij · G_i · N_j.   (1)
The most common weighting scheme is the term frequency-inverse document frequency (TF-IDF), that is, the product of the term relative frequency and the inverse document frequency. The relative frequency of the term in each document is multiplied by the amount of information provided by every word in the set:

tf_{i,j} = n_{i,j} / Σ_k n_{k,j},   (2)

idf_i = log(|D| / |{d_j : t_i ∈ d_j}|),   (3)

w_{i,j} = tf_{i,j} · idf_i.   (4)
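Equations (2)–(4) translate directly into a few lines of numpy; the count matrix below is a toy example, not data from the chapter.

```python
import numpy as np

# counts[i, j] = occurrences of term i in document j (toy values: M = 3 terms, N = 3 documents)
counts = np.array([[2.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 3.0, 1.0]])

tf = counts / counts.sum(axis=0, keepdims=True)   # Eq. (2): relative frequency per document
df = (counts > 0).sum(axis=1)                     # documents containing each term
idf = np.log(counts.shape[1] / df)                # Eq. (3): inverse document frequency
W = tf * idf[:, None]                             # Eq. (4): M x N term-document weight matrix
```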
The matrix of weights is then transformed using Singular Value Decomposition (SVD) to identify the combinations of words that are best able to express the
variety of the document set. In particular, N vectors sorted by relevance are obtained, in which each vector describes a concept in the form of a combination of relevant words, forming the matrix V of M by N elements. The LSA can be applied for three purposes: the identification of the most relevant words describing the patent set, the computation of a distance between documents, and the dimensional reduction of the analysis problem. Selecting only the first K columns of the V matrix, it is possible to reduce the M-dimensional problem to a K-dimensional problem, and this reduction approximates the original term matrix with the smallest error. In particular, it is possible to compute the following:

W′ = W^T · V_K.   (5)
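Continuing the toy sketch, the decomposition and the projection of Eq. (5) can be obtained from numpy's SVD; in this convention the chapter's concept matrix V corresponds to the left singular vectors, and the value of K is an arbitrary choice.

```python
U, s, Vt = np.linalg.svd(W, full_matrices=False)   # SVD of the term-document matrix
K = 2
V_K = U[:, :K]              # M x K: each column is a concept, a weighted combination of words
W_prime = W.T @ V_K         # Eq. (5): N x K coordinates of the documents in concept space
top_terms = np.argsort(-np.abs(U[:, 0]))[:10]      # most relevant words of the first concept
```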
The W′ matrix is now of the form N by K, and if K is small, it is possible to apply visualization or clustering techniques for better structuring the patents (Kargupta et al., 2001). The dimensional reduction obtained through LSA is useful for a semantic-based comparison of patents that is also efficient. If the LSA is applied to the reference patent set discussed above, it is possible first to extract the most relevant concepts, and then to identify the most relevant words for every trajectory. The number of documents is 10,463, while there are 70,487 words obtained after the removal of common words (like basic English terms and other terms that are extremely common in patent documents) and stemming using Porter's algorithm (Hull, 1996). There are, however, two issues in the use of the LSA, both deriving from the use of SVD. The first is the computational complexity, which corresponds to that of SVD, O(M^2 K^3), in which M is the number of terms and K is the number of dimensions. This problem can be partially mitigated by filtering the terms based on a variety of criteria, such as verbs or specific application domains. The second problem is related to the assumption that the distribution of the terms is normal, while the frequencies used in the computation are simple counts.

3.4. Visualizing Patents as a Surface

The final aspect of the presentation of patents is the transformation of the simple bi-dimensional layout into a three-dimensional map that contains additional information and can be explored interactively. The citation layout, optionally augmented with information from the textual analysis, is presented as a surface in which the height can be obtained from various sources. The first is the density of patents in the layout, which shows the importance of areas with many patents; the second is the distance from the nearest trajectory, which provides insights into the relevance of a patent. Finally, there is the possibility of using the priority year for showing the time component. All these approaches are valid for presenting different aspects and will be available to the user in the interactive part of the tool.
The computation of the height from the single patent data is performed using Radial Basis Functions (Powell, 1987): every patent is the center of a template surface, typically a Gaussian, and the overall surface is obtained by the superimposition of all these templates. The alternative to this process is an interpolation based on a triangulation of the points, which for unevenly spaced points, like the patents in the layout, generates surfaces that are not sufficiently smooth for interactive exploration.
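A minimal illustration of this superimposition of Gaussian templates follows; grid resolution and kernel width are arbitrary choices, not parameters from the chapter.

```python
import numpy as np

def height_map(points, grid=200, sigma=0.02):
    """Superimpose a Gaussian template centered on every layout position."""
    xs, ys = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
    z = np.zeros_like(xs)
    for px, py in points:                 # points: layout coordinates in [0, 1] x [0, 1]
        z += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
    return xs, ys, z                      # the surface explored in the interactive tool
```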
4. Interactive Technology Maps

In this section, we present the integration of the techniques previously discussed into the design of an interactive tool for the exploration of technology maps. The focus of the proposed tool is visualization and exploration, and for these reasons we assume that the patent set has already been collected by other means. The tool should be capable of presenting the major trends in a given patent set, expressed in terms of technology trajectories, in addition to showing the placement of the different actors in the patent landscape. In this work, we are not going to explore all the possibilities of Virtual Environments; instead, we focus on the types of information that can be extracted and the fundamental means for presenting it. The interface of the tool is centered around the patent map, which the user can explore and modify using a variety of operations. The basic exploration operations concern the possibility of changing the viewpoint in the three-dimensional map and of selecting entities in the map, particularly patents and trajectories. When a specific patent is selected, its details are presented both on the map and on a side panel, while in the case of a trajectory selection, the titles and the patents of the trajectory itself are displayed. The second group of operations concerns the filtering of the displayed information, such as the number of trajectories displayed, the time span of the patents visualized, and the number of neighbor patents associated with a trajectory. The last group of operations concerns the different layers of information that can be used to augment the map:

• assignee mapping;
• labeling from the textual analysis;
• trajectories;
• link count relevance.
The overall interface of the tool is presented in Fig. 5, showing the plot of a simplified map in the context of medical devices.

4.1. Tool Components

The software discussed in this work is structured in four major components managing the patent analysis pipeline from data acquisition to visualization.
Figure 5. Interface of the tool presenting patents in the area of medical devices. The tool allows the user to select trajectories and single patents, and to move around the map.
The first component, written in the Python programming language, provides the search and download of patents from the major databases and normalizes the acquired data for later processing. In particular, it is possible at this level to perform recursive downloads of referenced patents for creating the patent set. This component generates the input for the other components in matrix form, associating an index with every patent in the set. The second component is the citation analysis, which takes the patents' citation graph and computes the main trajectories from it. This component, like the rest, is implemented as a set of MATLAB (The MathWorks, Inc.) scripts, with the exception of the SPLC, which for performance reasons required a C++ implementation. Also, the graph layout based on Force Directed Placement (FDP) required an optimization; in particular, we have exploited the performance of GPUs (Graphics Processing Units) by creating a CUDA-based (Nickolls and Buck, 2007) version of the FDP that replaces the reference implementation provided by Graphviz (Ellson et al., 2002). The textual analysis component receives the term-document matrix from the Python scripts and performs the LSA over the matrix, taking advantage of the sparse matrix capabilities of MATLAB. Finally, the visualization component integrates the data coming from the previous components with a 3D visualization, which in this phase has been provided by
MATLAB for fast prototyping. Future investigation will move into the domain of Virtual Environments for providing an immersive presentation of the patent maps.

5. Case Studies

In this section, the results of the application of the techniques discussed above to the analysis of specific technological areas are presented. These examples provide insights into the specific types of algorithms developed and, in general, into the methodology necessary to apply them for understanding a specific technology area. Each analysis has the same structure: an introduction with the motivation, the construction of the patent set, the identification of the major trajectories, and a brief discussion of the results.

5.1. Sensing for Virtual Reality

The first case study concerns sensing technologies for Robotics and Virtual Reality, with a specific focus on position and orientation sensors created with a variety of technologies. The objective of this study is to identify the major players in the area, the different technologies, and possible trends. The starting point for the construction of the patent set is a group of patents from well-known companies, like Polhemus, Ascension, and Invensense. The resulting patent set is made of 3617 patents taken from USPTO, ranging from the beginning of the 20th century up to 2006; it contains 1645 sources and 206 targets, 15 isolated patents, and 21 components, of which the biggest has about 3400 patents. The application of the citation analysis allows us to identify some trends that cover the last 30 years of patents. The resulting map is shown in Fig. 6:

• from accelerometers to micro-machined gyroscopes;
• rate and acceleration sensors with movable parts;
• magnetic sensors from initial types to disturbance detection;
• from magnetic compasses to electronic compasses in vehicles; and
• from optical trackers to object tracking for virtual reality.
5.2. Rehabilitation Devices

In this case study, a road-mapping of specific fields of technologies related to rehabilitation is discussed. In particular, we started from a preliminary query on USPTO patents in the IPC class A61 containing the word prefix "rehab" in the title or in the abstract. This starting set, after duplicate removal, was made of 622 patents. The second phase of the data extraction was based on the recursive retrieval of patents following the citation graph, giving 10,463 patents after two-level recursion, and 3851 with one-level recursion. This investigation is based on the set of 10,463 patents.
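Set statistics of the kind quoted here and in Sec. 3.1 can be reproduced, under the assumptions of the earlier sketches, with networkx's component utilities:

```python
import networkx as nx

# G: information-flow graph of the 10,463-patent set from the earlier sketches
isolated = list(nx.isolates(G))
components = sorted(nx.weakly_connected_components(G), key=len, reverse=True)
giant = G.subgraph(components[0])
sources = [n for n in giant if giant.in_degree(n) == 0]   # cite nothing inside the set
targets = [n for n in giant if giant.out_degree(n) == 0]  # cited by nothing inside the set
print(len(isolated), len(components), len(giant), len(sources), len(targets))
```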
Figure 6. Map of sensor-related patents highlighting the five main areas of technologies, weighted by the number of associated patents. In each area the major trajectories are presented as curves.
The major trajectories are as follows:

• from simple hearing aids to rehabilitation of hearing;
• from basic knee braces to anatomically and neurophysiologically designed knee braces;
• from leg-holders to training devices for joints.

Looking at the weights of the above trajectories, most of the patents are related to hearing aids and the rehabilitation of the hearing sense. Finally, the associated map is shown in Fig. 7.

6. Main Conclusions and Future Research Directions

The range of arguments about the positive social value of patents and the related applications is certainly more extensive than the one presented in this chapter. This is particularly true when the codified information stored in a patent database flows without much friction from one place to another; it can reach different parts of the world at very low cost by exploiting the diffusion of the Internet. Nonetheless, we have to consider some delicate aspects with respect to the patenting activities of research institutions. The main concern is that patent analysis covers
Figure 7. Map of the rehabilitation area, in which the highest peak is related to hearing aids, while the lower ones focus on leg braces and joints.
only patented technologies, discarding the ones that are not protected. Secondly, due to procedural reasons, there is a gap between the Priority Date and the Publication Date: the former represents the first date of filing of each patent, while the latter is the first public disclosure of the patent itself. This causes a delay in the identification of a potentially interesting new technology. Some economists stress the fact that intellectual property rights may create difficulties in managing and maintaining the related rewards; they sometimes slow down or drive away the scientific progress of research institutions due to restrictions caused by the disclosure agreements necessary for filing a patent application. Notwithstanding these weak points, which must be considered when framing the context of the analysis, the relevance of the information stored in patents remains unchanged. The tool introduced in this work can be extended in many directions. The first concerns the types of analysis performed, in particular the time and textual components, showing the relative weight between trajectories and areas in the map. The second concerns the mapping, which should increase its capacity of conveying information to the user, with particular attention given to the timing and roles of the different actors in the patent set. The analysis of the role of research institutions in the technology space could be improved by integrating the information coming from publications, with the objective of fusing bibliometrics with patent analysis. Finally, there is the fundamental move to the Virtual Environment, in which the map is transformed into an immersive environment that provides more flexibility and intuitiveness for the data interpretation process (Benford et al., 1995; Robertson et al., 1993).
References

Abernathy, W and J Utterback (1978). Strategic Management of Technology and Innovation. Patterns of Industrial Innovation. Homewood, IL, USA: McGraw-Hill/Irwin, pp. 154–160.
Abraham, B and S Moitra (2001). Innovation assessment through patent analysis. Technovation, 21(4), 245–252.
Albert, M, D Avery, F Narin and P McAllister (1991). Direct validation of citation counts as indicators of industrially important patents. Research Policy, 20(3), 251–259.
Benford, S, D Snowdon, C Greenhalgh, R Ingram, I Knox and C Brown (1995). VR-VIBE: A virtual environment for co-operative information retrieval. Computer Graphics Forum, 14(3), 349–360.
Berry, M and M Browne (2005). Understanding Search Engines: Mathematical Modeling and Text Retrieval. Philadelphia, US: Society for Industrial Mathematics.
Caldwell, B, E Wang, S Ghosh and C Kim (2005). Forecasting multiple generations of technology evolution: Challenges and possible solutions. International Journal of Technology Intelligence and Planning, 1(2), 131–149.
Camus, C and R Brancaleon (2003). Intellectual assets management: From patents to knowledge. World Patent Information, 25(2), 155–159.
Carson, J, E Nelson and N Durrance (2004). How to effectively build and protect business assets with a strategic patent portfolio. Intellectual Property Law Bulletin, 9, 1.
Charnes, A, W Cooper and E Rhodes (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429–444.
Christensen, C (1997). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Cambridge, MA: Harvard Business School Press.
Cohen, W, R Levin and D Mowery (1987). Firm size and R&D intensity: A re-examination. Technical Report, National Bureau of Economic Research.
Cohen, W and D Levinthal (2000). Strategic Learning in a Knowledge Economy: Individual, Collective and Organizational Learning Process. Absorptive capacity: A new perspective on learning and innovation. Boston, MA, USA: Butterworth-Heinemann, pp. 39–67.
Cohen, W, R Nelson and J Walsh (2000). Protecting their intellectual assets: Appropriability conditions and why US manufacturing firms patent (or not). Technical Report, National Bureau of Economic Research.
Corso, M and L Pellegrini (2007). Continuous and discontinuous innovation: Overcoming the innovator dilemma. Creativity and Innovation Management, 16(4), 333–347.
Daft, R and S Becker (1978). The Innovative Organization: Innovation Adoption in School Organizations. New York, USA: Elsevier.
Deerwester, S, S Dumais, G Furnas, T Landauer and R Harshman (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391–407.
Dosi, G (1982). Technological paradigms and technological trajectories: A suggested interpretation of the determinants and directions of technical change. Research Policy, 11(3), 147–162.
Ellson, J, E Gansner, L Koutsofios, S North and G Woodhull (2002). Graphviz - open source graph drawing tools. Lecture Notes in Computer Science, 2002(2265), 483–484.
esp@cenet (2005). Introduction to esp@cenet. URL http://ep.espacenet.com.
Ettlie, J (1983). A note on the relationship between managerial change values, innovative intentions and innovative technology outcomes in food sector firms. R&D Management, 13(4), 231–244.
Eysenbach, G (2006). Citation advantage of open access articles. PLoS Biology, 4(5), 10–15.
Fiala, D, F Rousselot and K Ježek (2008). PageRank for bibliographic networks. Scientometrics, 76(1), 135–158.
Fleming, L and O Sorenson (2001). Technology as a complex adaptive system: Evidence from patent data. Research Policy, 30(7), 1019–1039.
Fleming, L and O Sorenson (2004). Science as a map in technological search. Strategic Management Journal, 25(8/9), 909.
Frishman, Y and A Tal (2008). Online dynamic graph drawing. IEEE Transactions on Visualization and Computer Graphics, 14(4), 727–740.
Fruchterman, T and E Reingold (1991). Graph drawing by force-directed placement. Software: Practice and Experience, 21(11), 1129–1164.
Ghasemzadeh, F and N Archer (2000). Project portfolio selection through decision support. Decision Support Systems, 29(1), 73–88.
Gould, S and N Eldredge (1977). Punctuated equilibria: The tempo and mode of evolution reconsidered. Paleobiology, 3(2), 115–151.
Hall, B (2004). Exploring the patent explosion. The Journal of Technology Transfer, 30(1), 35–48.
Hall, B, A Jaffe and M Trajtenberg (2000). Market Value and Patent Citations: A First Look. National Bureau of Economic Research.
Harrison, S and K Rivette (1998). Profiting from Intellectual Capital. Extracting Value from Innovation. The IP portfolio as a competitive tool. New York, US: John Wiley & Sons, pp. 119–128.
Hayes, R and W Abernathy (2007). Managing our way to economic decline. Harvard Business Review, 85(7/8), 138.
Helo, P (2003). Technology trajectories in mobile telecommunications: Analysis of structure and speed of development. International Journal of Mobile Communications, 1(3), 233–246.
Henriksen, A and A Traynor (1999). A practical R&D project-selection scoring tool. IEEE Transactions on Engineering Management, 46(2), 158–170.
Hong, S (2006). The magic of patent information. The World Intellectual Property Organization, 10, p. 25, http://www.wipo.int/sme/en/documents/patent information.htm
Hull, D (1996). Stemming algorithms: A case study for detailed evaluation. Journal of the American Society for Information Science, 47(1), 70–84.
Hummon, N and P Doreian (1989). Connectivity in a citation network: The development of DNA theory. Social Networks, 11(1), 39–63.
Janssens, F (2007). Clustering of scientific fields by integrating text mining and bibliometrics. PhD thesis, Catholic University of Leuven.
Jenkins, M and S Floyd (2001). Trajectories in the evolution of technology: A multi-level study of competition in Formula 1 racing. Organization Studies, 22(6), 945.
Kargupta, H, W Huang, K Sivakumar and E Johnson (2001). Distributed clustering using collective principal component analysis. Knowledge and Information Systems, 3(4), 422–448.
Karki, M (1997). Patent citation analysis: A policy analysis tool. World Patent Information, 19(4), 269–272.
Kauffman, S (1993). The Origins of Order: Self-Organisation and Selection in Evolution. New York, NY, US: Oxford University Press.
Landauer, T, P Foltz and D Laham (1998). An introduction to latent semantic analysis. Discourse Processes, 25(2–3), 259–284.
Lin, B, C Chen and H Wu (2006). Patent portfolio diversity, technology strategy, and firm value. IEEE Transactions on Engineering Management, 53(1), 17–26.
Macdonald, S (1998). Information for Innovation: Managing Change from an Information Perspective. New York, NY, USA: Oxford University Press.
Magnusson, T, G Lindstrom and C Berggren (2003). Architectural or modular innovation? Managing discontinuous product development in response to challenging environmental performance targets. International Journal of Innovation Management, 7, 1–26.
Martinelli, A, M Meyer and N Tunzelmann (2008). Becoming an entrepreneurial university? A case study of knowledge exchange relationships and faculty attitudes in a medium-sized, research-oriented university. The Journal of Technology Transfer, 33(3), 259–283.
McFetridge, D and M Rafiquzzaman (1986). The scope and duration of the patent right and the nature of research rivalry. Research in Law and Economics: The Economics of Patents and Copyrights, 8, 91–120.
Meyer, M, J Utecht and T Goloubeva (2003). Free patent information as a resource for policy analysis. World Patent Information, 25(3), 223–231.
Michel, J and B Bettels (2001). Patent citation analysis. A closer look at the basic input data from patent search reports. Scientometrics, 51(1), 185–201.
Nickolls, J and I Buck (2007). NVIDIA CUDA software and GPU parallel computing architecture. Microprocessor Forum, May.
Page, L, S Brin, R Motwani and T Winograd (1998). The PageRank citation ranking: Bringing order to the web. Technical Report, Stanford University.
Pan, S and D Leidner (2003). Bridging communities of practice with information technology in pursuit of global knowledge sharing. Journal of Strategic Information Systems, 12(1), 71–88.
Papadimitriou, C, P Raghavan, H Tamaki and S Vempala (2000). Latent semantic indexing: A probabilistic analysis. Journal of Computer and System Sciences, 61(2), 217–235.
Powell, M (1987). Radial basis functions for multivariable interpolation: A review. Clarendon Press Institute of Mathematics and Its Applications Conference Series, 143–167.
Purchase, H, R Cohen and M James (1995). Validating graph drawing aesthetics. Lecture Notes in Computer Science, 1027(1), 435–446.
Robertson, G, S Card and J Mackinlay (1993). Information visualization using 3D interactive animation. Communications of the ACM, 36(4), 57–71.
Rosenberg, N (1982). Inside the Black Box: Technology and Economics. Cambridge, MA, US: Cambridge University Press.
Sani, E, A Frisoli and M Bergamasco (2007). Patent based analysis of innovative rehabilitation technologies. In Proceedings of Virtual Rehabilitation, pp. 96–101.
Saviotti, P and J Metcalfe (1984). A theoretical approach to the construction of technological output indicators. Research Policy, 13(3), 141–151.
Schmidt, R and J Freeland (1992). Recent progress in modeling R&D project-selection processes. IEEE Transactions on Engineering Management, 39(2), 189–201.
Schmoch, U and H Grupp (1992). Dynamics of Science-Based Innovation. Perceptions of scientification of innovation as measured by referencing between patents and
papers: Dynamics in science-based fields of technology. Berlin, DE: Springer Verlag, pp. 73–128.
Shapiro, C and H Varian (1999). Information Rules. Boston, MA: Harvard Business School Press.
Thomas, K (2007). Prophet of Innovation: Joseph Schumpeter and Creative Destruction. Cambridge, MA: Belknap Press of Harvard University Press.
Tian, Q, J Ma, J Liang, R Kwok and O Liu (2005). An organizational decision support system for effective R&D project selection. Decision Support Systems, 39(3), 403–413.
Trippe, A (2003). Patinformatics: Tasks to tools. World Patent Information, 25(3), 211–221.
van Dulken, S (1999). Free patent databases on the Internet: A critical view. World Patent Information, 21(4), 253–257.
Verspagen, B (2007). Mapping technological trajectories as patent citation networks: A study on the history of fuel cell research. Advances in Complex Systems, 10(1), 93.
Yeap, T, G Loo and S Pang (2003). Computational patent mapping: Intelligent agents for nanotechnology. In Proceedings of the International Conference on MEMS, NANO and Smart Systems, pp. 274–278.
Biographical Notes

Elisabetta Sani is currently a PhD student in Innovative Technologies at the Scuola Superiore S. Anna, Pisa, Italy, working in the PERCRO laboratory under the supervision of Prof. Massimo Bergamasco. She received an MSc degree in Economics in 2004, with a thesis on new technologies for cultural heritage management. Her research interests deal with the modeling of technology transfer between research institutions and industry; in particular, she has focused her attention on patent-based analysis as a tool for research planning. She has been involved in the management of several European projects and is workpackage leader of the Innovation Activities of the SKILLS IP.

Emanuele Ruffaldi (Eng., PhD) is currently an Assistant Professor in Applied Mechanics at PERCRO, Scuola Superiore S. Anna, Pisa, Italy. He obtained his PhD in Perceptual Robotics in 2006 from Scuola Superiore S. Anna, discussing a thesis on perceptually inspired haptic algorithms. He received an MSc degree in Computer Engineering in 2002 with a thesis on the visualization of databases based on the metaphor of Information Landscapes. His research interests are in the design of Virtual Reality systems in which humans, robots, and information are integrated. His interaction interests are related to haptics, in particular haptic rendering, and to the modeling of human skills through intelligent algorithms. He is currently involved in the research activities of the European projects SKILLS IP, ENACTIVE NoE, and Decision In Motion STREP. Emanuele has authored more than 20 papers published in international journals and in the proceedings of scientific workshops. He was the general chair of the ENACTIVE08 International Conference.
Massimo Bergamasco is a Full Professor in Applied Mechanics at the Experimental Science Faculty of Scuola Superiore S. Anna (SSSA), Pisa, Italy. His research activity deals with the study and development of haptic interfaces for the control of the interaction between humans and virtual environments, with particular interest in the kinematic aspects of the design of haptic mechanisms. He has been the Scientific Coordinator of several national projects and of nine EU projects. He has published more than 200 scientific papers in journals and international conference proceedings. He is a Member of the Editorial Boards of IEEE Computer Graphics and Applications, the Journal of the VR Society, the Journal Européen des Systèmes Automatisés, and Haptics-e. He is also a Member of the Organizing Committees of the following international conferences: SPIE Telemanipulator and Telepresence Technologies, IEEE RO-MAN, EuroHaptics, and IEEE WorldHaptics. During his research activities he has organized the following conferences/workshops: Workshop GLAD-IN-ART, First International Conference on Virtual Environments in Rehabilitation, FIVE 96, RO-MAN 99, and World Haptics Conference 2005. He is presently the Coordinator of the SKILLS IP on the capturing and transfer of human skills.
Chapter 21
Determining Key Performance Indicators: An Analytical Network Approach

DANIELA CARLUCCI∗ and GIOVANNI SCHIUMA∗,†

∗Center for Value Management, LIEG-DAPIT, Universita’ della Basilicata, Via dell’Ateneo Lucano, 10, Potenza, Italy
†Centre
for Business Performance, Cranfield School of Management, Cranfield, Bedfordshire MK43 0AL, UK
∗[email protected]
†[email protected]
Selecting performance indicators is one of the major challenges that companies have to face in developing a performance measurement system. This is because selecting means evaluating a set of performance indicators against multiple criteria, sometimes potentially conflicting. This chapter describes a decision model, based on the analytic network process (ANP) method, to drive managers in the selection of key performance indicators (KPIs). The model takes into account the main criteria required of performance indicators to enrich the quality of a company's information system, and involves the ANP to extract weights for setting the priorities among performance indicators. In particular, by applying the ANP, the weights do not merely result from a top-down process carried out by judging how well the performance indicators perform against the criteria, but also from a judgment process which takes into account the feedback relationships between the criteria and the performance indicators, as well as the mutual interactions of the indicators. This is important because when decision makers select performance indicators, they often do not consider the dependency of the criteria on the available performance indicators and the interdependency among indicators or, at most, consider those dependencies only vaguely. This chapter provides evidence for the feasibility of the model through its application to a real case.

Keywords: Performance indicators; selection; multicriteria decision method; analytic network process; case example.
1. Introduction

In today's competitive scenario, organizations are forced to monitor their performance on a sustained basis, across several dimensions. However, developing an effective performance measurement system remains a challenge. Several problems
March 15, 2010
516
14:45
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch21
D. Carlucci and G. Schiuma
regarding the building of a performance measurement system, such as the accuracy of data, the definition of meaningful metrics to support decision-making processes, and the measurement of a company's intangible aspects, continue to represent challenges for managers. In order to be useful, performance measurement systems should ensure that performance is clearly defined and that metrics accurately measure performance. Closely related to this subject is the selection of valuable performance indicators. The selection of performance indicators is one of the major challenges for companies developing and implementing a performance measurement system. Performance indicators represent the means for monitoring business processes and related outcomes to drive management actions. Therefore, the selection of performance indicators has to be addressed by limiting the number of indicators, to prevent information overload, to avoid confusion for their potential users, to provide a clear picture of the critical organizational competitive factors, and to facilitate the overall measurement. Any effective performance measurement system has to adopt a limited number of indicators, i.e., key performance indicators (KPIs), capable of providing an integrated and complete view of a company's performance and of measuring progress toward organizational goals. The problem is that managers often measure too much and spend a lot of time and effort quantifying all the aspects of the company. This results in the generation of a great number of indicators. The selection of performance indicators is a complex decision-making process for managers, as it requires taking into account several criteria as well as their relative weight in choosing indicators. In particular, selecting performance indicators is a multi-criteria decision-making (MCDM) problem, as it regards a finite set of available performance indicators characterized by multiple criteria, sometimes potentially conflicting. The selection aims to define the "optimum" set of indicators that meets the management's needs. Recently, different evaluation problems and studies have applied the ANP as a method to solve MCDM problems (Erdogmusa et al., 2005; Meade and Presley, 2002; Meade and Sarkis, 1998, 1999). The use of the ANP to address topics related to performance measurement and management is still very limited; by contrast, the analytic hierarchy process (AHP) has had a wider application. While acknowledging that forcing an ANP model does not always produce better results than using the hierarchies of the AHP, this chapter argues that the ANP disentangles the question of the choice of performance indicators better than the AHP. In particular, in this chapter, a decision model based on the application of the ANP (Saaty, 1988, 1996, 2004) is proposed to drive managers in the selection of the KPIs to include within a performance measurement system. The proposed model takes into account the main qualitative characteristics required of performance indicators to enrich the quality of a company's information system and, therefore, to support decision-making processes. These characteristics have been defined by reviewing
the management literature focused on the analysis of information quality, as well as of the qualitative aspects required of financial and performance information. They represent the basic criteria for selecting KPIs and make up, along with the performance indicators, the building blocks of the model. The strength of the model is basically related to the application of the ANP for selecting and prioritizing performance indicators. In particular, the application of the ANP allows the model to handle dependencies among indicators and to improve the quality of the selection of performance indicators. In fact, by applying the ANP, the relative importance of the performance indicators to be selected does not merely result from a top-down process carried out by judging how well the performance indicators perform against the criteria, but also from a judgment process which takes into account the feedback relationships between the criteria and the performance indicators, as well as the mutual interactions of the indicators. This chapter is organized as follows. First, the research background is introduced; specifically, a brief discussion about the outstanding subject of performance measurement and the importance of selecting KPIs is presented. Then, looking at the accounting and management information system literature, the basic criteria for assessing performance indicators are defined. Afterward, on the basis of theoretical insights, the ANP-based decision model is introduced; specifically, the methodology of the ANP is discussed, as well as the relevance of and reasons for the adoption of the ANP for selecting performance indicators. Then, the chapter reports a detailed description of the application of the model in a real case, referring to the assessment and selection of performance indicators regarding the manufacturing process of an Italian manufacturer operating in the sofa industry. Finally, in the light of the empirical evidence, this chapter provides a critical analysis of the model and outlines some possible future extensions and developments in research and practice.

2. Performance Measurement Systems and KPI

In the last decades, the increasing pressure of global competition and the continuous and fast changes of the competitive scenario have gradually pushed companies to reexamine and improve their management practices. In particular, faced with the need to enhance the level of "added value" in their processes, products and services, as well as to continuously enhance the value created for their stakeholders, companies have acknowledged the importance of measuring and effectively managing their performance. In particular, as argued by Neely (1998), performance measurement has become an outstanding topic on the management agenda due to seven main reasons: (i) the changing nature of work; (ii) increasing competition; (iii) specific improvement initiatives; (iv) national and international quality awards; (v) changing organizational roles; (vi) changing external demands; and (vii) the power of information technology. Looking at the recent management literature, the subject of performance measurement appears particularly vast and extremely attractive. Numerous authors regularly add to the body of literature on the subject.
The amount of theoretical contributions on the subject demonstrates the lively debate that currently exists on performance measurement and its importance within the business community. Specific attention has been paid to the definition of performance measures (Kaplan and Norton, 1996, 2000; Neely et al., 1995; Neely, 2005). In this regard, the management literature highlights that performance measurement has to consider a multidimensional panel of performance indicators, able to capture a multidimensional view of a company (Neely and Wilson, 1992). In particular, performance measurement has to embrace financial and non-financial aspects. It has to consider both the internal and the external organizational context, and to include measures both related to achieved results and suitable for predicting the future. In other terms, performance indicators should capture a company's performance from several perspectives. On the other hand, a great number of measures can disorient managers and undermine a company's ability to effectively plan and implement its strategy. It follows that determining the best performance measures for a company is undeniably a complex decision problem. Several recent performance measurement models (e.g., the Balanced Scorecard, Kaplan and Norton, 1996; EFQM, 1999; MBQA, 2007; the Performance Prism, Neely et al., 2000) have highlighted the importance of introducing a selected number of KPIs able to provide a holistic overview of the company. KPIs are at the heart of a performance measurement system. They define the data to be collected to measure progress and enable the actual results achieved over time to be compared with the planned results. Thus, they are an indispensable management tool for making performance-based decisions about a company's program strategies and activities. The selection of meaningful performance indicators, such as KPIs, is a critical managerial and organizational task. As outlined by Neely (1998a), through the right measurements an organization can: (i) check its position, so that it knows where it is and where it is going; (ii) communicate its position according to two perspectives — that is, an internal perspective (i.e., the organization internally communicates to reward and/or spur individuals and teams) and an external perspective (i.e., the organization externally communicates to cope with legal requirements or a market's needs); (iii) confirm priorities, since by measuring it is possible to identify how far the company is from its goals; and (iv) compel progress, which means that an organization can use measurement as a means to motivate and communicate priorities, and as a basis for reward. One of the main dilemmas related to the implementation of performance measurement systems is that managers often measure too much and spend a lot of time and effort quantifying all the aspects of the company. One of the possible reasons for this is that the development and selection of performance measurement indicators is a complex multicriteria decision problem, which involves various issues, such as defining multiple criteria and taking
into account the mutual dependency between the importance of the criteria and the available performance indicators, as well as the dependency of the priority of each performance indicator on the other available indicators. Particularly important is the issue of the interactions between performance indicators. As underlined by Lange et al. (2007), performance indicators are only a measurable expression of the underlying system performance, a system which is ordinarily complex in nature. It follows that it would be beneficial to consider and understand the interdependences between performance indicators, in order to better utilize them in evaluating the options for improving system performance and in monitoring an often complex system. In summary, selecting appropriate and useful KPIs is a crucial and, at the same time, intricate question. Dealing with it requires careful thought, iterative refining, collaboration, and consensus building at the organizational level. The use of decision support systems can properly support managers in facing this critical matter.

3. Selection Criteria for Performance Indicators
The selection of indicators can be considered the final stage of the definition of performance indicators. Once a useful initial list of performance indicators has been created, the next step is to assess every possible indicator against a set of criteria which guarantee the quality and meaningfulness of the indicators. This assessment aims at defining the "optimum" set of indicators that meets the management's needs or, in other terms, the KPIs. A number of studies have addressed the criteria for assessing the quality and utility of the information enclosed in performance indicators. With specific reference to the accounting information of financial reports, the Accounting Standards Board (1991) and the Financial Accounting Standards Board (1980) examine the characteristics that make accounting information useful. In particular, the Financial Accounting Standards Board (1980) makes a clear distinction between user-specific qualities, such as understandability, and qualities inherent to accounting information. It then identifies a number of criteria to assess the quality of accounting information in terms of decision usefulness, such as relevance, comparability, and reliability. Additionally, for each of them, it specifies several detailed qualities and highlights the importance that the perceived benefits derived from the disclosure of a performance indicator exceed the perceived costs associated with it. Moreover, the Financial Accounting Standards Board (1980) outlines that, although ideally the choice of an indicator should produce information satisfying all the cited criteria, sometimes it is necessary to sacrifice some of one quality to gain in another. Further suggestions about the selection criteria for performance indicators have been provided in the literature and particularly in the management information system
literature. Holzer (1989) proposes criteria for selecting performance indicators and measures, distinguishing data criteria (i.e., availability, accuracy, timeliness, security, and costs of data collection) from measurement criteria (i.e., validity, uniqueness, and evaluation). Niven (2006) argues that performance indicators have to be linked to strategy, quantitative, built on accessible data, easily understood, counterbalanced, relevant, and commonly defined. According to USAID (1996), good performance indicators are direct, objective, adequate, quantitative where possible, disaggregated where appropriate, practical, and reliable. Ballou et al. (1998), modeling an information manufacturing system, consider four criteria for information products: timeliness, data quality, cost, and value. Wang and Strong (1996), analyzing data quality for consumers, maintain that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible. This analysis of the literature highlights how many criteria, frequently interchangeable and overlapping in meaning, have been proposed. Starting from the suggestions and concepts provided in both the accounting and the management information literature, the following criteria have been identified for selecting relevant performance indicators useful for decision making.

3.1. [Cr.1] Relevance
A relevant performance indicator provides information that makes a difference in a decision by helping users either to form predictions about the outcomes of past, present, and future events or to confirm or correct prior expectations. It deals with predictive value and/or feedback value. Feedback value refers to the quality of information that enables users to confirm or correct prior expectations, while predictive value stands for the quality of information that helps users to increase the likelihood of correctly forecasting the outcome of past or present events (Financial Accounting Standards Board, 1980). A critical feature of relevance is timeliness. In fact, the information provided by the indicator has to be available to decision makers before it loses its capacity to influence decisions.

3.2. [Cr.2] Reliability
Reliability refers to the quality of a performance indicator that assures that it is reasonably free from error and bias, and faithfully represents what it purports to represent (Financial Accounting Standards Board, 1980). Reliability is therefore related to the directness or adequateness of information, i.e., the capacity of an indicator to measure as closely as possible the result it intends to measure. High directness means a lack of duplicated information provided by indicators or, in other terms, uniqueness of the indicators. The Financial Accounting Standards Board (1980) describes reliability in terms of representational faithfulness, verifiability, and neutrality. Representational
faithfulness is the correspondence between a measure and the phenomenon that it purports to represent. One of the issues that affect representational faithfulness is the availability of the data used to build indicators. This availability, in turn, affects the costs of data collection. Verifiability is the ability, through consensus among measurers, to ensure that information represents what it purports to represent or that the chosen measurement method has been used without error or bias. Finally, neutrality is the absence, in reported information, of bias intended either to attain a predetermined result or to induce a particular mode of behavior.

3.3. [Cr.3] Comparability
Comparability refers to the quality of the information related to a performance indicator that enables users to identify similarities and differences between two sets of economic phenomena, while consistency is the conformity of an indicator from period to period with unchanging policies and procedures. The Financial Accounting Standards Board (1980) underlines: Information about a particular enterprise gains greatly in usefulness if it can be compared with similar information about other enterprises and with similar information about the same enterprise for some other period or some other point in time. Comparability between enterprises and consistency in the application of methods over time increases the information value of comparisons of relative economic opportunities or performance (p. 6).
3.4. [Cr.4] Understandability
This criterion deals with aspects related to the meaning and format of the data collected to build a performance indicator. Performance indicators have to be interpretable as well as easy for users to understand. They have to be easily communicated and understood both internally and externally, or at least presented in an easily understandable and appealing way to both the target audience and users. Moreover, indicators have to be concise and unsophisticated. The cited criteria represent the essential reference for selecting the most appropriate indicators and achieving the goal of effectively and efficiently measuring performance in the company.

4. A Network Model to Select and Prioritize Performance Indicators
Focusing specifically on the decision problem of selecting performance indicators from among those defined in an initial list, this study proposes a network model, based on the use of the ANP method, to support management in assessing and selecting the KPIs related to a specific performance dimension of an organization. In the following, the network model, along with the methodology underpinning its construction, is described in detail.
4.1. The ANP Methodology
The ANP is a multi-criteria decision-making method which extends the AHP (Saaty, 1980). The AHP is a widely used MCDM method based on the representation of a decision problem by a hierarchical structure, in which the elements are uncorrelated and unidirectionally affected by the hierarchical relationships. Based on pairwise comparisons to derive ratio scale priorities for the distribution of influence among the elements, the AHP helps people to set priorities and make the best decision when both qualitative and quantitative aspects of a decision need to be considered. However, the AHP has several shortcomings. One of the main shortcomings is that decision-making processes cannot always be structured through a hierarchy of the elements — generally goal, criteria, and alternatives — involved in the decision problem. This can be due to interactions and feedback dependencies between elements that belong to the same and/or different levels of the hierarchy. For example, it has long been observed that decision making is not strictly a top-down process carried out by judging how well the alternatives of choice perform on the criteria. The criteria themselves are often dependent on the available alternatives. More specifically, not only does the importance of the criteria determine the importance of the alternatives, as in a hierarchy, but the importance of the alternatives may also affect the importance of the criteria (Saaty, 1996). This calls for some kind of iteration or feedback dependency among the decision elements. As a result, for some decision-making problems, a more holistic approach is needed, one capable of capturing all kinds of interactions, making more accurate predictions and, finally, supporting better decisions. The ANP, developed by Saaty (1996), can be applied to satisfy such a request. The ANP generalizes the AHP by replacing hierarchies with a network system, which comprises all the possible elements of a problem and their connections. In other terms, while the AHP decomposes a problem into several levels containing the different elements of a decision in such a way that they form a hierarchy, the ANP does not impose a strict hierarchical structure but enables interrelationships among the decision levels and elements in a more general form, by modeling the decision problem using a network structure. In particular, the network structure consists of clusters of elements, rather than elements arranged in levels. The simplest network model has a goal cluster containing the goal element, a criteria cluster containing the criteria elements, and an alternatives cluster containing the alternative elements. The major difference between the AHP and the ANP is the existence of feedback relationships among the levels within the network system in the ANP. The ANP, by using the network, makes it possible to include all the factors and criteria that have a bearing on making the best decision, and to capture both interaction and feedback within clusters of decision elements (inner dependence) and between clusters (outer dependence). While outer dependence implies dependence among clusters in a way that allows for feedback circuits, inner dependence is related to dependence within a cluster combined with feedback among clusters.
Figure 1. Feedback network with clusters (Cl.1, Cl.2, Cl.3, Cl.4) having inner and outer dependence among their elements; looped arcs denote inner dependence, and arcs between clusters denote outer dependence and feedback.
A network system with feedback and inner and outer dependency representation is given in Fig. 1. In the graphical representation, there are two kinds of arcs, within the same level and among levels. The first, a looped arc, shows the inner dependency relationships that occur within the same level of analysis. The second, a hierarchical arc, shows the dominance or control of one level of elements over another set of elements. The implementation of the ANP involves four main steps.

4.1.1. Step 1 — Model construction and problem structuring
The first step consists of clearly defining the decision problem and structuring it into a rational system, such as a network. The network model can be built on the basis of the opinions of decision makers and, possibly, experts, which can be collected by means of several methods such as interviews, brainstorming, focus groups, and so on. The model has to include the information and issues, appropriately consolidated and categorized, deemed relevant to the decision. In this phase, it is also important to establish the body of decision makers involved in the decision-making process.

4.1.2. Step 2 — Pairwise comparisons matrices of interdependent component levels
Similar to the AHP, the ANP is based on pairwise comparisons used to derive ratio scale priorities for the distribution of influence among the elements and clusters of the network. In particular, on the basis of the inner and outer dependencies, the elements of each cluster, and the clusters themselves, are compared pairwise. As in the AHP, in each pairwise comparison decision makers compare two elements or two clusters at a time in terms of their relative importance with respect to their particular upper-level element or cluster, and express their
judgments on the basis of Saaty's scale (1980). In particular, Saaty's scale allows the decision maker to assign relative ratings by expressing his or her preference between each pair of elements verbally as equally important, moderately more important, strongly more important, very strongly more important, or extremely more important. These verbal preferences are then translated into the numerical values 1, 3, 5, 7, and 9, respectively, with 2, 4, 6, and 8 as intermediate values for comparisons between two successive judgments. Therefore, by comparing the decision elements of the network pairwise, relative ratings are assigned and a pairwise comparison matrix is formed as a result (see Fig. 2). In the matrix, aij denotes the importance of the ith element compared to the jth element, and the reciprocal relation aij = 1/aji expresses the ratio scale nature of the paired comparisons; w1, w2, ..., wn stand for the relative weights of the decision elements. Once the pairwise comparisons have been completed, the relative weights of the decision elements are estimated by computing eigenvalues and eigenvectors from the following relation (Saaty, 1980): Aw = λmax w, where λmax is the largest eigenvalue of the pairwise comparison matrix A, and w is the corresponding eigenvector, i.e., the weight vector of A.
A = (aij), where aij = wi/wj stands for the ratio of the weights of the ith and jth decision elements:

        | w1/w1  w1/w2  ...  w1/wn |   | a11  a12  ...  a1n |
    A = | w2/w1  w2/w2  ...  w2/wn | = | a21  a22  ...  a2n |
        |  ...    ...   ...   ...  |   | ...  ...  ...  ... |
        | wn/w1  wn/w2  ...  wn/wn |   | an1  an2  ...  ann |

with aij = 1/aji if i ≠ j, and aij = 1 if i = j.

Figure 2. Pairwise comparison matrix.
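Operationally, this eigenvector computation can be sketched in a few lines of Python/NumPy, as shown below. This is a minimal illustration for concreteness; the function name and the example judgments are ours, not part of the chapter's case study.

```python
import numpy as np

def priority_vector(A):
    """Return the normalized principal eigenvector (the local priority or
    weight vector w) of a pairwise comparison matrix A, together with the
    largest eigenvalue lambda_max."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # index of the largest eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)       # principal eigenvector of A
    return w / w.sum(), lam_max          # normalized so that sum(w) = 1

# Hypothetical example: a 3x3 reciprocal matrix from the Saaty-scale
# judgments a12 = 3, a13 = 5, a23 = 2.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam_max = priority_vector(A)          # w is approx. (0.65, 0.23, 0.12)
```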
In the assessment process, problems of consistency may occur. Therefore, the next step consists of examining the consistency of the judgments in the pairwise comparisons. Saaty (1988) provided suggestions to check and fix possible inconsistency of judgments. In particular, Saaty (1980) introduced the consistency index (CI) and the consistency ratio (CR), which are defined as

    CI = (λmax − n)/(n − 1),    CR = CI/RI,

where n is the number of elements being compared in the matrix, and RI is the Random Index, i.e., the average CI for numerous random entries of same-order reciprocal matrices. Decision makers' judgments are considered consistent if CR ≤ 0.1. In case CR > 0.1, decision makers are asked to revise their judgments in order to obtain a consistent new comparison matrix. Finally, a weight or local priority vector for each comparison matrix can be obtained. It is important to underline that, if a number of decision makers are involved in the evaluation, averages can be used to compute the collective weightings. In particular, Saaty (1980) suggests that, to integrate decision makers' preferences, geometric averages yield better results.
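The consistency check, together with the geometric-mean aggregation of several decision makers' judgments, can be sketched in the same spirit (this reuses the priority_vector routine above; the RI values are Saaty's tabulated averages for matrices of order 1 to 10):

```python
import numpy as np

# Saaty's average random consistency indices (RI) for reciprocal matrices
# of order n = 1, ..., 10.
RANDOM_INDEX = [0.00, 0.00, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def consistency_ratio(A):
    """Return (CI, CR) of a pairwise comparison matrix A; judgments are
    conventionally accepted when CR <= 0.1."""
    n = A.shape[0]
    if n <= 2:
        return 0.0, 0.0                  # 1x1 and 2x2 reciprocal matrices are always consistent
    _, lam_max = priority_vector(A)      # routine sketched above
    ci = (lam_max - n) / (n - 1)
    return ci, ci / RANDOM_INDEX[n - 1]

def aggregate_judgments(matrices):
    """Combine the comparison matrices of several decision makers into a
    single collective matrix via the element-wise geometric mean, which
    preserves the reciprocal property a_ij = 1/a_ji."""
    stack = np.array(matrices, dtype=float)
    return np.exp(np.log(stack).mean(axis=0))
```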
4.1.3. Step 3 — Supermatrix formation
The supermatrix is the tool for determining the global priorities in a network system characterized by interdependent influences among its elements. The supermatrix concept is reminiscent of the Markov chain process (Saaty, 1996). The supermatrix is a partitioned matrix, where each submatrix is composed of a set of relationships between two levels in the network model. The ANP involves three kinds of supermatrices, i.e., the unweighted supermatrix, the weighted supermatrix, and the limit supermatrix, which are formed one after the other. The unweighted supermatrix contains, in the appropriate columns, the local priority vectors determined in the previous step. Denoting the clusters of a network system by Ck, k = 1, . . . , n, and by ek1, ek2, . . . , ekmk the mk elements contained in each cluster Ck, a standard form of this supermatrix (Saaty, 1996) can be depicted as in Fig. 3.

             C1    ...   Ck    ...   Cn
    C1     | W11   ...   W1k   ...   W1n |
    ...    | ...         ...         ... |
W = Ck     | Wk1   ...   Wkk   ...   Wkn |
    ...    | ...         ...         ... |
    Cn     | Wn1   ...   Wnk   ...   Wnn |

Figure 3. A standard form of supermatrix (Saaty, 1996); the rows and columns of each block Wkj are indexed by the elements ek1, . . . , ekmk and ej1, . . . , ejmj of the corresponding clusters.

Generally, this supermatrix is rarely stochastic because, in each column, it consists of several eigenvectors which each sum to one, and hence an entire column of the matrix may sum to an integer greater than one. To address this, Saaty (1996) suggests determining the influence of the clusters on each cluster with respect to the control criterion. This yields an eigenvector of the influence of all the clusters on each cluster. The cluster priorities of such an eigenvector are used to weight all the elements in the blocks of the supermatrix that correspond to the elements of the influencing and the influenced clusters. The result is a stochastic supermatrix, called the weighted supermatrix. The limit supermatrix is obtained by raising the weighted supermatrix to the power 2k + 1, where k denotes an arbitrarily large number, until it converges to a limit or to the Cesàro sum. This is because raising a matrix to powers reveals the long-term relative effects of the elements on one another. The limit supermatrix has the same form as the weighted supermatrix, but all of its columns are the same. The limit supermatrix provides a meaningful weight of influence of each factor on every other factor in the decision model, wherein all possible interactions are captured in the process of convergence. In fact, the values of the limit supermatrix stand for the overall priorities, which embrace the cumulative influence of each element on every other element with which it interacts (Saaty and Vargas, 1998).

4.1.4. Step 4 — Prioritizing and selecting alternatives
The values in the alternatives column of the limit supermatrix show the priority weights of the alternatives. If the supermatrix does not cover the whole network and only comprises interrelated components, then additional calculations have to be performed to determine the overall priorities of the alternatives. The alternative with the highest overall priority should then be selected.
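A minimal sketch of Steps 3 and 4 follows, again as an illustration of ours rather than a prescribed implementation. It assumes the simple weighting convention used later in the chapter's application, namely that all clusters are equally important, and it approximates the limit supermatrix by repeated squaring:

```python
import numpy as np

def weight_supermatrix(W, clusters):
    """Turn an unweighted supermatrix into a column-stochastic (weighted)
    one. `clusters` is a list of slices partitioning the rows into clusters,
    e.g. [slice(0, 4), slice(4, 11)]. Each nonzero block of every column is
    rescaled so that the blocks carry equal weight and the column sums to
    one, i.e. the clusters are treated as equally important."""
    W = np.array(W, dtype=float)
    for j in range(W.shape[1]):
        col = W[:, j]
        live = [c for c in clusters if col[c].sum() > 0]
        for c in live:
            col[c] /= len(live) * col[c].sum()
    return W

def limit_priorities(W, tol=1e-12, max_iter=100):
    """Approximate the limit supermatrix by repeatedly squaring the weighted
    supermatrix and return one of its (identical) columns as the vector of
    overall priorities. Assumes the powers converge; for cyclic structures
    a Cesaro average of successive powers would be needed instead."""
    P = np.array(W, dtype=float)
    for _ in range(max_iter):
        Q = P @ P
        if np.abs(Q - P).max() < tol:
            break
        P = Q
    return Q[:, 0]
```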
Due to its features, the ANP has been applied to a large variety of decisions: from a decision problem concerning nuclear power plant licensing (Hämäläinen and Seppäläinen, 1986) to the assessment of the logistics strategies of an organization (Meade and Sarkis, 1998), and from the selection of projects (Lee and Kim, 2001) to the quantifying of the "strategic service vision" of a company (Partovi, 2001), and so on. As far as the performance measurement and management area is concerned, however, the use of the ANP is still very limited. Sarkis (2003), revisiting the works of Suwignjo et al. (2000) and Bititci et al. (2001), proposes the use of the ANP for quantifying the combined effects of several factors, both tangible and intangible, on organizational performance measures. By contrast, the AHP has had a wider application (Ahsan and Bartlema, 2004; Chan and Lynn, 1991; Chavis et al., 1996; Cheng and Heng, 2001; Heeseok et al., 1995; Saaty and Erdener, 1979). For example, Chavis et al. (1996) propose a formal approach, based on the AHP, to integrate multiple measurements in a divisional performance evaluation, while Theriou and Maditinos (2007) propose a method based on the AHP to connect directly the various performance measures of a Balanced Scorecard with the stated goals and objectives of a firm. Even if we acknowledge that it is not necessarily true that forcing an ANP model always produces better results than using the hierarchies of the AHP, we believe that the ANP disentangles the question of the choice of performance indicators better than the AHP. In the following, the reasons underlying this assumption are argued and the conceptual decision model for selecting performance indicators is described.
4.2. The ANP-Based Model
As argued, the ANP allows the disentanglement of a decision problem by taking into account the feedback relationships between decision elements. This is particularly important when we consider the problem of selecting performance indicators, as there are feedback relationships between criteria and performance indicators, as well as among indicators, to take into account. Generally, when decision makers select performance indicators, they often do not consider the dependency of the criteria on the available performance indicators and the interdependency among indicators or, at most, consider those dependencies in an implicit way, without the possibility of addressing them through a rigorous approach. This could compromise the quality of the results of the selection. The decision model that we propose provides a more accurate and practicable approach to the selection of performance indicators. The model, based closely on the ANP method, consists of two clusters, named criteria and performance indicators. The criteria included in the model are relevance, reliability, comparability, and understandability. The network model is characterized by two interdependencies among levels: between criteria and indicators, represented by two-way arrows, and within the level of the indicators themselves, represented by a looped arc. Figure 4 provides a basic picture of the proposed ANP-based decision model.
Criteria: Cr.1, Cr.2, Cr.3, ...
Performance indicators: PI.1, PI.2, PI.3, ...

Figure 4. The network model for selecting KPIs (two-way arrows connect the criteria and performance indicator clusters; a looped arc marks the inner dependence among the indicators).

                          Criteria    Performance indicators
Criteria                     C                  B
Performance indicators       A                  D

Figure 5. Supermatrix structure of the ANP-based model for selecting performance indicators.
The network model involves a supermatrix which comprises four matrices: matrices A and B, which represent the interdependencies between the two clusters, criteria and indicators; matrix D, which represents the interdependence of the performance indicators on themselves; and, finally, C, which is a zero matrix. Figure 5 shows the supermatrix. In the following, a practical application of the ANP-based model is detailed in a series of steps. It refers to the assessment and selection of performance indicators of the manufacturing process of a manufacturer operating within the sofa industry.

5. An Application of the ANP-Based Model for Selecting Performance Indicators of a Sofa Manufacturing Process
The investigated company is a medium-size manufacturer operating within the Murgia sofa district, in southern Italy. The application of the ANP-based model has been justified by the top managers' need to weigh the relative importance of the existing manufacturing process performance indicators in order to identify those indicators capable of providing suitable information for driving and assessing management decisions and actions, i.e., the KPIs. The research methodology used for implementing the model included analyses of existing documents, interviews, and targeted focus groups. These involved managers along with researchers; in particular, the researchers acted as facilitators, when necessary, throughout the model implementation.
The ANP-based model has been implemented in several steps, described in the following. In particular, the computations related to the ANP application have been carried out with the Super Decisions software.

5.1. Step 1 — Model Construction and Problem Structuring
The first step has been to construct a model to be evaluated. For this purpose, the network model in Fig. 4 has been tailored to the practical decision problem at hand, i.e., selecting the "best" set of performance indicators for assessing the manufacturing process of the company. The set of performance indicators to be evaluated has been determined by means of both analysis of documents and interviews carried out with top managers. Table 1 shows the performance indicators and the related descriptions.

Table 1. Performance Indicators (decision variables/alternatives).

[Ind.1]  Actual leather consumption − Estimated leather consumption (daily)
[Ind.2]  Employees' expenses/turnover (monthly)
[Ind.3]  Number of claims filed during the process (daily)
[Ind.4]  Number of supplies' claims (daily)
[Ind.5]  Number of shifts of the delivery dates of orders/planned orders (daily)
[Ind.6]  Working minutes per employee/estimated minutes (daily)
[Ind.7]  Working minutes per department/estimated minutes (daily)

5.2. Step 2 — Pairwise Comparisons Matrices of Interdependent Component Levels
This phase has been carried out during a targeted focus group. Eliciting preferences among the various elements of the network model requires a series of pairwise comparisons, based on Saaty's scale, in which managers have compared two components at a time with respect to a "control" criterion. As the network is a feedback network, decision makers compared the indicators for preference with respect to all the criteria, and they also compared the prevalence of the criteria for each indicator. Moreover, in the case of the indicators, due to the inner dependence, they compared elements with respect to themselves. Within this illustrative example, the relative importance of the indicators with respect to each specific criterion has first been determined. A pairwise comparison matrix has been built for each of the four criteria for the calculation of the impacts of each of the indicators. A sample question used in this comparison process has been: with respect to "relevance," which indicator is preferred, "Number of supplies' claims" or "Working minutes per employee/estimated minutes"? Moreover, seven pairwise comparison matrices have been determined for the calculation of the relative impacts of the criteria on a specific indicator. A sample question used in this comparison process has been: which is the more pronounced or prevalent characteristic of "Actual leather consumption − Estimated leather consumption": its "relevance" or its "comparability"? Therefore, to fully describe these two-way relationships, 11 pairwise comparison matrices have been required. In addition, pairwise comparisons have been performed for the calculation of the influence of some of the indicators on the other indicators. To capture these dependencies within the indicators level, two pairwise comparison matrices have been used. A sample question used in this comparison process has been: with respect to "Number of shifts of the delivery dates of orders/planned orders," which is the more influential indicator, "Number of claims filed during the process" or "Number of supplies' claims"? Once the pairwise comparisons have been completed, the local priority vector w has been computed as the principal eigenvector solution of Aw = λmax w, where λmax is the largest eigenvalue of the pairwise comparison matrix A. The consistency of each pairwise comparison matrix has also been checked. In case of inconsistency, the top managers have been invited to revise their judgments. Table 2 shows, as an example, the pairwise comparison matrix of the performance indicators with respect to the relevance criterion. The last column of the matrix shows the resulting priorities for this matrix.

Table 2. Performance Indicators Pairwise Comparison Matrix for the Relevance Criterion and Eigenvector.

Relevance  [Ind.1]  [Ind.2]  [Ind.3]  [Ind.4]  [Ind.5]  [Ind.6]  [Ind.7]  Priorities vector
[Ind.1]     1.000    1.000    3.000    2.000    3.000    0.500    0.500    0.1827
[Ind.2]     1.000    1.000    1.000    1.000    3.000    1.000    1.000    0.1549
[Ind.3]     0.333    1.000    1.000    2.000    1.000    1.000    1.000    0.1321
[Ind.4]     0.500    1.000    0.500    1.000    1.000    1.000    1.000    0.1123
[Ind.5]     0.333    0.333    1.000    1.000    1.000    0.500    0.500    0.0808
[Ind.6]     2.000    1.000    1.000    1.000    2.000    1.000    1.000    0.1685
[Ind.7]     2.000    1.000    1.000    1.000    2.000    1.000    1.000    0.1685
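As a purely illustrative cross-check of ours (reusing the priority_vector and consistency_ratio sketches of Sec. 4.1.2), the judgments of Table 2 can be fed into the eigenvector computation; the result should reproduce the published priorities up to rounding:

```python
import numpy as np

# Judgments of Table 2 (rows and columns ordered Ind.1, ..., Ind.7).
relevance = np.array([
    [1.000, 1.000, 3.000, 2.000, 3.000, 0.500, 0.500],
    [1.000, 1.000, 1.000, 1.000, 3.000, 1.000, 1.000],
    [0.333, 1.000, 1.000, 2.000, 1.000, 1.000, 1.000],
    [0.500, 1.000, 0.500, 1.000, 1.000, 1.000, 1.000],
    [0.333, 0.333, 1.000, 1.000, 1.000, 0.500, 0.500],
    [2.000, 1.000, 1.000, 1.000, 2.000, 1.000, 1.000],
    [2.000, 1.000, 1.000, 1.000, 2.000, 1.000, 1.000],
])

w, lam_max = priority_vector(relevance)
# w should be close to (0.1827, 0.1549, 0.1321, 0.1123, 0.0808, 0.1685, 0.1685)
ci, cr = consistency_ratio(relevance)   # CR is expected to fall below the 0.1 threshold
```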
5.3. Step 3 — Supermatrix Formation
The unweighted supermatrix, the weighted supermatrix, and the limit supermatrix have then been determined. In particular, the unweighted supermatrix has been formed by placing the weighted priorities of each of the 13 pairwise comparison matrices in the appropriate columns: matrices A and B of the network model have been built from the 11 matrices related to the relationships between the two components, i.e., criteria and performance indicators, while matrix D of the network model has been built from the two matrices related to the interdependence among the indicators. The weighted supermatrix has been built considering the clusters to be equally important. By raising the weighted supermatrix to an arbitrarily large power, the convergence of the interdependent relationships has been attained or, in other terms, "long-term" stable weighted values have been achieved. These values appear in the limit supermatrix, which is column-stochastic and represents the final eigenvector. Tables 3 and 4 show, respectively, the unweighted and the limit supermatrix.

5.4. Step 4 — Prioritizing and Selecting Alternatives
The main results of the ANP application are the overall priorities of the indicators, obtained by synthesizing the priorities of the indicators from the entire network.
Table 3. Unweighted Supermatrix.

W =     Cr1    Cr2    Cr3    Cr4    Ind1   Ind2   Ind3   Ind4   Ind5   Ind6   Ind7
Cr1    0.000  0.000  0.000  0.000  0.122  0.106  0.240  0.241  0.127  0.121  0.105
Cr2    0.000  0.000  0.000  0.000  0.269  0.372  0.288  0.309  0.347  0.347  0.333
Cr3    0.000  0.000  0.000  0.000  0.507  0.372  0.388  0.309  0.383  0.377  0.408
Cr4    0.000  0.000  0.000  0.000  0.107  0.150  0.083  0.142  0.142  0.155  0.154
Ind1   0.143  0.183  0.142  0.142  0.000  0.000  0.000  0.000  0.000  0.000  0.000
Ind2   0.143  0.155  0.142  0.160  0.000  0.000  0.000  0.000  0.000  1.000  0.000
Ind3   0.143  0.132  0.160  0.142  0.000  0.000  0.000  0.000  0.750  0.000  0.750
Ind4   0.143  0.112  0.130  0.130  0.000  0.000  0.000  0.000  0.250  0.000  0.000
Ind5   0.143  0.080  0.142  0.142  0.000  0.000  0.000  0.000  0.000  0.000  0.000
Ind6   0.143  0.168  0.142  0.142  0.000  1.000  0.000  0.000  0.000  0.000  0.250
Ind7   0.143  0.168  0.142  0.142  0.000  0.000  0.000  0.000  0.000  0.000  0.000

Table 4. Limit Supermatrix. All the columns of the limit supermatrix are identical; the common column is:

Cr1 0.067, Cr2 0.129, Cr3 0.159, Cr4 0.051, Ind1 0.063, Ind2 0.127, Ind3 0.101, Ind4 0.058, Ind5 0.050, Ind6 0.132, Ind7 0.061.
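Again purely as an illustrative check (reusing the weight_supermatrix and limit_priorities sketches of Sec. 4.1.3), the unweighted supermatrix of Table 3 can be weighted with the clusters treated as equally important, as stated in the text, and raised to its limit; the resulting overall priorities should match the common column of Table 4 up to rounding:

```python
import numpy as np

# Unweighted supermatrix of Table 3 (order: Cr1..Cr4, Ind1..Ind7).
W = np.array([
    [0.000, 0.000, 0.000, 0.000, 0.122, 0.106, 0.240, 0.241, 0.127, 0.121, 0.105],
    [0.000, 0.000, 0.000, 0.000, 0.269, 0.372, 0.288, 0.309, 0.347, 0.347, 0.333],
    [0.000, 0.000, 0.000, 0.000, 0.507, 0.372, 0.388, 0.309, 0.383, 0.377, 0.408],
    [0.000, 0.000, 0.000, 0.000, 0.107, 0.150, 0.083, 0.142, 0.142, 0.155, 0.154],
    [0.143, 0.183, 0.142, 0.142, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.143, 0.155, 0.142, 0.160, 0.000, 0.000, 0.000, 0.000, 0.000, 1.000, 0.000],
    [0.143, 0.132, 0.160, 0.142, 0.000, 0.000, 0.000, 0.000, 0.750, 0.000, 0.750],
    [0.143, 0.112, 0.130, 0.130, 0.000, 0.000, 0.000, 0.000, 0.250, 0.000, 0.000],
    [0.143, 0.080, 0.142, 0.142, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.143, 0.168, 0.142, 0.142, 0.000, 1.000, 0.000, 0.000, 0.000, 0.000, 0.250],
    [0.143, 0.168, 0.142, 0.142, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
])

clusters = [slice(0, 4), slice(4, 11)]  # criteria block, indicators block
overall = limit_priorities(weight_supermatrix(W, clusters))
# overall should be close to
# (0.067, 0.129, 0.159, 0.051, 0.063, 0.127, 0.101, 0.058, 0.050, 0.132, 0.061)
```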
Table 5. Performance Indicators Ranking.

Performance indicators    Priorities    Ranking
[Ind.1]                     0.0631         4
[Ind.2]                     0.1267         2
[Ind.3]                     0.1013         3
[Ind.4]                     0.0579         6
[Ind.5]                     0.0500         7
[Ind.6]                     0.1323         1
[Ind.7]                     0.0613         5
The priorities for all the indicators can be read from any column of the limit supermatrix. Table 5 shows the ranking of the performance indicators identified to measure the effectiveness and efficiency of the production process. On the basis of the priorities, the following KPIs have emerged: "Working minutes per employee/estimated minutes," "Employees' expenses/turnover," and "Number of claims filed during the process." There was general consensus among the managers around the results of the decision-making process. The selected indicators represent the most basic and important dimensions that the managers have estimated to be valuable as a basis for tracking future progress and for assessing the current baseline performance of the process. Obviously, the appropriateness of this set of performance indicators over time will depend upon how the manufacturing process evolves, as well as on how the information needs of internal and external stakeholders change.

6. Conclusions and Outlook
This chapter describes a multicriteria evaluation model, based on the ANP, capable of guiding managers in the selection of the most meaningful performance indicators from among those defined in an initial list of potential indicators. The ANP-based model takes into consideration that the priorities of the performance indicators depend both on a set of important criteria and on the feedback relationships between the criteria and the performance indicators, as well as among the indicators themselves. This is an issue that decision makers, when choosing performance indicators, often consider only implicitly, without the possibility of addressing it through a rigorous approach. The use of the ANP guides decision makers toward the best choice in a rigorous way that matches common sense about the above-mentioned dependencies. The criteria used in the model focus on the requirements of quality and usefulness of the information embedded in the performance indicators. However, for future
studies, a variation of either the criteria or the dependencies in the proposed network model can also be made to better fit the organization that applies the model and/or specific aspects of the decision problem. In fact, the criteria are guidelines for establishing a preference among the indicators that fit the needs and circumstances of an organizational system. In this regard, particular attention should be paid, in enhancing the model, to the consequent increase in the number of pairwise comparison questions required for an evaluation. From an operative perspective, even if the use of software and group decision support systems lowers the barriers to implementing the ANP, it is always important to take into account the time the ANP takes to obtain the results, the effort involved in making the judgments, and the relevance as well as the accuracy of the results. In particular, the application of the model has revealed that it is particularly important to maintain the decision makers' capability to express judgments on the basis of a holistic view of the decision problem, while limiting the number of comparison questions. Put into practice, the ANP method may appear complicated and time-consuming. However, the ANP is a valuable tool for management, as it allows for participative inputs obtained from multiple evaluators, setting the priorities for a panel of performance indicators and bringing diverse belief systems together in a consistent and organized way. This is particularly valuable when selecting performance indicators for all the performance dimensions of a company, by comparing indicators across many departments. Concerning the results of the ANP application, it seems important to stress that, as for any decision model, the final values that are determined should be critically analyzed. Obviously, when managers make decisions based on priorities and degrees of importance with which they have had experience, the results of the ANP are particularly reliable. Additionally, with reference to the analyzed case example, to support continuous improvement, a periodic reconsideration of the selected indicators should be performed. A further suggestion for future studies concerns the consideration of the fuzziness of decision makers' judgments. The proposed ANP-based model ignores fuzziness; therefore, a further development of the research should be to improve the model by introducing the concept of fuzzy sets. The fuzzy extension should make it possible to address the issue of subjectivity, particularly the fuzziness of judgment. Finally, the model is seen as open for future extension and development, especially on the basis of the results of a more widespread use in several cases. In conclusion, this chapter describes a model that can be used either for the selection or the justification of a set of performance indicators and provides an exploratory evaluation of an analytical approach to managerial decision making in relation to performance measurement.
References

Accounting Standards Board (1991). Qualitative Characteristics of Financial Information. London: ASB.
Ahsan, MK and J Bartlema (2004). Monitoring healthcare performance by analytic hierarchy process: A developing-country perspective. International Transactions in Operational Research, 11(4), 465–478.
Ballou, D, R Wang, H Pazer and GK Tayi (1998). Modelling information manufacturing systems to determine information product quality. Management Science, 44(4), 462–484.
Bititci, US, P Suwignjo and AS Carrie (2001). Strategy management through quantitative performance measurement systems. International Journal of Production Economics, 69(1), 15–22.
Chan, YL and BE Lynn (1991). Performance evaluation and the analytic hierarchy process. The Journal of Management Accounting Research, 3, 57–87.
Chavis, B, TW Lin and C Ko (1996). Using multiple criteria decision support software to teach divisional performance evaluation. Journal of Accounting and Computers, XII, 11–19.
Cheng, EWL and L Heng (2001). Analytic hierarchy process: An approach to determine measures for business performance. Measuring Business Excellence, 5(3), 30–37.
Erdogmus, S, M Kapanoglu and E Koc (2005). Evaluating high-tech alternatives by using analytic network process with BOCR and multiactors. Evaluation and Program Planning, 28(4), 391–399.
European Foundation for Quality Management (1999). EFQM Excellence Model.
Financial Accounting Standards Board (1980). Qualitative characteristics of accounting information. Statement of Financial Accounting Concepts No. 2, May. Stamford: Financial Accounting Standards Board.
Hämäläinen, RP and TO Seppäläinen (1986). The analytic network process in energy policy planning. Socio-Economic Planning Sciences, 20(6), 399–405.
Heeseok, L, K Wikil and H Ingoo (1995). Developing a business performance evaluation system: An analytic hierarchical model. The Engineering Economist, 40(4), 343–357.
Holzer, M (1989). Public service: Present problems, future prospects. International Journal of Public Administration, 12(4), 585–593.
Kaplan, RS and DP Norton (1996). The Balanced Scorecard — Translating Strategy into Action. Boston, MA: Harvard Business School Press.
Kaplan, RS and DP Norton (2000). Having trouble with your strategy? Then map it. Harvard Business Review, 78(5), 167–176.
Lange, I, O Schneider, M Schnetzler and L Jones (2007). Understanding the interdependences among performance indicators in the domain of industrial services. In Advances in Production Management Systems, J Olhager and F Persson (eds.), pp. 379–386. Boston: Springer.
Lee, JW and SH Kim (2001). An integrated approach for interdependent information system project selection. International Journal of Project Management, 19(2), 111–118.
MBQA (2007). Criteria for Performance Excellence. http://www.quality.nist.gov/PDFfiles/2007_Business_Nonprofit_Criteria.pdf
Meade, LM and A Presley (2002). R and D project selection using the analytic network process. IEEE Transactions on Engineering Management, 49(1), 59–66.
Meade, L and J Sarkis (1998). Strategic analysis of logistics and supply chain management systems using the analytical network process. Transportation Research Part E: Logistics and Transportation Review, 34(3), 201–215.
Meade, LM and J Sarkis (1999). Analyzing organizational project alternatives for agile manufacturing processes — an analytical network approach. International Journal of Production Research, 37(2), 241–261.
Neely, AD (1998a). Performance Measurement: Why, What and How. London: Economist Books.
Neely, AD (1998b). Three models of measurement: Theory and practice. International Journal of Business Performance Management, 1(1), 47–64.
Neely, AD (2005). The evolution of performance measurement research: Developments in the last decade and a research agenda for the next. International Journal of Operations & Production Management, 25(12), 1264–1277.
Neely, AD, C Adams and M Kennerley (2000). The Performance Prism: The Scorecard for Measuring and Managing Business Success. London: Financial Times/Prentice Hall.
Neely, AD and J Wilson (1992). Measuring product goal congruence: An exploratory study. International Journal of Operations & Production Management, 12(4), 45–52.
Neely, AD, M Gregory and K Platts (1995). Performance measurement system design. International Journal of Operations & Production Management, 15(4), 80–116.
Niven, PR (2006). Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results. Hoboken, NJ: John Wiley & Sons.
Partovi, FY (2001). An analytic model to quantify strategic service vision. International Journal of Service Industry Management, 12(5), 476–499.
Saaty, TL (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
Saaty, TL (1988). Decision Making: The Analytic Network Process. Pittsburgh: RWS Publications.
Saaty, TL (1996). Decision Making with Dependence and Feedback: The Analytic Network Process. Pittsburgh: RWS Publications.
Saaty, TL (2004). Rank from comparisons and from ratings in the analytic hierarchy/network processes. European Journal of Operational Research, 168(2), 557–570.
Saaty, T and E Erdener (1979). A new approach to performance measurement — the analytic hierarchy process. Design Methods and Theories, 13(2), 64–72.
Saaty, TL and LG Vargas (1998). Diagnosis with dependent symptoms: Bayes theorem and the analytic hierarchy process. Operations Research, 46(4), 491–502.
Sarkis, J (2003). Quantitative models for performance measurement systems — alternate considerations. International Journal of Production Economics, 86(1), 81–90.
Suwignjo, P, US Bititci and AS Carrie (2000). Quantitative models for performance measurement system. International Journal of Production Economics, 64(1–3), 231–241.
Theriou, GN and DI Maditinos (2007). Is the BSC really a new performance measurement system? In 5th International Conference on Accounting and Finance in Transition (ICAFT) Conference Proceedings. London: Greenwich University Press.
USAID Center for Development Information and Evaluation (1996). Selecting performance indicators. Performance Monitoring and Evaluation TIPS, 6, 1–4.
Wang, RY and DM Strong (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5–34.
Biographical Notes
Daniela Carlucci is an assistant professor at the University of Basilicata, Italy. She holds a PhD in Business Management from the University of the Republic of San Marino and a Master's degree in SME Management. Since obtaining her PhD, she has been a visiting researcher at the Center for Business Performance, Cranfield School of Management. Currently, she works at the Center for Value Management, University of Basilicata. She is also a visiting professor at the Tampere University of Technology, Department of Business Information Management and Logistics, and a member of the New Club of Paris. Daniela's research, teaching, and consulting focus on knowledge management; knowledge assets and intellectual capital assessment and management; innovation; business performance measurement and management; decision making in organizations; and decision support methods. She is actively involved in relevant research and consultancy activities and has worked in research projects involving national organizations and institutions. Moreover, Daniela is systematically engaged in teaching activities in public and private institutions. She has authored and coauthored several publications, including book chapters, articles, research reports, and white papers, on a range of research topics particularly embracing knowledge assets and intellectual capital management. Daniela is a regular speaker at national and international conferences and the author of various academic and practitioner papers.

Professor Giovanni Schiuma is Scientific Director of the Center for Value Management, LIEG, at the University of Basilicata, Italy, and a visiting research fellow with the Center for Business Performance at Cranfield School of Management. Giovanni has been researching, teaching, and consulting in the fields of knowledge management, intellectual capital assessment, and business performance measurement and management since the beginning of the 1990s. He has authored over 80 books, articles, and white papers on current subjects such as knowledge management, intellectual capital, and performance management. He works with the local Italian government on performance measurement and is recognized as a leading thinker on knowledge assets and intellectual capital management. Giovanni is actively involved in applied research and has worked in research projects involving national and international organizations, such as Industrie Natuzzi S.p.A., Accenture, Shell, Lloyds TSB, McDonald's, and so on. He is a regular keynote presenter at international conferences and has teaching and consultancy experience across Europe on knowledge and innovation management as well as on performance measurement and management. He specializes particularly in the design and implementation of knowledge management initiatives and performance measurement systems — including balanced scorecards — designed to support innovative change in organizations.
Part IV Strategic Business Information Systems
Chapter 22
The Use of Information Technology in Small Industrial Companies in Latin America — The Case of the Interior of São Paulo, Brazil

OTÁVIO JOSÉ DE OLIVEIRA* and GUILHERME FONTANA†
Department of Production Engineering, São Paulo State University, UNESP, Av. Eng. Luiz Edmundo Carrijo Coube, n. 14-01, Bauru, SP, 17033-360, Brazil
*[email protected]
†[email protected]
This chapter presents and briefly analyzes the results of a survey conducted by members of the Department of Production Engineering at São Paulo State University (UNESP). The survey analyzed the use of, and the degree of satisfaction with, computer resources in small companies in the Bauru region, São Paulo, Brazil. Sixty-eight organizations registered at the Center of Industries of the State of São Paulo (Bauru regional office) were surveyed by means of a questionnaire sent out by e-mail. The main focus of the study was on the production and supplies planning and control areas. Analysis of the data and of the business owners' reports revealed the urgent need to develop low-cost computer programs, especially developed for management activities in small companies, providing increased competitive capacity.

Keywords: Information technology; information systems; small and medium industries; Latin America; São Paulo, Brazil.
1. Introduction
The growing competitiveness of the global corporate environment has continuously challenged administrators. The growth of the global economy, the quick transformation of industrial society into an information and knowledge society, and changes in labor relations, among other factors, are demanding drastic changes in how companies are managed, making information a fundamental tool not only for the growth, but also for the survival, of organizations. In Brazil's current situation, marked by profound transformations in the productive structure, small companies maintain a very significant presence in relation to job and income generation, because they have contributed toward the decentralization of income and have absorbed large migratory contingents. Investments made in information technology (IT) can be as decisive for a company's profitability as the focus on the corporate business itself.
IT plays a fundamental role in any organizational activity at the beginning of this century. This movement is leading to important transformations in the business environment around the world, directly affecting the formulation of strategies and businesses. IT is understood as the technological set that involves computers, software, public and private electronic communication networks and digital service networks, telecommunication technologies, data transmission protocols, and other services (Ströher, 2003). It is estimated that the use of computers in small companies has grown, over the last five years, from 30% to 80%, depending on the location and nature of the business (Prates, 2003). The advance of IT is accelerating, and there is a need to consolidate information so that anyone can have information whenever necessary (Hoffmann, 2002). Due to the rapid development of knowledge and IT, business environments have become much more complicated. To cope with the ensuing complications, enterprises ought to innovate continuously; otherwise, it will be very difficult for them to survive in the marketplace. Hence, many enterprises have applied IT to cut production costs, introduce innovations in products and services, promote growth, develop alliances, lock in customers and suppliers, create switching costs, and raise barriers to entry. In other words, IT can help a firm aiming to gain a competitive advantage. In addition, many studies have argued that business value comes mainly from intangible assets, such as knowledge. Thus, knowledgeable workers will be able to replace clerical workers as the new mainstream of manpower resources, a field in which the development of IT is the major force (Tseng, 2008). Information systems can also assist managers and workers to analyze problems, visualize complex forms, and create new products. From an enterprise perspective, information systems are organized and managed solutions, based on IT, that respond to the challenges presented by the company's environment. Thus, if a company, by adopting a new systems development methodology, achieves fewer programming errors and, therefore, better quality and precision of results, it gains increased efficiency. Effectiveness in the development of systems consists of developing systems that are adjusted to the needs of the users, the business area, and the company, that are consistent with the global strategy of the corporation, that contribute to improving the activities and functions of their users, and that bring gains in competitiveness and productivity for the company. A typical case of effective information system development was the implementation of online airline ticket reservation systems, which provided greater competitive gains for the companies involved and changed how these companies operate (O'Brien, 2004). It is understood that data are facts which, taken alone, have no meaning. However, when aggregated and analyzed in a context, following some logic, they compose the required information. Figure 1 provides a representation of this process.
Figure 1. Information processing chart. (Adapted from Andreza and Dorival, 2002.)
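As a loose illustration of this data-to-information chain, the following minimal sketch aggregates isolated records into a summary that can support a decision. The records, product names, and numbers are invented for the example; they are not survey data.

    # Hypothetical illustration: isolated sales records are "data";
    # aggregated and read in context they become "information".
    from collections import defaultdict

    sales = [  # raw data: (month, product, units sold) -- invented values
        ("Jan", "chairs", 120), ("Jan", "tables", 40),
        ("Feb", "chairs", 90),  ("Feb", "tables", 55),
    ]

    by_product_month = defaultdict(dict)
    for month, product, units in sales:
        by_product_month[product][month] = units

    # "Information" for a decision: which line is growing, which is shrinking?
    for product, months in sorted(by_product_month.items()):
        jan, feb = months["Jan"], months["Feb"]
        trend = "up" if feb > jan else "down"
        print(f"{product}: Jan {jan}, Feb {feb} -> trend {trend}")

The point is only that the same records, placed side by side and compared, answer a question that no single record could.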
On the other hand, information is characterized as an important and indispensable resource for the company and its administration. With it, uncertainty is reduced and more efficient and, above all, higher-quality decisions become possible. Unquestionably, the success of an organization depends fundamentally on the quality of the decisions made by its administrators. Useful information must have the following characteristics:

• Adaptable to the specific type of task to be performed
• Adjustable to time
• Accurate
• Understandable to different people, and
• Directed to the person who will take the action
As organizations are currently immersed in a highly competitive, globalized, and turbulent market, they need timely information and personalized knowledge to help them manage both intelligently and efficiently (Rezende, 2002). Small companies play a fundamental role in the Brazilian economy: they represent 98% of the country's companies, employ 67% of the labor force, make up 62% of exporting companies, and contribute 20% of gross domestic product. Their main characteristics are: they make low-unit-price products and services available; most of their sales go to the final consumer; they satisfy basic needs of the population (clothing, footwear, furniture, housing, construction, and remodeling); they have very low production scales; and they use public-domain technologies (SEBRAE, 2003). They make an impressive contribution to the national development process, generating opportunities that absorb a large portion of the workforce and encouraging business development as a whole. In Brazil, between 1998 and 2004, 96 out of every 100 new jobs were created in micro- and small businesses (RAIS, 2004; Vieira, 2002).

The Bauru region is located in the midwest of the state of São Paulo (the largest state in Brazil in economic terms) and has great political and economic influence in the interior of the state. It has approximately 350,000 inhabitants and a diversified industrial park with about 500 formal industries, 207 of which are
registered at the Center of Industries of the State of São Paulo (CIESP), and 107 of which are classified as small businesses. Bauru is ranked 145th in exports among the 5,562 townships in Brazil, totaling US$ 82,452,538.00 from January to May 2008. Considering only the state of São Paulo, the city is ranked 49th among the 645 existing townships. The main destinations for these foreign trade products are Bolivia, South Africa, Saudi Arabia, Paraguay, the Philippines, Argentina, the United States, Spain, Uruguay, and Lebanon. Bolivia alone consumes 33% of Bauru's production. During the first quarter of 2008, imports totaled US$ 2,688,657.00. Among Bauru's main international suppliers, those that stand out are Argentina (19.02%), followed by the United States (13.84%), China (11.10%), and Germany (10.44%) (Ottoboni, 2008).

This chapter presents, and briefly analyzes, the results of a survey conducted by members of the Department of Production Engineering at São Paulo State University (UNESP). The survey analyzed the use of, and degree of satisfaction with, computer resources in small companies in the Bauru region, São Paulo, Brazil. Sixty-eight organizations registered at the Center of Industries of the State of São Paulo (Bauru regional office) were surveyed by means of a questionnaire sent by e-mail. The main focus of the study was the production and supplies planning and control areas. A short theoretical review follows, after which we present and discuss the main results of the survey.

2. Informatics

Informatics is used extensively to represent the use of computer resources, including IT and information systems (IS), in the most diverse areas, linked or not to organizations. The term informatics was first used by Nora and Minc in a report published in 1978. In this report, the impact IT could have on the French economy was presented, and general aspects were discussed in order to dimension and plan its dissemination throughout Europe. The report emphasizes much more than the simple advance in the use of computers across diverse sectors and activities. It also highlights the idea of the transformation of society into an "information society," that is, the idea that informatics leads to a technological as well as a cultural change (Anderson, 2002).

Informatics directly implies the use of computers and other IT resources in processes previously performed manually or mechanically. However, it is worth recalling that the expression also implies a continuous process of extension (scope of computerized activities) and intensification (quantity and quality of this use). Therefore, it is a systematic, gradual, and growing process of planning and using IT in every function of the organization (Weissbach, 2003).
2.1. Informatics in Society, Organizations, and Individuals

The transformation the world is currently undergoing has been portrayed by many authors as a passage from an industrial society to an information or knowledge society. Information and knowledge are gradually becoming the most important generators of value and wealth, with importance equal to or greater than that of the traditional means of production (natural resources, capital, and equipment). At the core of this issue is the propagation of IT at every level of society, and one of its main aspects is the need for technological infrastructure. The lack of infrastructure technologies such as the highway-railroad network, the electric power transmission grid, data transmission, etc., can jeopardize the competitive positioning of a country as a whole, and consequently its production sectors can suffer greatly (Carr, 2003).

The use of IT in companies is increasingly related to operational, management, and planning activities, and to activities involving relationships with customers, vendors, government, and consumers. IT provides companies a new way to carry out their activities, permitting the development of products, services, and organizational forms, providing access to new markets, and enabling more innovative ways to deliver better services in less time.

For individuals, there is a growing need for knowledge and skills linked to informatics, and consequently IT, in both professional and personal activities. Practically every company, even the smallest, already uses some sort of informatics resource, and this demands knowledge of and skill with these resources. Even everyday personal activities involve intense contact with informatics and IT resources, for example, in using bank services or obtaining information and news over the Internet.

2.2. Digital Company

Digital companies are those where practically every business process and relationship with partners, customers, and employees is carried out digitally, and where the main corporate resources are managed electronically by digital means (Laudon and Laudon, 2001). Lucas (2002) introduces the "T-Form" (technology-form) organization concept: an organization that intensively employs IT to become highly efficient and effective. Organizations that make intensive use of IT have the following characteristics:

• "Flat" organizational structure due to the intense use of collaboration tools
• High level of task delegation and trust between subordinates and managers
• Technological infrastructure comprised of internally and externally connected computers and networks
• Strong and empowered IT management
• Minimal use of paper in all activities
• Formation of temporary task forces for projects involving elements of the organization itself, vendors, and customers
• Use of remote labor

Digital companies perceive gains in reliability, performance, and costs after developing, installing, and managing IT resources. Actually obtaining these benefits, however, depends on applying these concepts coherently in corporate business processes, respecting the operational knowledge of the workforce and the current culture of the organization. It must also be considered that the more widely an organization's IT resources are deployed, the greater the probability of achieving results in its individual processes, in the interconnections between them, and thus in the company as a whole.
2.3. Advantages and Disadvantages of Informatics in Small Companies

Informatics makes it possible for small companies to improve the collection and treatment of information. This represents an increase in efficiency and improves their competitiveness, thus increasing their profitability (Santos, 1998; Zimmerer and Scarborough, 1994). According to Zimmerer and Scarborough (1994), some of the advantages that can be pointed out are the following:

• It makes more information available for decision-making.
• It allows automation of routine tasks.
• It improves the internal control of operations.
• It improves customer assistance.
• It increases the capacity to recognize problems earlier.
• It helps the manager to test some decisions before putting them into practice.
• It improves the production process.
• It increases productivity and competitiveness.
Informatics brings many advantages to these companies; however, the process can also bring disadvantages (Zimmerer and Scarborough, 1994), some of which are the following:

• Costs that strain the financial capacity of small and medium enterprises (SMEs)
• Rapid equipment and software obsolescence
• Difficulty in feeding the system with the correct information
• Impersonal treatment that the computer can introduce into customer relations
• Employee reluctance stemming from fear of job loss
• Health risks for employees who use the computer for many hours
3. Small Companies

Small companies are very important for the Brazilian economy, even deserving special attention in the country's constitution, as can be seen in Article 179: "The Union, States, Federal District and Townships will provide microcompanies and small companies, as defined by law, differentiated legal treatment aimed at encouraging them through the simplification of their administrative, tax, social security and credit obligations or the elimination or reduction of these by law" (Constituição da República Federativa do Brasil, 1988).
There are several ways to classify companies in relation to their size: number of employees, net equity, earnings, etc. The effort to characterize company size mainly results from the need to apply incentives that help strengthen and expand such companies. To avoid distortions, it is important for the classification system to agree with the objective it serves, such as promotion or research, and to take into account national, regional, and sectoral differences in the group of companies it intends to classify (Gasperini, 2000). From an academic point of view, the definition and establishment of common references (standards), including size classifications, are fundamental for carrying out comparative studies and analyses between companies. These classifications reduce the spectrum of analysis and make it possible to compare companies with more similar characteristics, which brings considerable benefits by virtue of the greater homogeneity of the studied "universe."

In Brazil, a small company must have 10–49 employees in commerce and services, or 20–99 workers in industry, with gross annual earnings between US$ 147,000.00 and US$ 723,000.00. The participation of small companies in Brazil's gross domestic product represents only 20%, whereas in more developed countries this share exceeds 50%. Together with microcompanies, small companies hold a significant share of the Brazilian economy and of other countries' economies, accounting, in Brazil, for 99.2% of formal companies, 57.2% of jobs, and 26% of wages (SEBRAE, 2005). The data in Table 1 show, by size, the contribution of companies in the Brazilian context; note that the data cover formal companies only.

A representative number of small-company owners do not perceive the necessity or the benefits that IT is capable of providing their organizations. A survey carried out by SEBRAE (the Brazilian support service for micro- and small companies) in 2003 showed that only 47% of MPEs (micro- and small companies) used microcomputers, while 53% did not. When asked why they did not use microcomputers, 64% answered that they did not see a need for, or benefit from, their use; 44% said they required too high an investment; 10% did not know how to use a computer; 6% did not possess qualified employees; and 2% gave other reasons. This survey was carried
Table 1. Company Contribution in the Brazilian Economy in Accordance with Size.

    Size             Number of companies   Participation (%)   Total employed persons   Participation (%) (wage bill)
    Microcompany     4,605,607             93.6                9,967,201                10.3
    Small company    274,009               5.6                 5,879,795                15.7
    Middle company   23,652                0.5                 2,700,130                12.7
    Large company    15,102                0.3                 9,104,475                61.3

Source: SEBRAE (2005).
out with 585 MPEs that did not possess microcomputers and allowed multiple answers (SEBRAE, 2003).

Besides a small number of employees, small Brazilian companies also have the following characteristics (Oliveira, 2006):

• Simple organizational structure, with few hierarchical levels and a great concentration of authority.
• They occupy a well-defined space in the market in which they operate.
• They have locational flexibility, spreading throughout the country and playing an important role in developing the interior.
• They have greater work intensity.
• The owner and the administration are highly interdependent. In general, there is no separation between private and business matters, and it is common for the business owner to use the same bank account as his company.
• There is an absolute predominance of domestic private capital.
• In general, the company belongs to an individual or a small group of people.
• It is administered by the business owner(s) in an independent manner; even when management is professionalized, the owners remain the main decision makers.
• Their capital is basically financed by the business owners.
• Their area of operations is generally limited to their location or, at most, the region where they are located.
• Their productive activity does not occupy a prominent or dominant position in the market.

In general, these characteristics make the development and use of IT resources in small companies more difficult.

4. Scientific Methodology of Research

A survey research methodology was used in this study. The survey analyzed the use of, and degree of satisfaction with, computer resources in small companies in the Bauru region, São Paulo, Brazil. Sixty-eight organizations registered at the Center of
Industries of the State of São Paulo (Bauru regional office) were surveyed by means of a questionnaire sent by e-mail.

There have been great advances in the techniques and technologies used in survey research, from systematic sampling methods to enhanced questionnaire design and computerized data analysis. The field of survey research has become much more scientific (Evans and Mathur, 2005). Data collection using traditional surveys can be time-consuming from a planning, design, and testing perspective. Moreover, even the most meticulously orchestrated survey can frustrate the researcher when the collected data are nonrepresentative and/or minimal due to a poor response rate. In general, response rates to mail questionnaires have been shown to be poor when the topic is of low interest to the population being surveyed, while questionnaires that are too long have also been shown to affect returns (Sellitto, 2006). We consider the 63.55% response rate obtained in this research (68 responses from the 107 small industrial companies registered at the CIESP Bauru regional office) satisfactory and trustworthy.

5. Presentation and Analysis of Survey Results

Data collected from the 68 small companies in the industrial segment of the Bauru region, SP, are presented in this section.

5.1. General Aspects of the Small Companies Studied

The survey sought to map the IT resources available in the studied companies (network security, types and quantities of computer equipment, electronic business, Web site, online sales, online purchases, intranet, etc.) and how they are managed. It also sought to verify the degree of use, direct user satisfaction, and the results of applying IT in the commercial, production, stock, supplies, finance, and general administration areas/processes.

The surveyed companies are distributed over several branches of activity. This is explained by the city of Bauru's strong presence on the national economic scene as well as the representativeness of its exports. Figure 2 shows the distribution of surveyed companies by sector of operation.

All of the surveyed organizations have access to broadband Internet, and decisions on IT investments are concentrated in the hands of the owners. More than 50% of the companies control electronic access (intranet and Internet) using log-ins and passwords, all make use of protection software (antivirus), and more than half back up data regularly. Here we see a great deficiency in how the surveyed companies manage their information systems: the lack of regular backups puts important information at risk which, if lost, cannot be easily recovered and will cause serious problems.

Outsourcing is defined as the transfer of activities to specialized vendors, holders of their own modern technology, who have the outsourced activity as their
Figure 2. Company activity profile.

Table 2. Informatics Services Outsourced by Surveyed Companies.

    Activity                           Do not outsource (%)   Partially outsource (%)   Fully outsource (%)
    Equipment maintenance              12.0                   12.0                      76.0
    Server and network configuration   12.0                   –                         88.0
    Systems development                52.0                   –                         48.0
    User support                       38.0                   12.0                      50.0
    User training                      24.0                   24.0                      52.0
core activity, freeing the contracting party to concentrate its managerial efforts on its main business and permitting improvements in quality and productivity, cost reductions, and increased competitiveness (SEBRAE, 2003). At present, we see an intense transfer of informatics services to third parties, a consequence of the search for resource optimization and for higher-quality services. Table 2 shows the main services the surveyed companies outsource.

We see that IT services are for the most part outsourced in the sample of surveyed companies. This was expected, not only because of the current pro-outsourcing movement in informatics, but mainly because the companies are small and therefore have to concentrate their few resources on their core activity. Figure 3 shows the degree of satisfaction the business owners have with outsourced informatics services. The degree of satisfaction was measured for the following items: equipment maintenance, server and network configuration, systems development, user support, and user training.
Figure 3. Degree of satisfaction business owners have with outsourced informatics services.
The business owners are moderately satisfied with the outsourced services, although many companies feel their partners could improve service in some respects, especially service execution time: the surveyed companies have restricted financial resources, do not have enough spare equipment, and in some cases must stop their activities while waiting. Of the evaluated items, "user support" obtained the highest score. This is partially explained by the possibility outsourced companies have of providing this service remotely (by phone, for example), which greatly speeds up service.

5.2. Use of IT in Production and Stock Planning and Control

Planning has been a basic management concept since the publication of Taylor's and Fayol's works on modern administration at the beginning of the 20th century. Although its importance rose as it became tied to strategy, it was also through strategy that it attracted more criticism and lost popularity. Sharing space with the critics of rationalism (authors of the human relations and behaviorist schools), defenders of the concept have kept considerable space in academic publications and the corporate media. Thus, despite having opponents, planning remains a relevant subject for administrators (Andrade, 2007). To plan means that administrators think ahead about their objectives and actions, and that their actions are based on some method, plan, or logic, not on guesswork (Andrade, 2007). Terence (2002) records the following characteristics that form the planning concept:
• It is the definition of a desired future and of efficient ways to reach it.
• It is something done before acting; that is, it is anticipated decision-making, which becomes necessary when achieving a desired future state involves a set of interdependent decisions and actions.
• It means the organization's development of a program for the accomplishment of objectives and goals, involving the choice of a course of action, the anticipated decision of what must be done, and the determination of when the action must be carried out.
• It is the process of establishing objectives and the courses of action implemented to reach them.

During the 1950s, industry recognized the importance of computers for improving productivity and began to use them, although in an incipient manner. Years later, they were being applied in scientific and commercial processing, although only for specific purposes. Decades later, they began to be used in a general manner, especially in business organizations. The informatics process has been broadly defined and studied and consists of three main stages (Rodriguez and Ferrante, 2000):

• Structuring: promoting the organization and rationalization of administrative processes; prior organization regardless of the computing aspect.
• Automation: the computer technology installation process; acquisition of software, packages, and applications.
• Integration: the process of maximizing resource use in order to minimize operational costs; systems that used to function in isolation become integrated.

From these stages, it is possible to see the close interrelationship between informatics processes and IT, in a general sense, and production planning and control (PCP) activities. PCP requires reconciling production and demand in terms of volume, time, and quality. To reconcile volume and time, three different activities are performed (Slack et al., 2002), illustrated in the sketch that follows this list:

• Loading: determining the volume a productive operation can handle; this is the amount of work allocated to a production center.
• Sequencing: determining the priorities of the tasks that will be carried out.
• Programming (scheduling): deciding the start and end times of each task.

Figure 4 shows a schematic representation of the ample functionality and the complex structure and interrelationships involved in the PCP process.
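As a rough illustration of these three activities, consider the following minimal sketch. The job data, the single work center, and the earliest-due-date rule are all invented for the example; they are not taken from the chapter or the survey.

    # Minimal PCP sketch with hypothetical jobs on one work center:
    # loading checks capacity, sequencing applies a priority rule
    # (earliest due date), and programming assigns start/end times.
    jobs = [  # (name, processing hours, due in hours) -- invented values
        ("J1", 5, 20), ("J2", 3, 8), ("J3", 6, 30), ("J4", 2, 10),
    ]
    capacity = 40  # hours available in the period (loading limit)

    total_load = sum(hours for _, hours, _ in jobs)
    assert total_load <= capacity, "work center is overloaded"

    sequence = sorted(jobs, key=lambda job: job[2])  # earliest due date first

    clock = 0
    for name, hours, due in sequence:
        start, clock = clock, clock + hours
        flag = " LATE" if clock > due else ""
        print(f"{name}: start {start}h, end {clock}h (due {due}h){flag}")

In practice, PCP software handles many work centers, routings, and constraints at once; the point here is only to make the loading/sequencing/programming distinction concrete.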
Figure 4. Schematic representation of the PCP. (Source: Moura Junior, 2001.)
It is possible to see that "stock control" is mainly done using the company's own system, with little use of the Office package. This is probably due to the poor fit of this package to specific stock management activities. Another alarming piece of information is that 25% of the surveyed companies control their stock manually. On the other hand, this situation indicates a great opportunity to optimize this process using informatics.

In the "structure registry or product composition" item, we see that more than half of the companies use other software; that is, very few do this manually or use systems developed by the company itself. Curiously, 25% of the companies indicated that they use Office for this activity, but they gave no details about how.

In relation to "production planning," the answers indicated a great concentration in the manual performance of this activity, followed by the use of the surveyed company's own software. Although this response pattern seems strange, it can be understood as a result of the uncomplicated planning systems at these companies, whose production volumes are low. Even for them, however, informatics resources are promising tools for reducing time and labor and for increasing the precision of projections.

In "control of production orders," a more uniform distribution of answers can be observed, with a slight concentration in the use of systems developed by the company itself. This distribution can be explained in part by the low specificity and complexity of this task in small companies, where it can be satisfied by more popular software or by simple, custom-made systems. The same logic applies to "control of production costs."

"Quality control" activities, as the survey results demonstrate, are mainly performed with support from the Office package, as are the other analyses.
When one takes into account that this software was not specifically developed to support this type of activity, we believe that the development and use of more appropriate IT tools could increase the competitiveness of such companies and even contribute to regional development in industrial and economic terms, as the products would be more reliable and of higher quality.

As very few companies in the surveyed sample have ISO 9001 certification (7.35%), it was expected that activities related to "equipment control and maintenance" would not be systematically carried out, and that is what the survey revealed: 25% of the companies exercise no formal control of equipment maintenance and nearly 40% use Office to do so.

Figure 5 represents the average distribution of the items evaluated in relation to the use of informatics resources in PCP. In general, we can conclude that the surveyed small companies opt for Office to the detriment of more specific software. Several factors can explain this fact, such as resource limitations and a lack of technical knowledge among those who run the business. However, this situation makes it possible to project a substantial increase in these companies' competitiveness if joint action is taken by the local government, business owner associations, unions, etc. to develop a collective informatics plan.

During the survey, the companies were also asked about the degree of contribution and benefit of IT resources to some of their processes and activities, as can be seen in Table 3. A macroanalysis of these data shows that user perception of IT contributions to PCP processes is average.

It is important to underscore that 63.2% of those surveyed believe IT's contribution to the sales process is very low. This is partially explained by the fact that
Figure 5. Average use of informatics resources in PCP activities.
Table 3. Degree of Contribution of IT Resources with an Emphasis on PCP.

    Activity (IT contribution)                   Low (%)   Average (%)   High (%)
    Improve the quality of processes             36.8      38.2          25
    Reduce the cost of new products              38.2      50            11.8
    Improve the product quality                  11.8      50            38.2
    Improve the production planning              50        25            25
    Increase the company sales                   63.2      25            11.8
    Reduce the company costs                     38.2      50            11.8
    Improve the product quality                  25        50            25
    Offer differentiated products and services   25        55            25
    Reduce the time needed to meet orders        38.2      38.2          23.6
    Average                                      36.8      38.2          25
Figure 6. Average contribution of informatics resources for the company from a PCP perspective.
these companies have low production volumes, as they are small, and their sales processes are therefore still carried out without informatics resources. User perception also stands out in relation to IT's contribution to improvements in product quality, which received the highest positive evaluation, 38.2%. That is because informatics resources make significant contributions to monitoring and controlling quality indicators, help track finished and semi-finished products, facilitate document control, and improve the distribution and consultation of work instructions.

Figure 6 shows the average distribution of the results for the items related to the contribution of informatics resources in supporting PCP at the studied companies. Note the predominance of average and low contribution levels. This situation is mainly associated with the limited financial resources of the surveyed companies, which had to centralize many of these activities in Office, a package unable to meet the demands of carrying out PCP tasks.
5.3. Use of IT in Supply Management

Increasingly, the success of an individual organization seems to be related to its ability to compete by playing different roles in dynamic, virtually connected supply chains at a global level, and not to its performance as an isolated and static organization (Gulati et al., 2000; Henriott, 1999). The co-production of a network invites us to rethink the models of strategic management and the organizational structures inherited from the industrial age (Pitassi and Soares, 2001). Technological advances, particularly in IT, allow us to anticipate the competitive advantages offered by the new structures emerging in the current economic environment (Venkatraman, 1994).

Supply management is defined as the control of materials, information, and finances within the process that runs from supplier to consumer, passing through the manufacturer, wholesaler, or retailer (Gomes and Ribeiro, 2007). Figure 7 illustrates how companies operate through the supply management process. Note that the figure presents only one supplier and one customer; in reality, companies normally have several suppliers and consumers.

Supply management is a balancing act in which the objective is to deliver what customers want, when and where they want it, at minimum cost. To reach these objectives, there must be a trade-off between the level of service to the consumer and the cost of supplying this service. As a rule, costs grow as the level of service grows; therefore, it is necessary to find a combination of inputs that maximizes service and minimizes cost (Arnold, 1999).

A competitive strategy developed to capture and maximize the opportunities of supply management demands that all the participating companies work in perfect
Figure 7. Supply-production-distribution systems. (Source: Arnold, 1999.)
Figure 8. Material consumption dynamic. (Source: Wanke, 2000.)
harmony, in a dynamic network structure. In this context, the lack of adequate management of logistics processes capable of exchanging digital information between partners reduces the capacity to coordinate and synchronize the flows of goods and services in these chains, which then accumulate incidental costs, lose performance, and forfeit consumer confidence (Fawcett et al., 1997).

Figure 8 shows a representation of the material consumption dynamic, distinguishing between theory and what is actually seen in practice. Wanke (2000) presents two models of consumption, one idealized and the other based on reality. For the author, in the ideal world consumption is fully known in advance; it is therefore possible to foresee with certainty the moment stock will run out and when new supply orders must be placed. In the real world, however, consumption is not fully known; moreover, re-supply lead times can vary, causing delays. To protect against unexpected delays, companies adopt safety stocks ("security supplies"). From the ideal point of view, and according to principles related to reducing production and purchasing lot sizes, the material needed for production would be available only at the exact moment the product requires it (Severiano Filho and Lucena, 2001).

Supply management is an area of the company that demands special attention, as it is directly responsible for acquisitions. Its proper operation is thus of fundamental importance, and the appropriate informatics tool can be a great help. Table 4 shows the IT resources the surveyed small companies use to perform supply activities.
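First, though, a brief aside: the safety-stock logic described above can be made concrete with the classical reorder-point calculation. This is standard inventory theory, not material from the chapter, and every number below is hypothetical.

    # Standard reorder-point calculation (classical inventory formula,
    # not from the survey). Safety stock buffers the demand and lead-time
    # uncertainty of Wanke's "real world" model. Numbers are hypothetical.
    import math

    daily_demand = 40      # average units consumed per day
    demand_std_dev = 8     # standard deviation of daily demand
    lead_time_days = 5     # average re-supply lead time
    z = 1.65               # service factor for roughly a 95% service level

    safety_stock = z * demand_std_dev * math.sqrt(lead_time_days)
    reorder_point = daily_demand * lead_time_days + safety_stock

    print(f"safety stock: about {safety_stock:.0f} units")
    print(f"order again when stock falls to about {reorder_point:.0f} units")

Even a spreadsheet can hold this calculation, which is precisely the kind of routine supply decision the survey found being made manually.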
Table 4. Informatics Resources Used in Supply Management.

    Activity                            Does not do (%)   Does manually (%)   Office package (%)   System created by company (%)   Other informatics resource (%)
    Registry of vendors                 25                11.80               –                    25                              38.20
    Registry of purchase orders         25                25                  13.20                11.80                           25
    Authorization and purchase orders   11.80             39.60               11.80                11.80                           25
    Sending orders to vendors           25                25                  13.20                11.80                           25
    Elaboration of price quote maps     11.80             39.60               11.80                11.80                           25
    Average                             19.72             28.20               10.00                14.44                           27.64
Analyzing "registry of vendors" in Table 4, we see that none of the surveyed companies uses Office for this task; the answers indicated the use of another informatics resource, and one-fourth of the companies have no vendor registry at all. This is a very alarming situation, not because of the informatics tools being used, but because of the 25% that have no vendor registry or classification whatsoever. This is a relatively simple tool, especially for small companies, and it can generate considerable benefits in supply management by supporting more consistent relationship policies with vendors and even criteria for classifying vendors based on supply history (deadlines, price, flexibility, reliability, etc.).

With regard to "registry of purchase orders," the same situation and concern apply: 25% of the surveyed companies do not do it. This is worsened by the fact that another 25% do it manually and only 13.2% use Office. These data reveal that this activity is in general performed in a very artisanal manner at these companies, or simply not done at all, seriously jeopardizing the speed and reliability of the processes related to the supply activity.

In "authorization and purchase orders," the alternative receiving the most answers was "done manually" (39.6%), followed by the use of another informatics resource (25%). There is also a great opportunity for gains in time and precision if these tasks are performed with software customized for these companies.

With regard to "sending orders to vendors," answers concentrated on the "does not do" and "does manually" alternatives, with 25% each. Selection of "does not do" may be linked to two factors: the company has vendors that take the initiative to monitor customer orders, or the respondent did not fully understand the question, as orders must reach the vendor for the supply to be delivered. Unfortunately, the questionnaire did not permit identification of which of these alternatives was indeed the motive.
Figure 9. Average use of informatics resources in supply activities.
The most indicated response for "elaboration of price quote maps" was "does manually," with nearly 40%. Considering the e-procurement portals available in the Brazilian market and the growth of outsourcing companies specialized in this service, we conclude that this is another problem in need of study that could be solved over the mid-term, permitting an increase in competitiveness.

Figure 9 compiles the items in Table 4 and permits a macroview of the use of informatics resources in support of supply activities. It is important to underscore the predominance of the "does manually" answer among the surveyed industrial microcompanies (organizations with up to 19 employees); it was also the most indicated answer overall, while "other informatics resource" predominated among the small companies (20 to 99 employees).

The survey also verified the degree of contribution, in the users' opinion, of IT resources to company competitiveness in the elements related to supply management (Table 5). Analyzing the data in Table 5, it is worth pointing out the following: 50% of the respondents felt that the use of IT resources in supply activities at their companies contributes intensely to the "reduction in the lack of parts and raw materials in production," and 50%, similar to the PCP-related item, felt that these resources contribute very little or nothing to "increasing company sales." This last item deserves more study in relation to supply management as well as PCP. These companies should invest intensively in awareness and training regarding the importance and use of IT in these activities, to encourage their employees to use all the capabilities of these tools and thus increase the effectiveness of their activities.

Also in Table 5, we see that 50% of the respondents believe IT resources applied to supply management activities collaborate moderately with the "improvement in process quality"; the other items obtained a more homogeneous distribution among the three alternatives. Figure 10 shows the average distribution of the results for the items related to the contribution of informatics resources to supply management at the studied companies.
Table 5. Degree of Contribution of IT Resources with an Emphasis on Supply Management.

    Activity (IT contribution)                                 Low (%)   Average (%)   High (%)
    Improve the quality of processes                           25        50            25
    Increase the bargaining power with vendors                 50        11.8          38.2
    Determine the best purchase alternatives                   36.8      38.2          25
    Reduce the lack of parts and raw materials in production   25        25            50
    Increase the company sales                                 50        38.2          11.8
    Improve the quality of products and services               38.2      38.2          23.6
    Offer differentiated products and services                 36.8      25            38.2
    Average                                                    37.42     32.34         30.24
Figure 10. Degree of user satisfaction with supply management informatics resources.
A homogeneous distribution can be observed in the answers, with a slight concentration in "low contribution." In any case, the experience of large corporations points to promising results should IT use increase in these companies. Countless processes can be carried out with greater speed and precision and with less labor, elements that will certainly help these companies and the region's industrial park as a whole.

5.4. IT Use in the Commercial and Financial Areas

Although this study focused on mapping IT resource use in operational activities at small industrial companies in the Bauru region, we also took the opportunity to verify its use in support areas, such as the commercial and financial areas.
The survey revealed that companies use IT resources in the commercial area mainly to perform tasks related to managing customer and vendor relationships, controlling commercial proposals, and following the issuance of orders and deliveries to customers. It was verified that 52.94% of these companies use generic programs like Office to perform these activities; because they do not need very specific tools, or are not aware of the other resources available in the market, the users reported considerable satisfaction (Fig. 11).

In the financial area, 55.88% of the surveyed companies outsource some part of the services, especially those related to accounting (bookkeeping of tax records, withholding taxes, registration of employees, and control of payment receipts). As in the commercial area, a large part of the activities that are not outsourced are carried out using Office (73.33% of non-outsourced activities), especially Excel. Figure 12 shows the results of user satisfaction evaluations regarding the IT resources available at their companies to help with financial activities.
Figure 11. Degree of user satisfaction with informatics resources in the commercial area.
Figure 12. Degree of user satisfaction with informatics resources in the financial area.
Conclusions

The informatization process helps in the management of both operational and strategic business, making companies better able to confront the adversities of a constantly changing market. This study revealed that the surveyed companies know they need informatics resources and recognize the benefits obtained from their use; however, they often run into structural and financial limitations. From the reported data, we can see that the size, volume, and complexity of operations indicate the need to use more and better IT in all phases of the decision process, whether in the administrative, operational, or strategic area. A limited view by the business owner of the possibilities of informatics resources, even in small companies, jeopardizes efficiency and effectiveness, and consequently competitiveness.

Analysis of the results revealed a general dissatisfaction with the informatics resources available at the surveyed companies, given the extensive use of Office, which was not specifically developed to carry out managerial activities and is therefore inappropriate for many of them and unable to fully satisfy users' needs. As company size increases, companies were seen to develop their own, more specific systems, consequently achieving better results in their processes thanks to better operational support, thus favoring management and planning processes.

Although the data were collected in the Bauru region, they represent an overview of the reality of companies in the midwest section of the state, as Bauru is centrally located and plays an important role in the regional and national economic scenario. Analysis of the data and of the business owners' reports revealed the urgent need to develop low-cost computer programs, designed especially for management activities at small companies, providing increased competitive capacity. For such an aim, however, a collective effort is needed, involving government bodies, business associations, and the companies themselves. It is necessary to think quickly and seriously about a structured program to disseminate IT resources in the surveyed region.

Acknowledgment

We would like to thank FAPESP (Research Support Foundation of the State of São Paulo) for the financial assistance to carry out this study.

References

Anderson, WT (2002). Call it what you will, informatization is its name. The World Paper. <www.worldpaper.com>.
Andrade, JH (2007). Planejamento e controle da produção na pequena empresa: Estudo de caso de fatores intervenientes no desempenho de um empreendimento metalúrgico da cidade de São Carlos-SP. Master's dissertation.
Andreza, SD and F Dorival (2002). Sistemas especialistas nas organizações. In IX SIMPEP — Simpósio de Engenharia de Produção, Bauru, SP.
Arnold, JRT (1999). Administração de Materiais. São Paulo: Atlas.
Carr, NG (2003). A TI já não importa. Harvard Business Review, Brazilian version.
Constituição da República Federativa do Brasil (1988). Presidência da República — Casa Civil, Brasília. <www.planalto.gov.br/ccivil_03/Constituicao/Constitui%C3%A7ao.htm> [06 July 2006].
Evans, JR and A Mathur (2005). The value of online surveys. Internet Research, 15(2), 195–219.
Fawcett, SE, LL Stanley and SR Smith (1997). Developing a logistics capability to improve the performance of international operations. Journal of Business Logistics, 18(2), 101.
Furlaneto, EL (2002). Formação das estruturas de coordenação nas cadeias de suprimentos: Estudo de caso em cinco empresas gaúchas. Doctoral thesis, Escola de Administração, Universidade Federal do Rio Grande do Sul.
Gasperini, V (2000). Pequenas lojas formam cooperativas para fazer frente às grandes redes. Revenda Construção, São Paulo, ano XII, n. 121, August.
Gomes, CFS and PCC Ribeiro (2007). Gestão da cadeia de suprimentos integrada à tecnologia de informação. São Paulo: Thomson.
Henriott, LL (1999). Transforming supply chains into e-chains. Supply Chain Management Review, Special Global Supplement, 12–18.
Hoffmann, E (2002). Elaboração e armazenamento de documentação de sistemas informatizados. Curso de pós-graduação em gestão da informação e inovação tecnológica. Curitiba: Fundação de Estudos Sociais do Paraná.
Laudon, KC and JP Laudon (2001). Gerenciamento de sistemas de informação, 3rd ed. Rio de Janeiro: LTC.
Lucas, HC (2002). Information Technology for Management, 6th ed. New York: McGraw-Hill.
Moura Jr, ANC (1996). Novas tecnologias e sistemas de administração da produção: Análise do grau de integração e informatização nas empresas catarinenses. Production engineering master's dissertation, Florianópolis, UFSC.
O'Brien, JA (2004). Sistemas de informações e as decisões gerenciais na era da internet, 2nd ed. São Paulo: Saraiva.
Oliveira, OJ (2006). Pequena empresa no Brasil: Um estudo de suas características e perspectivas. Revista Integração, n. 44, 5–16. São Paulo, Brazil.
Ottoboni, G (2008). Ranking estadual de exportações destaca Pederneiras e Bauru. <http://www.jcnet.com.br/busca/busca_detalhe2008.php?codigo=132513> [15 June 2008].
Pitassi, C and TDLVAM Soares (2001). The strategic relevance of information technology for the business to business organization. In Proceedings of the BALAS Conference 2001, p. 200. San Diego, CA: University of San Diego.
Prates, GA (2003). Tecnologia de informação em pequenas empresas: Analisando empresas do interior paulista. Revista Administração On Line, 04(04), 1–13. São Paulo. [23 April 2008].
RAIS — Relatório Anual de Informações Sociais. Brasília: Ministério do Trabalho. [18 February 2007].
Rezende, Y (2002). Informação para negócios: Os novos agentes do conhecimento e a gestão do capital intelectual. Ciência da Informação, Brasília, 31(2).
Rodriguez, M and AJ Ferrante (2000). Tecnologia da Informação e Gestão Empresarial, 2nd ed. Rio de Janeiro: E-papers.
Santos, M (1998). Fora de foco: Por que boa parte das pequenas empresas não consegue tirar vantagens efetivas da informática e da informação. Pequenas Empresas Grandes Negócios, n. 108, ano X, February.
SEBRAE (2003). A informatização das MPEs paulistas. Relatório de Pesquisa. <www.sebraesp.com.br> [15 March 2006].
SEBRAE (2005). Boletim estatístico de micro e pequenas empresas. Observatório SEBRAE, 1º semestre de 2005. <http://sebrae.com.br/pesquisas> [26 May 2007].
Sellitto, C (2006). Improving winery survey response rates: Lessons from the Australian wine industry. International Journal of Wine Marketing, 18(2), 150–152.
Severiano Filho, C and FO Lucena (2001). A programação de materiais frente às necessidades do PCP: Estudo de caso na construção civil. In Anais do XXI ENEGEP — Encontro Nacional de Engenharia de Produção.
Slack, N, S Chambers and R Johnston (2002). Administração da Produção, 2nd ed. São Paulo: Atlas.
Ströher, OP (2003). Diagnóstico do perfil da tecnologia da informação nas pequenas empresas do ramo industrial do Vale do Ivaí, norte do Paraná. Production engineering master's dissertation, Florianópolis, UFSC.
Terence, ACF (2002). Planejamento estratégico como ferramenta de competitividade na pequena empresa. Production engineering master's dissertation, São Carlos, EESC-USP.
Tseng, SM (2008). The effects of information technology on knowledge management systems. Expert Systems with Applications, 35(1–2), 150–160. [02 September 2008].
Venkatraman, N (1994). IT-enabled business transformation: From automation to business scope redefinition. Sloan Management Review.
Vieira, FRC (2002). Dimensões para o diagnóstico de uma gestão estratégica voltada para o ambiente de empresas de pequeno porte. Production engineering doctoral thesis, Florianópolis, UFSC.
Wanke, P (2000). Gestão de estoques. In Logística Empresarial: A Perspectiva Brasileira, PF Fleury and KF Figueiredo (eds.), São Paulo: Atlas, 126–132.
Weissbach, R (2003). Strategies of organizational informatization and the diffusion of IT. In Information Technology & Organizations: Trends, Issues, Challenges & Solutions, PM Khosrow (ed.). Hershey: Idea Group Publishing.
Zimmerer, TW and NM Scarborough (1994). Essentials of Small Business Management. Macmillan College Publishing Company.
Biographical Notes

Otávio José de Oliveira is an assistant professor at São Paulo State University (UNESP) and coordinator of the Production Engineering graduate course. He is a civil engineer, holds a master's degree in business and a doctorate in civil engineering, and completed postdoctoral work in production technology and management. He is the author of several books and
scientific papers on operations, certified management systems (ISO 9001, ISO 14001, and OHSAS 18001), and information technology.

Guilherme Fontana is a researcher in the operations management area and the owner of an information technology company.
Chapter 23
Technology: Information, Business, Marketing, and CRM Management

FERNANDO M. SERSON
Marketing Department, FGV-EAESP, FGV, SP, Brazil
Rua Itapeva, 474, 9º andar, Bela Vista, São Paulo, SP, 01304-000, Brazil
[email protected]
[email protected]
Based on established marketing and customer relationship management (CRM) theories, this chapter presents concise and objective analyses of these theories, their characteristics, and their implications, with recommendations that facilitate managers' work and prevent them from wasting resources. The manager is invited to reflect on the best way to use the proposed concepts to facilitate the work of management. The chapter argues that there is no absolute standard or single way to manage marketing and CRM in a company, leaving the manager to consider how to adapt them to the market in which he operates and what the most effective way of applying these theories is. Keywords: Marketing; relationship; CRM and data management.
1. Introduction

What leads a customer to leave a company, a brand, or a business? A 2005 study surprisingly concluded that, more important than price or product/service quality, what drove customers away was the lack of service and/or attention given to them (Fig. 1). In the context of a highly integrated economy, with companies and organizations no longer serving merely multinational markets but global ones, and with many advances, especially scientific and technological ones, should those companies not ask themselves what, in fact, they ought to do to achieve and maintain success under these conditions? Certainly, the planning and management path of those companies, whether they act only locally or globally, depends on the use and exploitation of the available technology in line with their marketing and relationship practices.
Figure 1. Why does a company lose a client? The chart compares the client vision (n = 300) and the company vision (n = 369) across the categories customer service quality, price, other, functionality, convenience, and had to change. (From Thompson, B. The loyalty connection: Secrets to customer retention and increased profits. CRM Guru, March 2005. Available at: http://www.techworld.com/cmsdata/whitepapers/4206/The%20Loyalty%20Connection%20-%20Cust%20retention.pdf. Accessed on: 12 Oct. 2007.)
Thus, this chapter is based on the concepts, definitions, and implications of relationship marketing, and seeks to establish the links between customer relationship management (CRM) strategy, the appropriate use of technology, and the possibilities it provides in terms of recording, accessing, disseminating, and using information. The proposal is grounded in what can be considered the best marketing and relationship practices of a given organization. It will thus be possible to establish whether, and how, CRM plans and strategies relate to the available technology, following the idea that technology enables and facilitates the successful achievement of business goals and strategies.

CRM is a term that became fashionable among managers from different areas and different companies in the mid-1990s, helped by the gradual reduction of prices as technology developed and spread. We must note, however, that the practice of CRM is older than is commonly thought. Imagine, for example, a small store in the rural area of a given city or county at the beginning of the 20th century, where the owner not only sold on credit to his customers, the residents and visitors of the village or township, but also maintained a "book of records" in which every item was recorded, along with how much each customer had acquired and each customer's habits, and where this supplier kept all the data about his clients and his merchandise: who bought what, and when.
and what someone bought. Every time there was a new purchase, it was kept in the owner’s book of records. If there was a new product in the shop, this man could check his files (book of records) and could contact a customer, to show or suggest the purchase of the novelty. Despite not having the consciousness, he “intuitively” was practicing what is today is known as CRM. The CRM, anachronistic, represents a strategy of relationship with the client. By itself, like any strategy to be adopted by an organization, does not solve anything. The desire to implement such a tool will not remedy all the possible problems of a company. It is extremely important and useful you have a clearly defined aim before implementing such a strategy. This should be simple and feasible. Issues such as want to increase sales? How much? Want a more efficient support? What is the maximum rate of complaints allowed? Would greater degree of satisfaction of customer? All these together? There are no problems, if they are to adopt a strategy that is clear and specific, and also the to recognize that the more complex be the goal, the more will probably be the cost of implementation and the time until they have the desired return. For the success of its implementation, the CRM requires the commitment of all employees of the company. There may be an idea or strategy for an area or department alone. Still, it is necessary that they have a clear idea of what the priorities are and what workflow and information to be used in the model of work. It is directly proportional to the successful implementation of CRM: the clarity with which information and data are collected, stored, accessed, and distributed by the participants of the development process and implementation. As for technology, which is an important element of the strategy, it must provide flexibility to grow and go, adapting to new market conditions for businesses. Concluding the CRM is not presented as a recipe for cake, which must always be used for all situations and how to search for solution to any kind of problem faced by an organization. Its philosophy and practice may be a powerful tool, if it is used in a correct and proper way. According to international research, the main causes of abandonment of a particular brand or supplier are • • • • • •
• 1%: death
• 3%: change of city
• 5%: influence of friends and family
• 9%: solicitation by competitors
• 14%: low perceived quality
• 68%: poor quality of service or neglect

(Source: Journal News & World Report, 2001).
It appears that, while customers attach ever greater value to service and quality, organizations instead prioritize price and possible needs for change. With this divergent view of customer needs, companies fail to enhance interaction and relationship with customers, and turn or return their attention to price, which, as we
know, depends largely on the perception of value held by the customer and involves other attributes such as brand or convenience (Engel et al., 1995, p. 208).

2. Relationship Marketing

The origin of relationship marketing may be linked to marketing efforts based on databases. Although closely linked to a promotional view (Parvatiyar and Sheth, 2000, p. 5), these efforts already reflected the desire to keep records and information on customers and to use them over time. Relationship marketing entered the vocabulary of marketing at the beginning of the 1980s with Berry (1983). Vavra (1993) is one of the authors who worried about what happens after the first sale or the first transaction, proposing relationship actions and tactics for the period after the first sale. Like them, Peppers and Rogers (2000) gained prominence with their one-to-one marketing concept, which focuses attention on the customer as an individual, using databases, and maintains market share through long-term relationships. By that time, keeping customer information on computers had an insignificant cost compared with the early days of using such machines for that purpose. McKenna (1997), in turn, offers a more strategic perspective on relationship marketing, suggesting greater involvement in maintaining the relationship through communication and the sharing of knowledge between supplier and customer. Even before that, other authors were concerned with the concepts of relationship marketing, especially those closest to services marketing and the long-term relationship. Some of these researchers belong to the Nordic School, which realized their great importance for services marketing and carried out several studies before relationship marketing had even been named: Gummesson studied the interaction between buyer and seller in 1977, while Grönroos investigated the need-adaptation circle in 1980, using terms such as "buyer-seller interaction" and "interactive marketing." In 1983, Grönroos spoke of the life cycle of the relationship with the client, and Gummesson of the new marketing concept. In 1984, it was Lehtinen's turn to discuss the stages of service consumption, and in 1987 Gummesson returned with the interactive relationship (Grönroos, 2000). Since then, various studies have sought to explain the phenomena involving exchanges among companies, markets, environments, employees, suppliers, and customers, against the background not only of winning customers but also of keeping them in the long term through interaction and the building of relationships. Understanding relationship marketing requires one to distinguish between the discrete transaction, which has a distinct beginning, short duration, and a sharp ending by performance, and relational exchange, which traces to previous agreements, is longer in duration, and reflects an ongoing process (Dwyer et al., 1987, p. 13). Many definitions have been proposed for relationship marketing; they are summarized in Table 1.
Table 1. Definitions of Relationship Marketing.

Berry (1983): To attract, retain and — in multi-service organizations — enhance the relationship with customers.
Berry and Parasuraman (1991): Relationship marketing concerns attracting, developing, and retaining relationships with customers.
Gummesson (1994): Relationship marketing is marketing seen as relationships, networks, and interactions.
Morgan and Hunt (1994): Relationship marketing refers to all marketing activities directed toward establishing, maintaining, and developing successful relational exchanges.
Sheth and Parvatiyar (2000): Involves and integrates customers, suppliers, and other partners into the developmental and marketing activities of the company.
Grönroos (1996): Relationship marketing is to identify, establish, maintain and develop relationships with customers and other stakeholders, at a profit, so that the goals of all parties involved are met; this is done by a mutual exchange and fulfilment of promises.
Source: Adapted from Rabia, S. Linha ocupada, cliente livre! Um estudo sobre contact centers e fidelidade dos clientes de operadoras de telefonia celular. Doctoral thesis, FGV-EAESP, 2008.
The definitions in Table 1 share some common keywords: keeping customers, the long term, developing customers, exchange and, of course, relationships and interactions. For Berry (1995, p. 151), many services continue to open the doors to relationship marketing, since customers form relationships with people rather than with products. For that author, this is because services are at a moment of maturity in which relationship marketing is perceived as a potential benefit for both the company and the customer. Berry also points out that relationship and a policy of quality walk together in the search for customer loyalty. Thus, following the definition of Gummesson (1994), and especially considering the interactions between customers and enterprises, an important point for the creation and maintenance of the relationship is the maintenance of customers (doing business) in the long term. Relationship marketing with clients, also called customer relationship management or even one-to-one marketing, seeks "to improve the relationship between the company and its customers" (Trepper, 2000, p. 292), aiming to generate information about them in order to provide more personalized attention, retain existing customers, and gain new ones. The central idea of these systems is to work with the client and not just for him (Pace, 2000).
According to Fingar et al. (2000), when customers look at a company they have a fragmented vision of it, defining it by the characteristics of the sector with which they happen to be interacting. In addition, each area of the company treats the client in isolation, as if customers were several independent entities, and each sector holds its own information about the customer. The philosophy of CRM precisely eliminates this fragmented vision on both sides: customers must be able to identify the company as a whole, and the different areas of the company must be integrated and must share information about the customer, individualizing and standardizing it. This means that all information on a client resides in a single database, to which all the functional areas of the company have access. These systems "use the surveys of profiles to generate personalized email, Web content dynamically generated, mail bags, faxes, and phone calls" (Sterne, 2000, p. 297). They include tools that enable better treatment of the client, streamlining and facilitating a delicate relationship, customer loyalty, which becomes ever more important for the company. For the company to obtain such loyalty, it needs information on customers and especially on their preferences; it is for this reason that CRM systems are intended to supply important information on customers, so that they are well attended and their initial expectations exceeded. According to Peppers (apud Pace, 2000), some advice should be followed before deploying a CRM system:
• Gain actual knowledge of the client.
• Know what he wants.
• Produce exactly what he wants and deliver it no later than previously agreed.
• Assure the quality of the service or product, and only then think about how to make it more customized.
But investing in information technology alone is not enough. The company must also train its workers, seeking everyone's participation so that they know how to make the best use of information about the customer. CRM seeks to eliminate the notion of an "owner of the information": the information must be available to all sectors of the company so that, regardless of the sector the customer deals with, he is always satisfied with the care and attention given. This raises another concern for organizations, namely the security and integrity of information relating to customers, since at any sign of information violation the company may lose a relationship cultivated for years. Even with these concerns, the implementation of CRM is still valid and worthwhile, because it provides many benefits to the organization, such as shorter sales cycles, development of e-business, greater knowledge about the customer, a full picture of the customer's profile, administration of the demand chain, and so on.
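To make the idea of a single, shared customer database more concrete, here is a minimal Python sketch of a consolidated customer record that brings together the transactions, service contacts, and preferences that separate departments would otherwise keep apart. All class and field names are illustrative assumptions, not the schema of any particular CRM product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Transaction:
    day: date
    item: str
    price: float
    payment_method: str

@dataclass
class Contact:
    day: date
    channel: str   # e.g., "phone", "email", "store visit"
    summary: str   # complaint, inquiry, service request, ...

@dataclass
class CustomerRecord:
    """One record per customer, visible to every functional area."""
    customer_id: int
    name: str
    transactions: list = field(default_factory=list)
    contacts: list = field(default_factory=list)
    preferences: dict = field(default_factory=dict)

# Sales, support, and marketing all read and write the same record,
# rather than each keeping a private, partial view of the customer.
record = CustomerRecord(customer_id=42, name="A. Customer")
record.transactions.append(Transaction(date(2009, 5, 1), "widget", 19.90, "credit"))
record.contacts.append(Contact(date(2009, 5, 3), "phone", "asked about delivery"))
record.preferences["contact_channel"] = "email"
```

Whether such a record lives in one relational database or in a set of linked tables is an implementation choice; the point is shared access by every sector.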
3. Levels of CRM

Comparing CRM with other systems such as SIT, GIS, and EIS, we can see that it analyzes information by customer, while the others focus on the transaction, on indicators by functional area, or on a view of the whole organization. "From a technology standpoint, CRM involves capturing customer data throughout the enterprise, consolidating all internally and externally captured data in a central database, analyzing the consolidated data, distributing the results of that analysis to the various customer contact points, and using this information when interacting with the customer through any point of contact with the company" (Peppers and Rogers Group of Brazil, 2001, p. 35). From a philosophical point of view, we can say that "CRM is a business strategy focused on understanding and anticipating the needs of the current and potential customers of a company" (Gardner, 1999). According to Rosolem (apud Pace, 2000), although some providers, such as SAP, Oracle, J. D. Edwards, and PeopleSoft, consider CRM an extension of ERP, the two represent distinct markets and efforts. Following those authors, we may infer that ERP provides indirect benefits to the customer, because it is oriented to processes, making information available across the different functional areas and enabling the organization to obtain better results, whereas the focus of CRM is the customer: it provides information on particular customers, enabling the organization to understand their conduct and to meet and exceed their expectations. On the basis of authors such as Cravens and Piercy (2008) and Mason (2004), we may divide CRM into three distinct and complementary levels.

Strategic CRM: This can be considered the deepest level of CRM. It is the level that holds the data and information giving managers the basis to analyze and decide, from real information and data on each customer rather than from assumptions, how much cost or investment of resources is required to maintain the loyalty of certain customers, and what benefit or result each customer will bring to the company. Here lies the bridge, or link, between the CRM area and the company's business intelligence (BI).

Analytical CRM: Based on the analysis of the individual behavior of each customer, this level not only permits but facilitates the manager's knowing what an individual customer purchases, at which moments this occurs, through which channel, and which medium carries a message with the greatest effect. Among other things, through analytical CRM managers can estimate the lifetime value (LTV) of a particular customer or of a specific group of customers, and can determine which management and sales strategy is more effective, for example by setting objectives such as cross-selling to a particular group of customers or up-selling to others.
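As a rough illustration of the kind of estimate analytical CRM supports, the sketch below computes a simple lifetime value as the discounted sum of expected annual margins, where the probability that the customer is still active decays by a retention rate each year. This is one common textbook formulation rather than the only one, and the margin, retention, and discount figures are invented for the example.

```python
def lifetime_value(annual_margin: float, retention: float,
                   discount: float, years: int) -> float:
    """Discounted sum of expected margins over a planning horizon.

    In year t the customer is still active with probability retention**t,
    and money received in year t is discounted by (1 + discount)**t.
    """
    return sum(
        annual_margin * retention**t / (1 + discount)**t
        for t in range(years)
    )

# Illustrative figures only: $200 margin per year, 80% yearly retention,
# a 10% discount rate, and a 10-year planning horizon.
print(f"LTV = ${lifetime_value(200.0, 0.80, 0.10, 10):.2f}")
```

Comparing such estimates across customer groups is what lets a manager decide where cross-selling or up-selling effort is likely to pay off.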
Here, establishing a link with the issue of the database, we note that the database, constantly updated, should include:

• Transaction information covering the history of each customer, with details of the items purchased, the date of purchase, the price paid, and the payment methods used.
• Descriptive information, where the data should be relevant to the core business of the company, serving as a basis for ranking and segmenting the market as well as for promotional and direct-marketing interactions.
• Customer contacts: this should include all the customer's interactions with the company and with its distributors and representatives (e.g., workshops, technical assistance). It stores, for each customer, information on requests, complaints, and inquiries, on requests for services or information, and even on participation in the loyalty structure, if the company in question has embraced one.
• Responses to marketing stimuli: here, according to Winer (2001), the database should record the reaction of a given client when confronted with, stimulated by, or exposed to an advertisement or a specific direct-marketing initiative.

Operational CRM: Last but not least, there is what can be considered operational CRM, the level of direct contact (front office) with the customer. With information, requests, and demands operationally integrated through a database, the company will probably be able to structure, design, and set processes and procedures for the requests submitted by customers. The level of care, and therefore the degree of customer satisfaction, tends to improve, allowing not only greater gains in terms of increased sales (cross-selling, up-selling, and referrals), but also savings of resources, since both the need for rework or re-processing in several cases and the waste of additional inputs are avoided. To illustrate these ideas, Fig. 2 shows the three levels of CRM.

[Figure 2. Levels of CRM. Operational CRM: customer relations, customer services, capture and registration of customer data. Analytical CRM: goals and targets of communication and promotion, database of customers, infrastructure with information on customers. Strategic CRM: relevance of the customer, competitive differentiation, hierarchy and market segmentation, positioning. Source: Adapted from Mason, C. Perspectives in teaching analytically oriented CRM. AMA Faculty Consortium, 2004.]

4. Relationship Marketing and Marketing Services

For Rabia (2008), the authors of the Nordic School of Management Science always perceived that relationship marketing is intrinsically connected with the provision of services. This is because services are consumed through a process of consumption, not as the result of a production process (Grönroos, 2000, p. 96). If we understand the consumption of a service as its production, then one of its main characteristics is the impossibility of storage (Serson, 2006).
Thus, this complements the study of Rabia (2008), which has a critical interface with the perception of the client. In services, the supplier often has direct contact with the customer, and thus the relationship can be initiated (if all goes well, of course). Most products today cannot be considered "pure": what is sold is a mix of products and services in various proportions, including elements such as installation, technical assistance, maintenance, exchange, and training, among others. Another point stressed by the Nordic School, and one that brings relationship marketing close to services marketing, is that marketing is perceived as a process involving the organization's whole strategy, not designed as a function of specialists. On this principle, the organization must be oriented to the market if its managers have the slightest intention of structuring and building relationships with their customers. Similarly, remember that the whole process of consumption, and not just its result, is relevant to customer satisfaction (Grönroos, 2000, p. 96). The same view is held by Schmitt (2004), who points out that the customer's experience is more than finding the desired product in a store or feeling the power of a car: what must be considered is the process. To illustrate this situation, take the case of a sporting goods and equipment store. Many aspects must be managed, such as stock, variety, and level of service, and these are very important for achieving satisfaction. However, other factors concerning the store should be considered equally fundamental for management, including the attentiveness of the staff, the reception, the environment and facilities and, of course, every possible channel of relationship, whether the telephone, personal service, or the company's Web site. We can even consider the case of a car, where all the basic attributes of competing models could be seen as similar. It is understood that, in this case, in essence, cars are
not differentiated by the sound of the door opening and closing, the upholstery of the seats, or the posture of the seller who wishes to make the sale. When the central elements become too similar, peripheral elements gain importance. Thus security, the network of authorized workshops, and the possibility of interacting with the manufacturer to resolve doubts about the vehicle, among others, become the items considered in purchasing, or not purchasing, a certain car. With this perspective of marketing as a process rather than as a simple transaction, elements can be aggregated to the original product offer, adding value for the customer. For example, just-in-time delivery can add value to the product and differentiate the offer in the market. As products become more similar, this change of perspective can create marketing differentials which are as important to customers as the products themselves. If the services surrounding the product do not operate adequately, however, they can create problems just as serious, as when a service center does not work properly. Customers do not look only for products or services; they ask for a much more holistic offering, including everything from information on the best and safest way to use the product through to delivery, installation, upgrade, repair, maintenance, and the correct solution to what they purchased. And they demand that all this and more be provided in a friendly and reliable way, and on time. In addition, the product itself is less often the reason for dissatisfaction than the elements that surround it (Grönroos, 2000, p. 100). A channel for interaction and relationship such as the contact center can therefore be of great importance in building a marketing strategy geared to customers. The service will only be complete if the attributes that surround it are accessible, friendly, and reliable. The contact center is one such element, but it can also be responsible for handling several others, such as technical assistance, spare parts, changes in flight schedules, and so on. As the focus of the organization moves toward relationship marketing, the need arises to manage the process of interaction. In this process, the customer (or individual consumer) interacts with the supplier of goods or services as represented by people, technology, systems, and knowledge (Grönroos, 2000). In corporate business, the number of customers tends to be smaller and the value of transactions higher, so the interaction can be handled by sales or marketing professionals. In business directed to the consumer market, the number of customers rises to millions and the value of transactions falls; interaction in that case requires a larger structure, the contact center, but it must meet expectations of similar quality. In marketing, the recognition of the need for interaction in building relationships has received the name of integrated communication, or total communication. Transaction marketing traditionally used the mass media and the sales force as tools and, more recently, has also been using higher doses of direct marketing, or database marketing. Moreover, this approach can also integrate customer
service, or the contact center, into the communication process. This movement is clearly the result of a turn toward relationship marketing. Just as a good package does not make a good product, the use of new communication tools will not bring good results if the processes of interaction are not adjusted. Communication, or rather dialog, can contribute to the winning and development of customers; it is the process of interaction that maintains the standards of quality needed for that dialog to occur effectively. In this scenario, the role of contact centers is gaining importance as a channel of communication with customers and for routing their demands, adding value and allowing the company to differentiate itself. Some studies show that the interpersonal relationships between buyer and seller affect the relationship between the consumer and the vendor: if handled well, the relationship can lead to satisfaction, commitment, and confidence in the supplier, and to the intention to repurchase and to recommend to other consumers (Guenzi and Pelloni, 2004). The pursuit of a good personal relationship involves not only the moment of selling, but the whole relationship between customer and company, before, during, and after a given transaction. Information from the profit impact of market strategy (PIMS) program shows that superior consumer service allows companies to charge higher prices and achieve higher growth in sales and market share (Chang and Huang, 2000). In the mobile sector, once again, services are similar and there are few opportunities for differentiation in the product or service itself. At the same time, it is a market that launches novelties continuously, whether in technology, pricing plans, or promotions (usually easily reproduced by competitors). Since the customer often cannot follow this dynamism, improving communication and informing customers of what can benefit them could help add value to the relationship and build the client's loyalty.

5. Data Registration, Maintenance, and Management

For Rapp and Collins (1996), the 1990s were the decade of intensive use of database marketing. According to these authors, the Marriott hotel chain had more than 5 million members in its guest database, while over 34 million passengers were registered in airline mileage programs. In those days, the acquisition of customers was already believed to be an important aspect of management. Today, because of fierce competition, we may say it is even more important; and, in this context, the maintenance and loyalty of those who have already bought from the company, or who can be qualified or considered as its customers, matters more every day. To make this possible and feasible, organizations must first collect and store any information about a given client that is relevant to the company. Here, we must draw attention to the need to determine with
clarity what may be perceived as relevant information. Relevant information is any kind of inherent characteristic of a client that will somehow interfere with or assist in the management of the company and the satisfaction of that client. An example of this situation is a person who always eats his meals in the same restaurant. If he asks for his meal in a given way every day (without salt, well done, or without mustard, for example), the waiter "already knows" that the dish to be served has some special features, and also knows exactly how to proceed internally so that the dish, as demanded, will satisfy that customer. These features of the customer (eats food without salt, prefers his steak well done) represent important and relevant information to be stored. From then on, whoever is cooking or preparing the dish gets the information, and he too knows (the "restaurant" knows) how it should be prepared. In parallel, the more of the customer's information and peculiarities the restaurant holds, the more faithful he becomes to that restaurant, because he unconsciously recognizes that he does not need to explain how he wants his meal or which ingredient must be included in or excluded from it. This is exactly what we mean by the concept of marketing (see Section 6). As an initial step, the manager has to define the structure of the database; that is, to decide whether the database will consist only of tables containing customer information, or whether the tables will also hold qualitative aspects relating to the tastes and preference patterns of the consumer or customer.
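The restaurant example can be turned into a small sketch of such a structure, assuming a hypothetical split between an identification table and a qualitative preferences table keyed by customer id; all names and values here are illustrative.

```python
# Identifying data in one table, qualitative preferences in another,
# both keyed by the same customer id.
customers = {
    101: {"name": "Regular Diner", "phone": "555-0101"},
}
preferences = {
    101: {"salt": "none", "steak": "well done", "mustard": "exclude"},
}

def prepare_order(customer_id: int, dish: str) -> str:
    """Any waiter or cook can serve the customer correctly, because the
    'restaurant', not one employee, holds the preference record."""
    prefs = preferences.get(customer_id, {})
    notes = ", ".join(f"{k}: {v}" for k, v in prefs.items())
    return f"{dish} for customer {customer_id} ({notes or 'no special notes'})"

print(prepare_order(101, "grilled steak"))
```

Storing the qualitative preferences separately from identifying data keeps the descriptive tables small while still letting any point of contact retrieve the customer's peculiarities.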
6. Marketing: Context and Concept

Historical development toward a focus on the customer. The main focus of business administration has varied over time in accordance with the historical moment, passing through several distinct phases. In the 17th century, attention was focused on production: this was the "Era of Production," initially centered on subsistence production, in which the concern was producing for one's own consumption. In the 18th century it evolved into production to order, in which the customer, understood as the one who buys and dictates the characteristics of the products he or she will acquire, drove what can be interpreted as craft production, creating an under-use of the available means of production. Over time, in order to make the best use of their capacity and existing means of production, producers came to risk manufacturing the goods they thought would be most acceptable to customers (the phase of production for stock, from the early to the mid-19th century). At that moment, the important thing was to "produce." With the industrial revolution (and the steam engine) came the process of mass production (from the middle to the end of the 19th century), whose main purpose was to reduce manufacturing costs so as to stimulate the purchase and consumption of the goods produced.
Once production and distribution techniques had been mastered, the manufacturers' need became the disposal of surplus production, and thus began the era of selling in which, whatever the good in question and whatever the scale of production, it was necessary to sell it (from the early to the mid-20th century). At that time the focus of attention was the development of selling techniques capable of generating volumes of business consistent with the level of production. From the mid-20th century came the era of the product, in which the focus of the company's attention remained internal: efforts were targeted and resources spent on improving the company's products without, however, checking whether these "improved products" actually met the needs and desires of customers. From that period to the beginning of the 1970s there follows, naturally, the concern with the customer. Organizations then adopted a "focus on marketing," turning the sales function of the company into a marketing function.

7. Conceptual Developments

There are several definitions of marketing. According to the American Marketing Association (AMA), marketing is nothing more than the performance of business activities that direct the flow of goods and services from producer to user (1960). The Ohio State University, considering this definition too narrow, conceived marketing as the process by which the structure of demand for goods and services in a society is anticipated or enlarged and satisfied through the conception, promotion, exchange, and physical distribution of those goods and services. This concept includes activities such as the development of products and services and research prior to the flow of goods or services from producer to consumer. Note that these concepts still excluded non-profit organizations and institutions, a gap that Kotler and Sidney Levy addressed back in 1969 by incorporating such organizations into the definition. Extending the scope of the concept further, Robert Haas defines marketing as the process of discovering and interpreting the needs and desires of the consumer, translating them into specifications of products and services, and then creating and expanding the demand for these. Remember that by that time, the mid-1970s, the world had already passed through the oil crisis, companies were facing more competition each day, and the philosophy of marketing practice was therefore consolidating. Later still, marketing was redefined by the AMA as the process of planning and implementing the conception, pricing, communication, and distribution of ideas, goods, and services so as to create exchanges that satisfy individual and organizational goals. According to that definition, however, the scope of marketing activities ends with the process of distribution; when, in fact, the philosophy of marketing practice tells us that even after the distribution of goods or services, the organization must seek to keep in touch, that is, to relate to those who consumed what it produced.
Thus, Philip Kotler defines marketing as a human activity aimed at satisfying needs and desires through processes of exchange. This definition leads to the conclusion that marketing is involved in all processes of exchange, regardless of whether or not they involve a flow of goods or services, and regardless of whether or not those exchanges involve monetary values. From the above, we can infer that marketing is the set of human activities directed at perceiving and satisfying the needs and wants of the internal or external customers of an organization, together with that company's ability to adapt itself to address those needs and desires. An important distinction must be emphasized: in many cases, the clients of a particular organization are distinct from its consumers. Sometimes consumer and customer coincide in the same person, as with visitors to a cinema or a restaurant, or purchasers of appliances. However, there are several situations, such as diapers for babies or schooling, where the consumer is the child, while those who define and decide the purchase are the parents or guardians. Marketing efforts should therefore be strengthened, since the marketer must be vigilant to the needs of both customers and consumers.

8. Practicing Marketing

If marketing is the quest to satisfy needs and desires, or the ability to adapt in order to achieve that satisfaction, it can be interpreted "simply" as ENCHANTING the client/consumer, whether internal or external to the organization. Through the practice of that philosophy, the customer is satisfied (rewarded) and the organization, in times as extremely competitive as the present, guarantees its existence. In practice, such delight is not an easy task to achieve, but it is a basic and fundamental condition for the performance of a company, regardless of its mission or its goals.

MARKETING = ENCHANTING

There is no single ideal formula appropriate for promoting the enchantment of customers and consumers in all organizations, because each one operates in a market with its own characteristics and peculiarities, playing a specific role within that market.

9. Marketing and the Moment of Truth

At a particularly difficult time for airlines, Jan Carlzon, then president of Scandinavian Airline System (SAS), coined the term Moment of Truth in order to encourage his employees to serve customers better. The Moment of Truth is nothing more than the precise moment when a customer/consumer comes into contact with any sector of the company, demanding the fulfillment of a desire or need and, based on that contact, forming his own opinion of the quality offered by that company.
According to Carlzon, around 50,000 moments of truth occur at SAS in a single day. A moment of truth occurs when the passenger checks in (in airlines, this happens when the passenger hands over luggage at the company's desk and receives a boarding pass) and at many other times, such as when the passenger takes a seat in the aircraft. Demands and requests occur at many moments, among others when someone calls for information on schedules, stopovers, prices, and flight frequencies, or even during the flight, when a passenger asks a flight attendant for something. It is in those situations that satisfaction, or the lack of it, arises, and therefore in those moments that the consumer/customer is enchanted, or not, by the company. If passengers are happy with the way their demands are met, they will probably become loyal customers; moreover, they will become powerful agents of advertising and promotion for the company. On the other hand, unhappy clients will not only seek a competitor that offers more satisfaction, but will also act as anti-propaganda agents, damaging the image of the company before its target audience and the market in which it acts. We may thus conclude that marketing is not a mystique, nor a cake recipe that serves to solve every problem of achieving goals and objectives established beforehand. Its philosophy and practice are a powerful tool which, when used in the correct and proper way, helps administrators, within their own conditions and possibilities, to promote enchantment. As a logical and natural consequence, when an organization uses marketing it tends, in times as extremely competitive as the present, to ensure its existence with a real opportunity for growth. In day-to-day practice, achieving that delight is in most cases not an easy task, but it must be one of the basic and fundamental goals of a company's performance, regardless of its scope of activities, its mission, or the objectives it proposes.

10. Conclusion

We may agree, then, that marketing is not a recipe that comes ready-made to the manager. The manager must find the best way to use the theory, deciding which are the main, most useful, and most important data that can support his activities. By analyzing and using these data, and knowing how to manage and work on their basis, he will probably build a much better relationship with customers and clients, enchanting these people and attaining success. There is no guarantee that acting in this way will assure success, but there is no doubt that it is an alternative with a great possibility of success, and one in which the company will not have to invest large resources in promotion to acquire new clients: if the company knows what a client expects (based on information), it will have lower operational costs, and the promotion will be made by its satisfied and enchanted clients.
References

Albrecht, K (1998). Revolução nos serviços, 5 ed. São Paulo: Pioneira.
Anton, J (2000). The past, present and future of customer access centers. International Journal of Service Industry Management, 11(2), 120–130.
Bagozzi, RP (1995). Reflections on relationship marketing in consumer markets. Journal of the Academy of Marketing Science, 23(4), 272–277.
Carlzon, J (1990). A Hora da verdade. Rio de Janeiro: Cop Editora.
Chang, ZY and LH Huang (2000). Quality deployment for the management of customer calls. Managing Service Quality, 10(2).
Cobra, MHN (1985). Marketing Básico — uma perspectiva brasileira. São Paulo: Atlas.
Cobra, MHN (2000). Estratégias em marketing de serviços. São Paulo: Cobra Editora e Marketing.
Cravens, DW and NF Piercy (2008). Marketing estratégico. São Paulo: McGraw-Hill (translation of the 8th edition).
Cronin, JJ, Jr, MK Brady and GTM Hult (2000). Assessing the effects of quality, value, and customer satisfaction on customer behavioral intentions in service environments. Journal of Retailing, 76(2), 193–218.
Day, GS (2000). Managing market relationships. Journal of the Academy of Marketing Science, 28(1), 24–30.
Day, GS (2001). A empresa orientada para o mercado: compreender, atrair e manter clientes valiosos. Porto Alegre: Bookman.
Dean, AM (2002). Service quality in call centres: Implications for customer loyalty. Managing Service Quality, 12(6), 414–423.
Engel, JF, RD Blackwell and PW Miniard (1995). Consumer Behavior, 8th Edn. Fort Worth: The Dryden Press.
Gans, N, G Koole and A Mandelbaum (2003). Telephone call centers: Tutorial, review, and research prospects. Manufacturing and Service Operations Management, 5(2).
Gardner, D (1999). CRM gains ground as dynamic e-business app. InfoWorld, 18 Oct, 21(42), 6–8.
Grönroos, C (1990). Service Management and Marketing: Managing the Moments of Truth in Service Competition. Massachusetts: Lexington Books.
Grönroos, C (1996). Relationship marketing: Strategic and tactical implications. Management Decision, 34(3), 114–135.
Grönroos, C (2000). Relationship marketing: The Nordic school perspective. In Handbook of Relationship Marketing, Sheth, JN and A Parvatiyar (eds.). USA: Sage Publications.
Grönroos, C (2003). Marketing: gerenciamento e serviços. Rio de Janeiro: Campus.
Guenzi, P and O Pelloni (2004). The impact of interpersonal relationships on customer satisfaction and loyalty to the service provider. International Journal of Service Industry Management, 15(4).
Gummesson, E (2005). Marketing de Relacionamento Total, 2nd Edn. Porto Alegre: Bookman.
Harrison, TH (1998). Intranet, Data Warehouse: Ferramentas e técnicas para a utilização do data warehouse na intranet, p. 359. São Paulo: Siciliano.
Heskett, JL, WE Sasser Jr and LA Schlesinger (1997). The Service Profit Chain. New York: The Free Press.
Holdsworth, L and S Cartwright (2003). Empowerment, stress and satisfaction: An exploratory study of a call centre. Leadership and Organization Development Journal, 24(3), 131–140.
Hunt, SD, DB Arnett and S Madhavaram (2006). The explanatory foundations of relationship marketing theory. Journal of Business and Industrial Marketing, 21(2), 72–87.
Kantsperger, R and WH Kunz (2005). Managing overall service quality in customer care centers: Empirical findings of a multi-perspective approach. International Journal of Service Industry Management, 16(2).
Kotler, P and G Armstrong (1998). Princípios de marketing: análise, planejamento, implementação e controle, 5 ed. São Paulo: Atlas.
Levitt, T (1972). Production-line approach to service. Harvard Business Review, Sep–Oct.
Lin, BB and J Darling (1997). A processual analysis of customer service training. The Journal of Services Marketing, 11(3), 193–205.
Little, MM and AM Dean (2006). Links between service climate, employee commitment and employees' service quality capability. Managing Service Quality, 16(5), 460–476.
Liu, C-M (2002). The effects of promotional activities on brand decision in the cellular telephone industry. The Journal of Product and Brand Management, 11, 42–52.
Lovelock, CH (1991). Services Marketing: Text, Cases, and Readings, 2nd Edn. New Jersey: Prentice Hall.
Lovelock, C and L Wright (2002). Serviços: marketing e gestão. São Paulo: Saraiva.
Mason, C (2004). Perspectives in teaching analytically oriented CRM. AMA Faculty Consortium.
Maxham, JG, III (2001). Service recovery's influence on consumer satisfaction, positive word-of-mouth, and purchase intentions. Journal of Business Research, 54(1), 11–24.
McKenna, R (1997). Marketing de relacionamento: estratégias bem-sucedidas para a era do cliente. Rio de Janeiro: Editora Campus.
Miciak, A and M Desmarais (2001). Benchmarking service quality performance at business-to-business and business-to-consumer call centers. Journal of Business and Industrial Marketing, 16(5), 340–353.
Morgan, RM and SD Hunt (1994). The commitment-trust theory of relationship marketing. Journal of Marketing, 58(3), 20–38.
News & World Report Journal (2001).
Pace, M (2000). CRM. http://www.intermanagers.com.br.
Parasuraman, A, VA Zeithaml and LL Berry (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(Spring), 12–41.
Parvatiyar, A and JN Sheth (2000). The domain and conceptual foundations of relationship marketing. In Handbook of Relationship Marketing, Sheth, JN and A Parvatiyar (eds.). USA: Sage Publications.
Peppers and Rogers Group do Brasil (2001). CRM Series — Call Center One to One. São Paulo: Makron Books.
Peppers, D and M Rogers (2000). Marketing 1 to 1: um guia executivo para implantar estratégias de customer relationship management. Peppers and Rogers Group do Brasil.
Rabia, S (2008). Linha ocupada! Cliente livre. Doctoral thesis, Escola de Administração de Empresas de São Paulo, Fundação Getúlio Vargas (FGV-EAESP), São Paulo.
Rapp, S and TL Collins (1996). The New Maximarketing. New York: McGraw-Hill.
Rohde, F (2005). Little decisions add up. Harvard Business Review, June.
Schmitt, BH (2004). Gestão da experiência do cliente: uma revolução no relacionamento com os consumidores. Porto Alegre: Bookman.
Serson, FM (2006). Estratégia de marketing para empresas de serviços. In Marketing: Estratégia e Valor, Sérgio R. (coord.). São Paulo: Editora Saraiva.
Sheth, JN and A Parvatiyar (2000). The evolution of relationship marketing. In Handbook of Relationship Marketing, Sheth, JN and A Parvatiyar (eds.). London: Sage Publications.
Sterne, J (2000). Customer Service on the Internet. New York: John Wiley & Sons.
Swift, R (2001). CRM. Customer relationship management: o revolucionário marketing de relacionamento com o cliente. Rio de Janeiro: Ed. Campus.
Taylor, SA, A Sharland, J Cronin Jr and W Bullard (1993). Recreational service quality in the international setting. International Journal of Service Industry Management, 4(4), 68–86.
Thompson, B (2005). The loyalty connection: Secrets to customer retention and increased profits. CRM Guru, March 2005. Available at: http://www.techworld.com/cmsdata/whitepapers/4206/The%20Loyalty%20Connection%20-%20Cust%20retention.pdf [accessed 12 Oct. 2007].
Trepper, C (2000). Estratégias de e-commerce, p. 319. Rio de Janeiro: Campus.
Truell, AD (2003). Use of Internet tools for survey research. Information Technology, Learning, and Performance Journal, 21(1), 31. Morehead, KY: Morehead State University.
Turban, E, E McLean and J Wetherbe (1996). Information Technology for Management, p. 801. New York: John Wiley & Sons.
Vargo, SL and RF Lusch (2004). Evolving to a new dominant logic for marketing. Journal of Marketing, 68(1), 1–17.
Vavra, TG (1993). Marketing de Relacionamento: After Marketing. São Paulo: Atlas.
Whitehead, D (1999). Data warehousing: Winning the loyalty game. Telecommunications Online, August 1999. Available at: http://www.telecommagazine.com/issues/200003/tcs/churn.html [accessed 16 May 2007].
Winer, RS (2001). A framework for customer relationship management. California Management Review, 43(4), 89–105.
Zeithaml, VA, LL Berry and A Parasuraman (1988). Communication and control processes in the delivery of service quality. In Services Marketing: Text, Cases, and Readings, Lovelock, CH (ed.), 2nd Edn., pp. 406–423. New Jersey: Prentice Hall.
Zeithaml, VA and MJ Bitner (2003). Marketing de serviços: A Empresa com Foco no Cliente, 1st Edn. Porto Alegre: Bookman.
Biographical Note

Fernando M. Serson is the coordinator of the CRM Studies Area within GVCENPRO (Center for Studies of Business Communication). He is the creator and one of the coordinators of the "SQA" (Quality in Service and Customer Attention) certificate within GVCENPRO. He is also a director of QUES: Quality and Excellence in Services, a consulting company in marketing, services, strategic marketing and planning, and the development and implementation of CRM strategy (www.ques.com.br). He
received his PhD in Business Administration, with an emphasis on Operations, from FGV-EAESP in 2006, and his MSc in Business Administration, with an emphasis on Marketing, from FGV-EAESP in 1998. In 1995, he obtained a bachelor's degree in law studies at the Universidade de São Paulo, and in 1993 a bachelor's degree in Business Administration, with a specialization in Marketing, at FGV-EAESP. The following awards have been conferred on him: (i) the Unibanco University performance award, 1992; (ii) the Academic award in the study of Business Administration, Regional Council of Administration, 1992; and (iii) the "City-Award" prize for being the best student of the 65th class of Business Administration at FGV-EAESP, 1993.
Chapter 24
Transfer of Business and Information Management Systems: Issues and Challenges

R. NAT NATARAJAN
College of Business, Tennessee Technological University, 407A Johnson Hall, Box 5022, Cookeville, TN 38505, USA
[email protected]
Globalization and the emergence of the e-economy have created huge opportunities for businesses to grow and expand the scale and scope of their operations. Taking advantage of these opportunities, many businesses have responded by transferring or replicating their management systems and processes in the other locations and markets where they are expanding. Moreover, there is a recent trend in business to develop and deploy systems company-wide and even across supply chains: Enterprise Resource Planning systems, quality management systems, and process improvement systems like Six Sigma and Lean Production serve as examples. The goal is to realize the economic benefits of standardization and harmonization across the board in the enterprise. There are a number of difficulties with the approaches companies have used for this purpose. For instance, it may not be possible, or even desirable, to prevent the divergence between identical systems that begin to evolve in different directions; identical processes and procedures, when replicated, can lead to different practices and outcomes. On the other hand, there have been other instances (e.g., the successful replication of lean production systems) which have resulted in similar outcomes in different organizations and situations. This chapter identifies and addresses such issues and challenges. It examines, based on examples and cases, different approaches for transferring management systems, processes, and practices, and their effectiveness.

Keywords: Transfer of best practices; transfer of management systems; replication of processes.
1. Introduction

A number of factors and developments have contributed to the transfer of management systems, processes, and practices, a process now underway on a large scale in organizations worldwide. The drivers include mergers and acquisitions, globalization, joint ventures, strategic alliances and partnerships, outsourcing, and supply chain management. Mergers and acquisitions have become one of the fastest and most popular methods for growth (Martinez, 2005); companies like CISCO have grown by acquiring smaller companies. Business opportunities created by globalization have led to investment in plants and even
R&D facilities overseas, either to serve the markets in those locations or to serve as export platforms. The trend toward outsourcing has often meant transferring processes and systems to outside entities in the value chain; for instance, General Electric (GE) requires all of its subcontractors to use the Six Sigma methodology. Strategic alliances and partnerships often call for greater integration of the systems and processes of the partners (Haigh, 1992). One of the raisons d'être of, and motivations for, these alliances is the opportunity to learn about the systems, processes, and best practices of each partner. On some occasions, a crisis faced by the company, e.g., Ford in 1981, has acted as the catalyst for transferring such systems (Womack et al., 1991). Each of the situations described above necessitates the transfer of management systems and processes and the spread of practices. For example, the acquisition of companies with different information systems creates the need to standardize and harmonize these systems; Enterprise Resource Planning (ERP) systems are often implemented to provide the necessary standardization. Likewise, there is a push to implement and standardize management systems such as ISO 9000, ISO 14000 and, for public companies listed on US stock exchanges, systems for complying with the requirements of the Sarbanes-Oxley Act, in the different operating units of a company. Transfer and standardization of processes and practices are required in the case of global expansion as well. In one sense, this is an age-old problem dating back to the days of the East India Company and the British South Africa Company of Imperial Britain operating in the colonies (Litvin, 2003). Three centuries later, in a curious reversal of history, companies from countries like China, which were once under colonial domination, are facing a similar problem (Zhang and Edwards, 2007). This issue is discussed in another section of this chapter. In another context, models for performance excellence such as the Baldrige National Quality Award criteria emphasize the deployment of methods for performance improvement across organizational units (NIST, 2008); even companies that have won the Baldrige award find this to be a challenge (Brown and Duguid, 2000a). Given these momentous developments in the business environment, it is clear that managers will be faced with the challenges of transferring business and information management systems for some time to come.

2. Literature Review

A review of the literature indicates that scholars have identified different types of barriers to the diffusion of best practices. Some provide a theoretical perspective on the impediments: for instance, Simard and Rice (2007) analyze various barriers related to the organizational context, the nature of the diffusion process, and management structures and policies. Schonberger (2008) has addressed the issues of transferring best practices from a practitioner's standpoint, in the more limited context of process improvement using lean Six Sigma tools. Natarajan (2006) has discussed the challenges and opportunities in transferring best practices from
other industries to healthcare. As far as the transfer of business and management systems is concerned, the transfer of Toyota's lean production system has received a great deal of attention. For instance, Mann (2005) focuses on the cultural transformation necessary for successfully implementing lean production, while Liker and Meier (2006) provide a hands-on approach to implementing Toyota's lean production system. Smith (2005) has discussed the difficulties associated with replicating an ISO quality management system. The unique and fascinating challenges of transferring knowledge- and innovation-intensive systems using the new paradigm of the open innovation model have been the subject of the writings of Chesbrough (2006) and Huston and Sakkab (2006). The relevance of the Toyota production system to healthcare, and how to replicate its successful outcomes in the healthcare setting, have been the concerns of Spear (2005) and Graban (2008).

3. Methodology

This chapter does not attempt to discuss the categories and myriad factors that can impede the successful transfer of best practices, nor does it address the technical issues concerning, say, the implementation of information systems like ERP. Instead, actual examples and business practices serve to focus on managerial issues and act as sources of insights and lessons learned: practice, not theory, is the starting point. The scope of this chapter is also broader than the transfer of best practices. Its objective is to illustrate, critique, and evaluate, through actual case studies and examples, the overall approaches organizations have used in transferring not only best practices but also business and information management systems, which can involve transforming the organizational culture as well.

4. The Approaches

The approaches that companies typically use in transferring management systems can be classified into one of three categories. First, there are companies that use what can be labeled the one-size-fits-all approach. While this approach has generally proved ineffective, in some special cases it has turned out to be successful. Next, there are companies that try to use a customized approach, recognizing that the transfer process has to take into account variations in organizational cultures, work practices, infrastructures, and a whole host of local factors; for the purposes of this chapter, it will be called the nuanced approach. It must be noted that just because these firms are aware of the need to customize their approach does not imply that they have figured out how to do so effectively. Finally, there are companies that have been able to develop an approach that has resulted in effective transfer. In this section, the issues and challenges associated with these three approaches are brought out through examples, and the lessons to be learned from these illustrative examples are discussed.
4.1. One-Size-Fits-All

Wal-Mart, in the 1990s, learned the hard way that the world is not ready for doing business the Wal-Mart way. For a while, from product selection to supplier relations, it got everything wrong in Argentina and Brazil as it tried to replicate the business practices and processes that had worked in the United States (Friedland and Lee, 1997). Initially, the merchandise selection in stores included live trout, cordless tools, and American footballs! It took a while for Wal-Mart to learn that South Americans are not tool-loving do-it-yourselfers like many of its customers in the United States, and that soccer is the most popular sport down there. Wal-Mart's attempt to apply its aggressive pricing policies to South American suppliers backfired, as some suppliers refused to sell to it. Wal-Mart is not the only corporate giant to overlook variations in markets and business practices. Lucent Technologies' reorganization of its three divisions into 11 business units was supposed to mimic independent entrepreneurial start-ups that were close to their markets (Christensen and Raynor, 2003). Now, who could be against increased autonomy, pushing decision-making down, flattening the hierarchy, and fostering innovation, the characteristics of so many successful start-ups? Yet it was precisely the wrong prescription for Lucent, because Lucent was not selling modular, self-contained products: its customers operated complex networks which required coordinated solutions, and its telecommunications equipment had to be sold and serviced in a way that called for interdependence — and not independence — among its employees. By following generic advice and not paying attention to its unique circumstances, Lucent achieved quite the opposite results. In another example, a US manufacturer bought a manufacturing facility in the United Kingdom in order to have a European presence. Both facilities designed, manufactured, assembled, and serviced similar equipment for the same international customer. As the two plants started to work on several common projects, it seemed logical that the quality systems of the US plant should be replicated in the UK plant (Lore, 2004), and a quality coordinator was hired to create an identical system in the UK plant. This effort failed for the following reasons: (1) the international customer had different quality standards in the United States and the United Kingdom: in the United States it asked for QS-9000/TE, while in Europe, which lagged in automotive standards, ISO 9001 was sufficient; (2) the registration infrastructures were different, causing the two systems to diverge; and (3) the customer's purchasing organization in the United States wanted the US facility to also comply with the ISO 14000 environmental standards, which introduced further variation into the two quality systems. Cultural differences added still more variation. Eventually, the quality system in the UK facility was allowed to evolve on its own. In a different context, Sitkin and Stickel describe how an attempt by managers to implement Total Quality Management (TQM) practices uniformly across
a company backfired (Sitkin and Stickel, 1996). The approach did not address the different perceptions of quality among the different groups. These groups did not want to be judged by quality standards set by others. The result was the spread of distrust and division.

A legitimate question is, "Are there situations where this approach might actually be effective?" The answer is in the affirmative, but with some caveats. There are well-defined business processes, such as shipping and receiving, billing, and order entry, that have to be performed in any organization. These processes have measurable inputs and outputs and a clear relationship between the inputs and outputs. Such processes can be transferred to other locations, with the people performing the process considered interchangeable. The recent wave of business process outsourcing has primarily targeted such processes for transfer to low-wage locations like India (The Economist, 2004). But even for such processes, identical outcomes are not guaranteed in different locations because of variations in the practice and in the social and cultural setting of the process. Replicating the process is not the same as replicating the behaviors.

It is instructive to examine two instances of successful transfer, with the one-size-fits-all approach, of not just processes but an entire production and business system. The first is the case of McDonald's, which has been a pioneer in replicating its service process and technology in thousands of its restaurants, the majority of which are owned and operated by franchisees. Starting in the 1970s with its stores in the United States, it has been able to deploy this approach in its overseas locations as well. Successful replication has remained a critical centerpiece of McDonald's growth and expansion strategy through the years, even as it has updated its product offerings and technology to keep up with changes in the marketplace and advances in food preparation science.

The case of Intel, the second example, merits analysis and discussion in its own right. It also serves to make the point that the effectiveness of this approach depends on the fit with Intel's business strategy and the company's particular way of implementing continuous improvement. This is Intel's "Copy Exactly" methodology. Intel introduced it in the 1980s and later refined it through the 1990s (Intel, 2008). It was developed to address a critical problem that occurs in semiconductor manufacturing when ramping up high-volume manufacturing of a new line of microprocessors. The equipment and the process for high-volume production have to be adapted from the process and equipment used in the research and development stage, where the product is first produced. The process and the recipe can vary from one development site to another. This adaptation and modification result in delays in ramping up and in yield losses. Intel's solution to this problem was to duplicate everything from the development plant to the volume-manufacturing plant. The process devised at the development facility is designed not just for performance and reliability, but for high-volume production as well. To ensure this, production personnel and managers from high-volume facilities participate at the development
plant as a new process technology is created. Then this process is copied exactly at the high-volume manufacturing plant. In its early days, "Copy Exactly" was applied only to the technical system — only equipment and process parameters were copied. Later, it was expanded to include the managerial aspects as well. Now, everything at the development plant — the process flow, equipment set, suppliers, plumbing, manufacturing clean room, training methodologies, and even the bolts and the concrete foundation — is selected to meet high-volume needs, and then copied exactly to the high-volume plant. Intel has been able to decrease time-to-market and improve yields. Moreover, since all its factories are nearly identical in function, problem solving, learning, and sharing across the production network are facilitated. Solutions discovered at one site can be transferred and are likely to work at another site with a high degree of confidence. This strategy has also contributed to improvements in labor productivity and increases in revenues. Encouraged by this success, Intel has extended "Copy Exactly" to maintenance practices and hopes to realize improved quality and reduced cycle time (Mutschler, 2005).

One implication of this strategy is that because the recipe and the process are frozen before they are copied exactly, opportunities for improvements or innovations in the recipe are ruled out. They are put on hold and have to wait until high yields in volume production have been consistently achieved. Terwiesch and Xu (2004) have argued through an analytical model that trading off this postponement of improvements or recipe changes against current gains in yields due to standardization is optimal under certain conditions that firms like Intel face. Another implication is that to accommodate the requirements of copying exactly, Intel may be forced to acquire and hang on to outdated equipment (Terwiesch and Xu, 2004). It is worth noting that "Copy Exactly" has not become the standard practice in the semiconductor industry. Advanced Micro Devices, a rival of Intel, does not follow this strategy, preferring to innovate on the production floor (Pfeiffer, 2003). It terms its approach "copy intelligently!"
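The trade-off that Terwiesch and Xu analyze can be made concrete with a stylized calculation. The sketch below is a toy illustration under assumed numbers, not a reproduction of their model: freezing the recipe ("copy exactly") carries the development-line yield over intact, while allowing floor-level changes ("copy intelligently") starts at a lower transferred yield but improves each period.

```python
# Toy illustration of the copy-exactly trade-off (hypothetical numbers;
# a simplification, not Terwiesch and Xu's actual model).

def cumulative_good_output(start_yield, learning_per_period, periods,
                           units_started_per_period=1000):
    """Total good units produced over the ramp as yield improves each period."""
    total, y = 0.0, start_yield
    for _ in range(periods):
        total += units_started_per_period * y
        y = min(1.0, y + learning_per_period)
    return total

# "Copy exactly": the frozen recipe transfers the development yield unchanged.
copy_exactly = cumulative_good_output(0.80, 0.00, periods=12)

# "Copy intelligently": changes cost yield up front but enable learning.
innovate = cumulative_good_output(0.65, 0.02, periods=12)

print(f"Copy exactly:      {copy_exactly:8,.0f} good units")  # 9,600
print(f"Innovate on floor: {innovate:8,.0f} good units")      # 9,120
```

With these particular numbers, the frozen recipe wins over a short twelve-period ramp, while a faster learning rate or a longer horizon reverses the ranking; characterizing when each regime is optimal is precisely what Terwiesch and Xu's model does.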
In contrast, McDonald's production system and practices have been widely copied in the fast-food industry and even beyond; e.g., its production processes have been studied and benchmarked by Ford.

Why did this approach work for McDonald's and Intel but not for, say, Wal-Mart? In the cases of McDonald's and Intel, the approach was consistent with their overall business strategy of competing by providing products with highly uniform attributes (for Intel) and a highly uniform customer experience (for McDonald's), independent of the location of the production system. The other critical success factor is painstaking attention to detail. It is said that McDonald's applied rocket science to the fast-food business, which is considered a low-tech industry. It focused on the numerous process variables that had to be controlled to achieve standardization. One can say that Intel, despite being in a rocket-science type of industry, was actually paying homage to McDonald's by following the strategy of replication at the minutest level of detail. When McDonald's opened its first store in 1990 in the former Soviet Union, it flew in all the supplies at great cost from outside locations because it could not find suppliers in the Soviet Union who could meet its stringent requirements. It was as if a typical McDonald's restaurant had been physically transplanted to Moscow, but was tethered to a supply chain outside the country, without any local linkages. As a parallel, Pfeiffer (2003) reports: "If a pipe that delivers chemicals to one of the chipmaking machines is 20 feet longer in one factory than in another, Intel will make it match (it will even match the number of bends in the pipe)." If water quality differs among the factories, then "our purification system is meant to eliminate that difference," says Brian Harrison, Intel's vice president in charge of manufacturing.

How do these companies deal with the people element, which can be a major source of variation in different locations? McDonald's uses its training center — sometimes called "hamburger university" — to thoroughly train its employees and franchisees in store operation and achieve process standardization. Like Intel, it has also created learning laboratories where new food preparation and storage technologies and new product offerings are tested for their impact on business performance before their introduction in the restaurants (Operations Management, 2004). Intel, on the other hand, has greater opportunities for using automation to eliminate the human variable, but where automation is not feasible, it goes to extraordinary lengths to ensure that its employees have the same knowledge to perform the tasks in the same way. Intel shipped 300 New Mexico workers to Oregon, where they spent a year working side by side with the R&D engineers, learning everything they needed to know. They picked up what Bruce Sohn, a manager at the New Mexico plant, calls "tribal knowledge" — information that Intel's most experienced employees know but may not have written down. "We want to copy everything — even the subtle things we may not even acknowledge that we do" (Pfeiffer, 2003).

The moral of the story is that one-size-fits-all can be made to work if there is a fit with the organization's strategy and if the organization is willing to commit a significant amount of resources to learning, measuring, and controlling a very large number of variables (technical, social/behavioral, and managerial) in the business. Clearly, neither of these conditions applied to Wal-Mart in South America.

4.2. Nuanced Approaches

Some companies are aware that the replication of systems has to address local factors and that systems cannot be transferred lock, stock, and barrel. For instance, ISO 14000 environmental management standards and certification have to be site-specific, as local laws and regulations for the environment are going to be different. This limits the opportunity for developing uniform procedures and documentation that apply across locations. But early awareness of such issues does not mean that companies have been effective in developing a customized approach. For example, when Hutchinson Fluid Transfer System (HFTS) started a new plant in Mexico, the products, the production process, and the customers for its US and Mexican plants were the
same. Therefore, it wanted to mirror the systems of its US plants. The objective was to create equivalent quality management systems (QMS) ensuring consistency in processes, common forms, and documentation. Initially, HFTS wanted the new Mexican plant to achieve this objective by developing the equivalent system on its own (Smith, 2005). But soon, resource issues (or rather, the lack of resources) cropped up, and there was a time constraint for certification created by a customer's requirement. Therefore, the corporate side had to intervene, and the decision was made to replicate rather than mirror the process. But local factors such as different registrars, the need for hiring and training, and the translation of documents into Spanish had to be addressed. Meaningful technical translations were a challenge — for instance, the translation software rendered "preventive maintenance" (p.m.) as "afternoon" in Spanish! The IT infrastructures were not the same either. Turnover of employees was another issue. Creating well-defined roles, differences in educational levels, instilling a sense of urgency, working until 7 pm with long lunch breaks, and many religious holidays were some of the other local factors to be addressed (Smith, 2005). Centralized decision-making for allocating shared resources dispersed across many locations, multitasking, and downsizing also created problems. The result was the creation of a customized system with variation, instead of an identical system.

Even with a customized approach, infrastructure, in both the physical and the organizational sense, can present significant barriers to transfer and standardization. In countries like India and China, the power supply is unreliable and power shortages are endemic. Companies trying to deploy just-in-time delivery processes in South America, China, or India will have to contend with poor road networks, ports, and logistical infrastructure. For instance, in Argentina, there are three different gauges for rail traffic, making even intra-modal movement of freight very cumbersome. A very striking example of infrastructure preventing the transfer of even simple practices comes from the field of medicine. Drug cocktails prescribed for HIV-AIDS patients, which proved to be effective in developed countries, are practically useless in countries like Botswana, which has a high incidence of the disease. This is because the cocktails have to be taken on exact time schedules, and people there cannot afford to wear watches (Thomas, 2005).

Organizational infrastructure matters, too. For instance, in the 1990s, Lincoln Electric, a Cleveland, US-based maker of welding equipment, acquired companies overseas and wanted to implement its piece-rate incentive system in those acquired units (Hastings, 1999). This system had proved to be very effective in improving productivity in its US operations and had attracted a very hardworking workforce with low turnover. The characteristics of the workforce in the acquired units turned out to be markedly different: those workers did not view the system as a motivator for increasing their earnings. Because this system was a key component of Lincoln's operations strategy, nonacceptance of it resulted in Lincoln's first loss in its 100-year history (Roberts, 2004).
Sometimes the critical issue is not one of customizing but of transferring certain aspects of the system without any deviation. The tricky part here, of course, is knowing what these aspects are. The experience of Ford in transferring systems from Mazda serves as a case in point (Haigh, 1992). During the early 1980s, Ford and Mazda established a plant in Hermosillo, Mexico, to produce the Mercury Tracer, which was exported to the United States. The plant was originally envisioned as a project to solve the foreign exchange deficit problem that Ford was experiencing in Mexico. Later, the venture was considered by Ford as a laboratory in which it could learn the methods of lean production from Mazda. It involved the transfer of the technical (product design, tooling, and equipment) and the management (quality and human resource management) systems. The plant was completed on time and on budget. It also turned out to be one of the most productive plants and achieved world-class levels of quality (Womack et al., 1991).

Hermosillo was a greenfield site with inexperienced Mexican workers. But this was viewed positively, since a young workforce is more likely to be eager to learn. In the established plants in the United States, labor policies such as too many job classifications had made it difficult to develop quality and training systems. In the case of Mazda's body shop technology and quality systems, Ford managers believed that they should not pick and choose elements of the technology and systems to be transferred, but that the entire system had to be transferred as a package deal. The objective was to learn and apply the know-how and not worry about the know-why. It was felt that certain aspects of the technology were tacit and could not be codified and explained. It is worth noting that the plant manager felt that problems occurred whenever Ford deviated from the equipment and accessories specified by Mazda.

But this approach was not used when it came to the parts that were to be produced by Mexican suppliers. About 70% of the parts had to be imported from Japan (which increased the cost of the vehicle), and the rest had to be purchased from Mexican suppliers to satisfy local labor content laws. The drawings of the parts were obtained from Japanese suppliers and given to the Mexican suppliers, who then designed the parts, the equipment, and the tooling on their own. The design, the process, and the tooling were not licensed from the Japanese suppliers and transferred as one integrated system to the Mexican suppliers; the reason was that Ford wanted to reduce costs. This turned out to be shortsighted, as problems arose with the quality of the Mexican parts, and technical assistance from Mazda was later needed to fix them.

In deploying systems across different locations, a major issue is often identifying what should be global and what should be left to local autonomy. Johnson Controls Government Systems and Services (GSS) Division had to address this issue when its multiple sites were seeking ISO 9000 certification (Mercier, 2002). Each site manager, thinking their site had unique requirements, wanted to implement ISO in their own way. Soon GSS had different ISO certificates for each location, different registrars, different audit programs, different consultants, separate ISO 14000 (environmental management) programs, and different procedures for the
same process. Critical support processes such as human resources, finance, safety, and legal were not included in the internal audit program. The challenge here was to identify the common (global) features and standards for certification that could be applied across the entire organization. GSS addressed this at a more fundamental level by designing a global business operating system (BOS). The approach was to deploy standard business practices rather than to deploy ISO systems. The key was designing BOS around the business and not around the ISO systems. The common global features included providing templates for process mapping, process documentation, and internal auditing, and making English the language to be used at all sites for ISO certification purposes. If a country's management representative wanted to use a different language, then they were responsible for localizing the global BOS model. The role of internal auditors was changed to one of process facilitation. BOS provided a road map for each site to follow. All processes were maintained on the company intranet, available to all operations worldwide. BOS was deployed to locations in 13 countries on three continents. GSS was able to realize significant benefits in terms of cost avoidance, reduced time for certification, and increased flexibility (Mercier, 2002).

4.3. DNA and Culture

Toyota's experience in transferring its famous lean production system and practices within its own operations and to the operations of its suppliers has been a very successful one. It also offers some valuable lessons. The elements that constitute this system — such as empowered work groups, total quality control, and elimination of waste through continuous improvement (kaizen) — and the development and refinement of this system through the 1950s and 1960s in the plants at Toyota City in Japan have been well chronicled in many books and articles (Hall, 1983; Shingo, 1983; Womack et al., 1991; Womack and Jones, 1996; Schonberger, 1996; Monden, 1998). As the Toyota Production System (TPS) — and its cousin, the Toyota Product Development System — have become the de facto standards, manufacturers worldwide, in the automotive and non-automotive industries alike, want to replicate this system. Consequently, issues relating to the deployment of this system are now receiving more attention. Womack and Jones (1996) have outlined the steps that lead to the transformation into a lean system. Spear and Bowen (1999) have captured the tacit knowledge that underlies TPS in terms of rules they label as its DNA. Liker (2003) has developed a set of 14 fundamental principles that capture the essence of the system. Understanding these principles and the DNA is essential to any effort to transfer such a system. But beyond that, what really matters in successful deployment is the customer-focused culture that Toyota has created and fostered. This culture is the critical enabler in replicating the TPS (Mann, 2005). While the rest of the industry focuses on the aspects of the product and technologies, Toyota's people talk about "The Toyota Way" and about customers. While its mission reads
like any other company mission statement, the key difference is that it is actually lived by its employees on different continents. According to Jim Press, the boss of Toyota's sales in North America, "The Toyota culture is inside all of us. Toyota is a customer's company. Mrs. Jones is our customer; she is my boss. Everything is done to make Mrs. Jones's life better. We all work for Mrs. Jones" (The Economist, 2005a, p. 66).

Toyota and Honda (which, following Toyota, is also a lean automaker) have been able to transfer this system to their suppliers as well, but not by issuing edicts to those suppliers. Taiichi Ohno, the father of TPS, believed that achieving the parent company's business performance by bullying its suppliers is totally alien to the spirit of TPS. Toyota and Honda spend a lot of time learning about their suppliers and investing in building relationships (Liker and Choi, 2004). They are very demanding but also very supportive of their suppliers. Although their suppliers view them as their toughest customers, in a benchmark OEM (Original Equipment Manufacturer) survey, suppliers rated them as the most preferred customers to work with. The strategy and approach Toyota and Honda use to develop suppliers and transfer their lean production systems have been described in the literature (Bounds, 1996; Bounds et al., 1996; Liker and Choi, 2004). Their approaches are characterized by an emphasis not on information technology but on people, relationships, and learning — that is, on the elements that affect organizational culture. The success Toyota has had with its approach was evident when the plant of Aisin Seiki, its sole supplier for a brake component, burnt down in 1997. The entire supplier network — some of the suppliers were competing with each other — responded with cooperation and speed to solve the problem. Toyota provided centralized guidance but not centralized control during the recovery effort. The right culture had been created, and the DNA had been implanted. It had replicated itself in the supplier network, which had become a self-organizing entity. In the crisis, the network responded like a group of empowered employees in TPS who respond to a problem when an employee pulls the cord to stop the line. In this case, Toyota had pulled the cord, and the suppliers did the rest (Nishiguchi and Beaudet, 1999).

It is worth noting that Toyota has allowed one of its major competitors access to the workings of TPS. In 1984, General Motors (GM) reopened its plant in Fremont, California (closed in 1981 due to labor disputes) as the New United Motor Manufacturing Inc. (NUMMI) plant. It was a joint-venture production-sharing agreement between GM and Toyota that provided GM with the opportunity to observe and learn at close quarters the principles of TPS. It was a successful experience for GM, which bought into the Toyota culture. "United" and "New" in NUMMI stood for a united GM management and labor and new relations between them (Womack et al., 1991). However, GM has found it very difficult to transplant what it learned at NUMMI to other plants in the GM system. There are other examples where culture change seems to play a crucial role in implementing elements of TPS (Tonkin, 2005). Even one aspect of it, like the pull system using kanban cards, involves culture change (Cook, 2005).
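To give a sense of the mechanics behind a kanban-driven pull system, the number of cards circulating between a supplying and a consuming station is commonly sized with the standard textbook rule (the numbers below are hypothetical illustrations, not Toyota's figures):

\[
N \;=\; \left\lceil \frac{d \, L \,(1+\alpha)}{c} \right\rceil,
\]

where d is the demand rate, L is the replenishment lead time, α is a safety factor, and c is the container size. For example, with d = 200 parts per hour, L = 0.5 hours, α = 0.1, and c = 25 parts per container, N = ⌈4.4⌉ = 5 cards. The arithmetic is trivial; what the evidence above suggests is that the hard part is the behavioral discipline the cards encode (no production and no conveyance without a card), which is why even this single element entails culture change.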
The difficulties organizations have in replicating TPS can be traced to its crucial characteristics. First, it is a coherent system, i.e., all the elements in the system are logically interconnected. For instance, to reduce waste in the form of inventory, production must be in smaller lots; to enable economical production in smaller lots, set-up time on equipment must be reduced; and so on. This means a company cannot cherry-pick elements of it for implementation. Implementing bits and pieces of the system, e.g., setting up only a pull system of production, misses the whole point and is counterproductive. It is also to be noted that the scope of TPS goes beyond production activities, encompassing other functions in the organization such as marketing and human resources. The other aspect is the fit between the system and the business environment in which it was developed. Toyota developed this system in the 1950s and 1960s, when it was a provincial and not a global company. All its operations were centered at Toyota City. Most of its suppliers, who were all Japanese, had their plants in close proximity. This enabled face-to-face interaction with the suppliers on a daily basis and made it easier to deploy these practices. It also helped in developing trust-based relationships, i.e., relational contracts, which are discussed later.

Dell, a company as successful and as pioneering in changing the way business is done as Toyota, follows a similar approach: a combination of discovering its DNA and then creating the Dell culture in which that DNA is embedded. For Dell, the DNA lies in its execution and operational excellence. Information is viewed as its most important management tool. Yet Dell has not invested in systems like ERP, because ERP does not support Dell's goal of speedy execution. Its social contract with its employees has been articulated as the "Soul of Dell," with customer focus, open and direct communications, good global corporate citizenship, and having fun winning as its pillars (Dell and Rollins, 2005). As Dell grows and adds employees in Asia and Europe, it wants them to imbibe this culture. In 2001, a winning culture became a strategic initiative for Dell; it was deployed through the same three objectives and supported by the same metrics worldwide. Dell, like Toyota, invests in people and leadership development.

The term culture here is to be interpreted as patterns of collective behavior in the organization. The meshing or clashing of corporate cultures has played an important role in making or breaking mergers and acquisitions, validating a saying popular in business that an organization's culture eats its strategy for breakfast every time! When companies with dominant cultures merge, e.g., Daimler and Chrysler, a common culture may not emerge. This becomes a barrier to implementing common systems and realizing the purported gains from the merger or the acquisition. Mixing and matching what might seem to be the best aspects of the two cultures and organizations may actually lead to the least desirable outcomes (Roberts, 2004). In organizations with strong cultures, like Pal's Sudden Service — a highly profitable regional fast-food business which has won the Baldrige Award in the small business category — a key factor to be considered in expanding the number of its restaurants is the possible dilution of culture (Crosby, 2008). As long as
that culture is preserved, it becomes much easier to transfer the management systems to the new stores.

What distinguishes the culture at Toyota or Dell from other organizations with strong cultures is that these patterns of behavior are expressions of relational or implicit contracts. Organizational theorists and sociologists would call them trust relationships. They differ from classical contracts in the sense that employee behavior in all possible contingencies and events is not specified in advance. Many of the critical terms of the contract are implicit and hence cannot be enforced through legal mechanisms. What ensures enforcement of these terms are reciprocity and the fact that interactions between the parties will be repeated (Kay, 1995). Relational contracts do carry the risk of short-term opportunistic behavior that furthers individual rather than collective interest. But they are conducive to long-term relationships, and indeed become long-term relationships. They are particularly useful when rapid information flows and flexible responses are required in the relationships, i.e., they enable learning and continuous improvement, and this can become a source of competitive advantage. One reason other organizations find it difficult to replicate or transfer systems and practices is that they are not able to replicate the internal and external architecture based on relational contracts. That is why Dell is not worried about its employees being poached by other organizations, and why Toyota is willing to share the secrets of TPS with major competitors like GM and even with companies that do not supply Toyota but are suppliers to its competitors (Bounds, 1996)! This architecture is the foundation on which systems and processes can be implemented. Toyota's relationships with its suppliers are based on relational contracts: while it builds long-term relationships, Toyota does not award long-term classical contracts to its suppliers, who get only annual contracts (Bounds, 1996).

Relational contracts often have social roots (Kay, 1995). For instance, Toyota President Mr. Fujio Cho thinks, "Something of the unique Toyota culture comes from the fact that the company grew up in one place, Toyota City, a 30-minute drive from Nagoya in central Japan, where the company had four assembly plants surrounded by the factories of suppliers. In this provincial, originally rural setting, Toyota workers in the early days would often have small plots of land that they tended after their shift. The fact that Toyota managers and their suppliers see each other every day makes for a sort of hothouse culture — rather like Silicon Valley in its early days" (The Economist, 2005a, p. 66).

Ironically, while Toyota has rapidly grown into a giant global automaker — the most profitable in the industry — its inventory performance, a key indicator of leanness, has slipped considerably, albeit from a very high level. Its inventory turns declined from a high of 22.9 in 1993 to 10.1 in 2006, while Honda's steadily improved from 1979 to 2000 (Schonberger, 2008). Toyota seems to be violating its own principles, as quality problems have been reported. Schonberger (2008), an expert on world-class manufacturing, hypothesizes that Toyota's development of its human talent has not kept up with its accelerating growth.
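To put those figures in perspective, using the standard definition of inventory turnover (annual cost of goods sold divided by average inventory), days of inventory on hand are approximately 365 divided by the turn ratio:

\[
\frac{365}{22.9} \approx 16 \text{ days (1993)}, \qquad \frac{365}{10.1} \approx 36 \text{ days (2006)}.
\]

In other words, Toyota went from holding roughly two weeks' worth of inventory to roughly five weeks' worth, which is the slippage Schonberger highlights.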
Also, the global nature of its growth clashes with its traditional insularity: the fit between TPS and the close proximity of its all-Japanese supplier base is unraveling. The global supply chain Toyota has to manage now does not lend itself easily to relational contracts. The lessons of TPS seem to have faded and to require re-emphasis; the company is finding it difficult to transmit the values and the system to its new generation of plants and workers. One of its manufacturing experts, Mr. Agata, regards his job as inculcating the virtues of the TPS in a younger generation (The Economist, 2005a, p. 66). Toyota does face huge challenges in replicating its system as it continues to expand by building factories worldwide, from Turkey to China to the Czech Republic. Dilution of its culture is a real possibility. As of 2005, it had 46 plants in 26 countries, design centers in California and France, and engineering centers in the Detroit area, Belgium, and Thailand. The increased complexity such expansion entails could make it a victim of its own success.

A related observation is that Dell finds itself in a similar situation after many years of growth. Now it is revamping its vaunted supply chain strategy, trying to replicate the supply chains of its competitors (Gilmore, 2008). Yet, just a few years ago, competitors were trying to replicate Dell's direct-selling business model and the supply chain strategy that supported its build-to-order products! New market conditions have forced the changes. Dell's products have become more like commodities, and it is selling more through retail channels and to price-sensitive customers in Asia. Cost has now become more important than speed and flexibility. An important lesson here is that the sustainability of systems like Toyota's and Dell's (which implies successfully propagating the underlying culture and transferring the system over time to a newer generation in the company) depends critically on a coherent ecological fit, in the business sense. The more that changes in business conditions, e.g., rapid global growth, disrupt such a fit, the more difficult intergenerational replication becomes. In the case of Dell, changes in product and market conditions have made intergenerational replication of the older systems downright irrelevant.

5. New Challenges

Organizations will continue to face new challenges in transferring business and information management systems as they pursue new strategies and new forms of collaborative relationships to stay competitive. This section highlights some of those new challenges.

5.1. R&D and Innovation

In recent years, a new paradigm for conducting R&D has emerged. Firms are entering into research partnerships with technology entrepreneurs, smaller firms, national laboratories, and universities. An advantage of this external collaboration is that innovation takes place not inside a hierarchy but in a network. With network
innovation, a company is able to access the bright ideas of thousands of scientists with diverse backgrounds who are not on its payroll as full-time employees. But to transfer knowledge systems effectively through mass participation and collaboration, organizations have to learn to leverage the "wisdom of crowds" effectively and efficiently. Procter and Gamble (P&G), IBM, and Intel are some of the leading and successful practitioners of network innovation. For P&G and IBM, it was a radical change from their very internally focused R&D structures. P&G has designed a network to implement its "connect and develop" model of open innovation (Huston and Sakkab, 2006). It consists of its key suppliers and technology entrepreneurs worldwide, who form a proprietary network, and technology intermediaries like Nine Sigma and InnoCentive, who are part of an open network. P&G has also created a process (developed through learning-by-doing) to manage that network. Its innovation success rate has more than doubled, and its R&D productivity has increased by 60% (Huston and Sakkab, 2006).

Concerns about leakage of proprietary knowledge and about protecting intellectual property favor a closed process for transferring knowledge. The advantages of leveraging a wider pool of expertise available outside favor making the process open. The process for transferring knowledge in the open innovation model has to address the paradox first pointed out by the economist Kenneth Arrow: by merely describing the know-how or technology, or talking about it to the other party, the current owner can end up revealing and transferring the technology without any compensation (Chesbrough, 2006)! Innovation intermediaries with a strong reputation, whom both parties can trust, may be a solution to this paradox (Chesbrough, 2006). These intermediaries will therefore play a key role as honest brokers — trusted third parties, like the patent office — in the open innovation process.

Knowledge-intensive and R&D-type processes require interpretation, sense-making, and judgment. The key difference here is the one between information and knowledge. Routine business processes involve mostly information transfer, and today's information and communication technologies enable this in very efficient and effective ways. But in processes that are fuzzy and ill-defined, knowledge and meaning become critical. They are a lot harder to transfer, because knowledge and meaning are context-dependent, and the same context cannot easily be recreated (Brown and Duguid, 2000b). R&D and innovation are beginning to be outsourced to overseas locations as well (Engardio and Einhorn, 2005). Organizations that are doing this will have to face the challenge of transferring ill-defined processes.

At the next level, in terms of increasing complexity and abstractness, we have the transfer of ethos, codes of conduct, and values. As higher education goes global, educational institutions are expanding their services globally by partnering with other institutions. For instance, Northwestern University's Kellogg Business School offers courses through local partners in Israel and Hong Kong. It controls the curriculum, inspects standards, and issues qualifications, but the actual teaching is outsourced and off-shored (The Economist, 2005b). It is clear that such transfer of educational and learning processes involves more than just having the same syllabi
and course content in different locations. The protection of the institution's reputation, brand, and other intangibles such as ethos is at stake here.

5.2. Emerging Economies

In emerging economies, the effective transfer and diffusion of best management practices from within the country and/or from abroad play an important role in fostering economic growth. According to a World Bank report on India (Goel et al., 2007), the output of the Indian economy could be as much as 4.8 times higher if all firms produced at the level of local best practice, i.e., if enterprises were to absorb and use the knowledge that already exists in the economy. The same authors conclude that since local best practice is probably lower than global best practice, India's economy could benefit even more if it were able to get all its firms to use techniques and knowledge closer to global best practice.

Transferring management systems to emerging economies has to take into account some unique local business requirements, such as quantum leaps in price performance, scalability to large numbers very quickly, and robustness to overcome poor physical, informational, and educational infrastructure (Prahalad, 2004). Aravind Eye Care System in India illustrates the importance of these factors. It performs more than 200,000 cataract operations a year at one-tenth of what they would cost in the United States, without sacrificing quality, which must include post-operative care (Prahalad, 2004). Sixty percent of its patients are operated on for free. These patients are mostly semi-literate people who have had no, or very limited, medical care of any type. Aravind has been doing this for about 20 years and has made profits. According to the founder of Aravind, his eye care delivery model is based on Henry Ford's assembly line and McDonald's service processes!

A related phenomenon is multinational corporations (MNCs) from emerging economies engaging in what is termed "reverse diffusion" — the process by which practices developed by foreign subsidiaries are captured by the center of the firm and diffused to other subsidiaries. The subsidiaries may be operating in economically advanced economies while belonging to an MNC from an emerging economy like China. Zhang and Edwards (2007) have analyzed the reverse diffusion strategies of Chinese state-owned MNCs with subsidiaries in the United Kingdom. These MNCs were new entrants to global markets with no experience in market competition. Learning and diffusion were central to their corporate strategy and were given high priority. Diffusion took place through various modalities, such as the training of home-firm managers by subsidiaries and informal networking. Over 500 UK-trained home managers are working at top and middle levels in home firms in China. Mostly human resource management systems and processes were targeted for change, but only one company changed its business and management system as a result of diffusion! In most companies, there was little impact on the
top management, company structures, and decision-making. Zhang and Edwards (2007) provide several explanations for the limited impact and discuss some of the new learning strategies being developed to promote reverse diffusion. The lesson here is that much research remains to be done in this new area, which is important to the growth of companies in emerging economies.

5.3. Transfer to Healthcare

A trend in the healthcare industry is learning from other industries, e.g., from airlines for reducing medical errors and from manufacturing for reducing waste. Hospitals are experimenting with the principles of TPS (Spear, 2005). Whether one is making a car or making a patient healthier, the approach is fundamentally about eliminating waste — from paperwork and inventory to waiting-room delays and extraneous surgical tools — and about being customer-centered (Graban, 2008). At Virginia Mason Medical Center in Seattle, doctors' offices, schedulers, and the laboratories are located adjacent to examination and treatment rooms to reduce walking distances for patients. "In adopting the Toyota mind-set," CEO Kaplan said, "the 350-bed hospital has saved $6 million in planned capital investment, freed 13,000 square feet of space, cut inventory costs by $360,000, reduced staff walking by 34 miles a day, shortened bill-collection times, slashed infection rates, spun off a new business and, perhaps most importantly, improved patient satisfaction" (Connolly, 2005). The now-bankrupt GM had about 1.25 million people in its health plans and was the largest private purchaser of healthcare in the United States. It worked with partners in the healthcare industry to teach them principles of lean organization such as standardized work; workplace organization and visual controls; error proofing; employee process control; planned maintenance; and reduction of variation (Shapiro, 2000).

The transfer of such systems to healthcare has to address the special nature of the industry (Natarajan, 2006). For instance, consider three characteristics, among others, of the healthcare industry that have a bearing on the effectiveness of the transfer. (1) There are powerful subcultures in healthcare organizations based on occupation and specialization, e.g., physicians, nurses, and pharmacists. Their interests and functional orientations do not facilitate a systems approach to the promotion of safety and performance improvements (Zabada et al., 1998). (2) In many healthcare institutions, there are dual lines of authority — one involving the medical staff and the other the administrative staff. This complicates decision-making concerning the design and implementation of safety improvement projects. In other industries, the managerial core has control over the technical core. (3) Healthcare organizations in Western countries are concerned about litigation in the context of tracking and reporting medical errors. This inhibits the relevant information from being shared within and between healthcare organizations. Legal constraints on access to and sharing of patient-related information also prevent the dissemination of knowledge that could be useful in preventing errors.
6. Conclusion

While the problem is really an old one, the issue of how to successfully transfer management systems and practices has gained salience for the reasons mentioned in the introduction. This chapter has analyzed three approaches for addressing this issue: (1) the naïve or one-size-fits-all approach, which has typically proved to be ineffective except in some special circumstances; this approach is the least flexible of the three and is often carried out through command and control; (2) the approach that recognizes that systems and processes simply cannot be installed off-the-shelf but have to be tailored to take into account local factors and circumstances; while this is an improvement over the first approach, it is often ad hoc, and companies do not always have a coherent strategy for implementation; and (3) the approach that has proved to be most successful, which combines (a) discovering the fundamental recipe that needs to be transferred (called its DNA) and the nature of the fit with the business environment that supports it, with (b) implanting that recipe in a conducive organizational culture. In the third approach, the internal and external architecture of the culture is based on relational contracts. This type of culture represents the fertile soil, or the crucible, in which the seeds (i.e., the systems and processes) can be planted. It provides the flexibility to address variations across markets, countries/regions, plants, strategies, and other factors. Decentralization and self-organization are the key characteristics of this approach, attained by empowering employees to solve problems and engage in continuous improvement at the lowest levels of the organization. Toyota and Dell exemplify this approach and its effectiveness. But the deployment of this approach takes time, because it is based on changing a culture or building a new one. And that, at any level — process, organizational, supply chain, or national — is not an easy task. Moreover, even for companies like Toyota and Dell, replicating the systems over time (that is, sustainability) cannot be taken for granted if business conditions change.

The relevance and significance to managers of transferring various types of management systems are only going to increase with current developments such as the globalization of higher education, new ways of performing R&D and innovation, the rise of MNCs from emerging economies, and learning from other industries and sectors. These developments present new issues and challenges that, from an academic standpoint, are in fact significant and promising research opportunities.

References

Bounds, G (1996). Toyota supplier development. Cases in Quality, 3–25. Boston, MA: Irwin.
Bounds, G, A Shaw and J Gillard (1996). Partnering the Honda way. Cases in Quality, 26–56. Chicago: Irwin.
Brown, JS and P Duguid (2000a). Learning in theory and practice. The Social Life of Information, p. 124. Boston, MA: Harvard Business School Press.
Brown, JS and P Duguid (2000b). Practice makes process. The Social Life of Information, 91–115. Boston, MA: Harvard Business School Press.
Chesbrough, H (2006). Open Business Models: How to Thrive in the New Innovation Landscape. Boston, MA: Harvard Business School Press.
Christensen, CM and ME Raynor (2003). Why hard-nosed executives should care about management theory. Harvard Business Review, September, 67–74.
Connolly, C (3 June 2005). Hospital takes a page from Toyota: Quality, cutting waste at Seattle medical center. Washington Post, Online Edition.
Cook, S (2005). Kanban implementation: Selected basics. Target, 21(1), 49–51.
Crosby, T (2008). Personal interview by the author. http://iweb.tntech.edu/ll/Mayberry series.htm
Dell, M and K Rollins (2005). HBR interview: Execution without excuses. Harvard Business Review, March, 102–111.
Engardio, P and B Einhorn (21 March 2005). Special report on outsourcing innovation. Business Week, 86–100.
Friedland, J and L Lee (8 October 1997). Wal-Mart changes tactics to meet international tastes. The Wall Street Journal, Online Edition.
Gilmore, D (10 April 2008). New supply chain lessons from Dell. Supply Chain Digest. http://www.scdigest.com/assets/FirstThoughts/08-04-10.php?cid=1609 [10 September 2008].
Goel, VK, C Dahlman and MA Dutz (2007). Diffusing and absorbing knowledge. In Unleashing India's Innovation: Toward Sustainable and Inclusive Growth, Chapter 3, MA Dutz (ed.), 83–103. Washington, DC: The International Bank for Reconstruction and Development, The World Bank.
Graban, M (2008). Lean Hospitals: Improving Quality, Patient Safety, and Employee Satisfaction. New York: Productivity Press.
Haigh, RW (1992). Building a strategic alliance: The Hermosillo experience as a Ford-Mazda proving ground. Columbia Journal of World Business. Reprinted in Global Operations Management, M Therese Flaherty, McGraw-Hill, 1996, pp. 219–234.
Hall, RH (1983). Zero Inventories. Burr Ridge, IL: Dow-Jones Irwin.
Huston, L and N Sakkab (2006). Connect and develop. Harvard Business Review, March, 58–67.
Intel (2008). Intel Backgrounder: "Copy Exactly" Factory Strategy. http://www.intel.com/pressroom/archive/backgrnd/copy exactly.htm [10 September 2008].
Kay, J (1995). Relationships and contracts. In Why Firms Succeed, Chapter 5, pp. 63–80. New York: Oxford University Press.
Liker, JK (2003). The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. New York: McGraw-Hill.
Liker, JK and TY Choi (2004). Building deeper supplier relationships. Harvard Business Review, December, 104–113.
Liker, JK and D Meier (2006). The Toyota Way Fieldbook. New York: McGraw-Hill.
Litvin, D (2003). The corruption of the Moguls: the English East India Company; A warlike tribe: Cecil Rhodes and the British South Africa Company. In Empires of Profit, Chapters 1 and 2, 11–70. New York: Texere.
Lore, J (2004). Five lessons from a U.S.–English merger. Quality Progress, 37(11), November, 30–33.
Mann, D (2005). Creating a Lean Culture. New York: Productivity Press.
Martinez, MJ (6 February 2005). Mergers are the fastest and safest path to growth. The Tennessean, Business Section.
Mercier, DJ (2002). A global approach to ISO 9000. Quality Progress, 35(10), 56–60.
Monden, Y (1998). Toyota Production System: An Integrated Approach to Just-In-Time. Atlanta, Georgia: Institute of Industrial Engineers.
Mutschler, AS (2005). Intel extends "Copy Exactly" to maintenance. Electronic News. Retrieved from http://www.allbusiness.com/electronics/computer-electronics-manufacturing/6244853-1.html [10 September 2008].
Natarajan, RN (2006). Transferring best practices to healthcare: Opportunities and challenges. The TQM Magazine, 18(6), 572–582.
NIST (National Institute of Standards and Technology) (2008). Baldrige Criteria. www.quality.nist.gov
Nishiguchi, T and A Beaudet (11–14 July 1999). Fractal design: Self-organizing links in supply chain management. In Proceedings of the Fourth International Symposium on Logistics, pp. 3–30. Florence, Italy.
Operations Management (2004). Top Ten OM Videos: JIT at McDonald's. McGraw-Hill Irwin DVD, New York, NY 10020.
Pfeiffer, E (1 July 2003). Chip off the old block: Intel ensures quality by using a slavishly identical process in all its plants. http://money.cnn.com/magazines/business2/business2_archive/2003/07/01/345282/index.htm [10 September 2008].
Prahalad, CK (2004). The Fortune at the Bottom of the Pyramid: Eradicating Poverty Through Profits, Chapter 2. Pennsylvania: Wharton School Publishing.
Roberts, J (2004). The Modern Firm: Organizational Design for Performance and Growth, Chapter 2, p. 39; Chapter 4, p. 166. Oxford: Oxford University Press.
Schonberger, R (1996). World Class Manufacturing: The Next Decade: Building Power, Strength and Value. New York: Free Press.
Schonberger, R (2008). Best Practices in Lean Six Sigma Process Improvement: A Deeper Look, Chapter 14, 161–167. Hoboken, NJ: John Wiley and Sons.
Shapiro, JP (17 July 2000). Taking the mistakes out of medicine. U.S. News & World Report, 129(3), 50.
Shingo, S (1983). A Revolution in Manufacturing: The SMED System. Tokyo: Japan Management Association.
Simard, C and RE Rice (2007). The practice gap: Barriers to the diffusion of best practices. In Rethinking Knowledge Management: From Knowledge Objects to Knowledge Processes, CE McInerney and R Day (eds.), 87–123. Berlin: Springer.
Sitkin, SB and D Stickel (1996). The road to hell: The dynamics of distrust in an era of quality. In Trust in Organizations: Frontiers of Theory and Research, RM Kramer and TR Tyler (eds.), 196–215. Thousand Oaks, CA: Sage.
Smith, BM (18 February 2005). Management systems as a value-add: Maximizing performance, reducing waste, and driving improvements. In Proceedings of Excellence in Tennessee Conference. Tennessee Center for Performance Excellence.
Spear, SJ (September 2005). Fixing healthcare from the inside today. Harvard Business Review, 83(8), 78–91.
Spear, S and HK Bowen (September 1999). Decoding the DNA of the Toyota production system. Harvard Business Review, 77(5), 97–106.
The Economist (13 November 2004). A world of work: A survey of outsourcing, 3–12.
The Economist (27 January 2005a). The car company in front, 65–67.
The Economist (25 February 2005b). Free degrees to fly, 67–69.
Terwiesch, C and Y Xu (2004). The copy-exactly ramp-up strategy: Trading-off learning with process change. IEEE Transactions on Engineering Management, 51(1), 70–84.
Thomas, C (20 February 2005). Opinion. The Tennessean, 21A.
Tonkin, LAP (2005). Crown International's aggressive lean manufacturing strategy. Target, 21(1), 44–48.
Womack, JP and DT Jones (1996). Lean Thinking: Banish Waste and Create Wealth in Your Corporation. London: Simon and Schuster.
Womack, JP, DT Jones and D Roos (1991). The Machine that Changed the World: The Story of Lean Production. New York: Harper Perennial.
Zabada, C, PA Rivers and G Munchus (February 1998). Obstacles to the application of total quality management in healthcare organizations. Total Quality Management, 9(1), 57–59.
Zhang, M and C Edwards (December 2007). Diffusing "best practice" in Chinese multinationals: the motivation, facilitation, and limitations. International Journal of Human Resource Management, 18(12), 2147–2165.
Biographical Note R. Nat Natarajan is the W. E. Mayberry Professor of Management in the College of Business at Tennessee Technological University. His teaching and research interests are in the areas of operations and supply chain management, quality and performance management, and global operations. He has published in journals such as the International Journal of Operations Management, International Journal of Production Economics, Total Quality Management, Decision Sciences, and Quality Progress. He has coauthored and coedited books on manufacturing processes and technology transfer. In a study published in 1993 in the Journal of Operations Management, he was recognized as one of the “Top 100 Researchers” in Operations Management during a 5-year period. He has served three times as an Examiner for the Malcolm Baldrige National Quality Award of the United States.
Chapter 25
Toward Digital Business EcoSystem Analysis AURELIAN MIHAI STANESCU∗,¶, LUCIAN MITI IONESCU†, VASILE GEORGESCU‡, LIVIU BADEA§, MIHNEA ALEXANDRU MOISESCU∗, and IOAN STEFAN SACALA∗,∗∗ ∗ University Politehnica of Bucharest/Automatic Control and Informatics Department, Romania † Illinois State University/Department of Mathematics, USA ‡ Craiova University, Department of Mathematics, Expert Systems Laboratory, Romania § National Institute for Research in Informatics (ICI) † [email protected] ‡ v [email protected] § [email protected] ∗∗ [email protected] ¶ [email protected] [email protected] http://www.pub.ro
This chapter is concerned with a new approach to the metamodeling-oriented synthesis of complex, non-monolithic, Internet-distributed adaptive Systems of Systems, targeting the synergetic research issues of Digital Business EcoSystems (DBES). The starting point is an attempt to identify solid scientific "roots" of the DBES. A so-called "Terra-like reference model" is provided to introduce the Digital World Theory (trademark Lucian M. Ionescu), which is based on new key concepts like qubit-focused information flows at the meta- and modeling levels (UML-oriented platforms, ARIS, ADONIS) (http://www.virequest.com). The Romanian Virtual Team (Bucharest-Ro, Craiova-Ro, Springfield, USA) adds some original results with respect to a holistic approach to both the conceptual design and the synthesis methodology of DBES systems. The main research directions address the following topics: System of Systems (A.M. Stanescu, M.A. Moisescu, I.S. Sacala), Data Mining and Open-Source-oriented tools (V. Georgescu), and, last but not least, computational biology (L. Badea). The chapter also summarizes some European results that have been provided by IST (Information Society Technology) European Commission-funded projects like DBES [http://www.digital-ecosystems.org/]. Two important messages are to be sent to the target audience: (1) Taking into consideration the three categories of "EcoSystems" (1996), namely Digital Information and Communication Technologies EcoSystems, Business EcoSystems, and Innovation EcoSystems, the
DBES synthesis is a mission to be completed in the next few decades. To live and to work within the "Blue Ocean" (a concept introduced by Dr. Kim) is concerned with both the dissemination and the successful-story implementation of Collaborative Network Organizations for a global e-economy supporting e-democracy. (2) A convergent transdisciplinary research effort is necessary to harmonize various new scientific disciplines: microeconomics and macroeconomics, social networks, e-services sciences, computational biology, bioinformatics, a.s.o. Finally, the work-in-progress research is presented, and the Living Laboratory/Virtual Team are Ecolead Collaborative Network Organizations.

Keywords: Digital business ecosystem; digital world theory; data mining.
1. Introduction

During the Lisbon European Council (March 2000), the European Union representatives set the goal of making Europe the world's most dynamic and competitive Knowledge-based Economy (KBE), with the need to promote "the Information Society (IS) for all." According to this mission, to be completed in the next decade, the "Networked Enterprise and Radio Frequency Identification" (NE & RFID) unit of the European Commission's Information Society and Media Directorate aims at facilitating the emergence of future innovative business models within the Global Economy and "e-market"-based platforms (e.g., the InnovaFun Workshop, May 2008, Brussels). These new models are concerned with exploiting new business opportunities (e.g., the IST-IP project "COIN," 2007). They are going to manage the challenges posed by the socioeconomic and technical support developed in the twentieth century. Everybody could recognize that "business requires new technologies, applications, and services to enable them to work as Networked, Knowledge-based Enterprises" (Gerald Santucci, Head of Unit "NE & RFID"). The 6th Framework IST project "Virtual Organization road MAP" (Camarinha-Matos, 2003) also provided an interesting assessment: "Every sustainable enterprise should have, beyond 2012, the capability of networking in Virtual Organizations within a turbulent, global market."
Santucci also stressed that "the concept of the Digital Business EcoSystem (DBES) initiative responds ideally to this challenge of creating Information and Communication Technologies (ICT) instruments together with collaborative practices and paradigms that support economic growth and include all the societal and economic actors in the process. It has been commonly recognized as a new frontier for Research and Technology Development (RTD) in the KBE. Indeed, Small and Medium Enterprises (SMEs) and local clusters are now competing in a global and dynamic market, where they need more interrelations, more specialized resources, more research and innovation, as well as access to global value chains and knowledge. The research driven within the DBE initiative supports all these necessities by offering an open infrastructure that combines:"

— Human capital
— Knowledge and practices
— Technical infrastructure
— Business and financial conditions

all modeled within the European industrial policy agenda. The concept of DBES was coined in 1995 by American scientists. During the successful IST-RTD 6th Framework Program projects [Athena, Ecolead, Fusion, Genesis, and Interop], the European research community provided many encouraging results that supply technology transfer for a first generation of "pilots." Nevertheless, there is still a family of solid concepts around these topics, like the Collaborative Network Organization (Camarinha-Matos et al., 2008) and the non-monolithic, complex Adaptive System of Systems (Stanescu and Dumitrache, 2007). Taking into consideration the genuine concept of ECOSYSTEM (Common and Stagl, 2005), an ecosystem is a metasystem comprising the following macrosystems:

(i) E-BIO → living organisms (BIOTA)
(ii) E-ABIO → non-living environment (ABIOTIC factors)

eco : {E-BIO; E-ABIO; R, G}

where:
R → set of relationships (in the metasystem)
G → goal of a certain sustainable ecosystem

Key Concept. An ECOSYSTEM is a system comprising the entities:

• E-BIO → living organisms (BIOTA)
• E-ABIO → non-living environment (ABIOTIC factors)

eco : {E-BIO ∪ E-ABIO; R}

→ E-BIO: the set of living organism-based entities
→ E-ABIO: the set of non-living environmental factors (entities)
→ R: a set of relationships among ecosystem components/entities

Remarks

(1) The delineation of the boundary of an ecosystem is a matter of judgment, and depends to some extent on the purpose at hand.
→ Extremely small ecosystem: small spatial extent (e.g., a pond, a small woodland area)
→ Extremely large ecosystem: the entire biosphere as a SINGLE ECOSYSTEM (lower level of detail)
(2) The world is divided into large areas of similar climates and plant life, in which large ecosystems are referred to as BIOMES.
SINERGIA findings
(3) Ecosystems have generic structural features in common, but their behavioral features and models are very dissimilar. An ecosystem is an assembly of many interacting populations together with their ABIOTIC environment. Even in a small, localized ecosystem, the population interactions will be many and complex.
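To make the eco = {E-BIO; E-ABIO; R, G} definition concrete, the following is a minimal sketch of the tuple as a data structure; the class name, field names, and the pond data are all illustrative, not part of the authors' formalism.

from dataclasses import dataclass, field

@dataclass
class Ecosystem:
    """Minimal sketch of eco = {E-BIO; E-ABIO; R, G}."""
    biota: set = field(default_factory=set)      # E-BIO: living organisms
    abiotic: set = field(default_factory=set)    # E-ABIO: non-living factors
    relations: set = field(default_factory=set)  # R: (entity, relation, entity) triples
    goal: str = ""                               # G: sustainability goal

pond = Ecosystem(
    biota={"algae", "perch", "dragonfly larvae"},
    abiotic={"water", "sunlight", "dissolved oxygen"},
    relations={("perch", "feeds_on", "dragonfly larvae"),
               ("algae", "consumes", "sunlight")},
    goal="sustain a stable trophic web",
)
print(len(pond.relations))  # even a small pond carries many interactions

Even this toy instance shows why the boundary delineation in Remark (1) matters: the sets biota and abiotic change entirely once the chosen spatial extent changes.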
Figure 1. Systems dynamics. (Biomass B and production Pn versus time, in years.)

The relevant R&D investments will be essential for the prosperity of industrialized countries. But de-localization, as well as the risk of de-industrialization, is becoming increasingly serious, owing to outsourcing and off-shoring trends and to increased competition from low-wage countries such as China and India. We are talking about manufacturing, despite the worldwide re-balancing between the manufacturing industry and the services industry in the context of a global and digital economy (Stanescu et al., 2008); but one could notice that the great challenge of transforming the high-entropy-based IS into the e-democracy-focused Knowledge-based Society (KBS) concerns the re-launching of the Internet-based manufacturing industry within a worldwide village. During the last five decades, a paradigm shift occurred from numerically controlled machines (1945–1955) and manipulating robots (1947–1957) toward Intelligent Manufacturing Systems (IMS) (Choi and Nagy, 2006). Taking into consideration the organizational view, the "ENTERPRISE" has evolved from "intra-muros" (Latin) advanced Computer Integrated Manufacturing and Engineering (CIME), 1967–1986, toward the networked e-Enterprise (Extended Enterprise (EE), 1980; Virtual Enterprise, 1990; Collaborative Networked Organization, 2000–2008 (Camarinha-Matos et al., 2008)). One of the great challenges that could be a key driver of the next generation of the manufacturing industry in the coming years is the non-monolithic (extra-muros), geographically dispersed enterprise.
But new problems must be identified! The "top layer" of the actual EE concerns "business-to-business" interoperability-based inter-enterprise collaborative organization (Li et al., 2006). The degree of complexity rises exponentially, as does the degree of modeling uncertainty of this large class of metasystems. Despite the many definitions of DBES, tracing the scientific one from buzzword to solid founding concept, the authors propose a graphical predefinition shown in Figs. 2–4. This chapter is concerned with the consolidation of a holistic, multiview, coherent, consistent systemic viewpoint, but the solid conceptualization comes from the fundamental sciences: digital mathematics, quantum physics (QP), computer science (artificial intelligence) and, last but not least, biology and genetics. The "motto" supporting our work is: "beyond the segmented advanced research there should be a synergy-oriented GRAND UNIFICATION." Finally, a new methodology of conceptual synthesis is provided. The vision and roadmap for the next generation of sustainable Internet-networked Enterprises carry a list of challenging qualitative features: social/legal/framework/entrepreneurial/interoperability/customer satisfaction/total quality management, a.s.o. Considering the triplet Product/Services/Organization (PSO), the Collaborative Distributed (worldwide) Dynamic Enterprise (CD2E), which is based on the Smart Concurrent Engineering design methodology, Intelligent Manufacturing, BLUE OCEAN
and, last but not least, Peer-to-Peer Distributed Knowledge-Based Systems (P2PDKbS), should offer an attractive goal for thousands of research laboratories, universities, and R&D departments of large transnational companies worldwide. The new digital and global economy requires reactive Collaborative Networks to be sustainable on e-markets (Nof, 2006). The "intersection" between the Virtual/Extended Enterprise and the Virtual/Open University (Stanescu, 2000; ICE, Toulouse) is going to consolidate the solid concept of the Learning Organization (Fig. 5).

Figure 2. Terra metamodel of DBES development. (The DBES core, surrounded by data flows within a turbulent global market, rests on layers labeled: intelligent information systems; complex adaptive systems; social economic organizational systems; resource allocation based on general system theory; biology and genetics; computer science; quantum physics; discrete mathematics.)

Figure 3. Graphical-based predefinition of the DBES concept. (The multi/interdisciplinary DBES domain at the intersection of automatic control science, e-service science, business science, communication science, management science, cognitive science, Web science, and artificial intelligence science.)

Figure 4. DBES structure and implementation.
2. Paradigm Shifts in the Third Millennium

Large companies are nowadays "overheated" with ICT. One could hardly select the most appropriate Computer-Aided Design (CAD) commercial product during a "crawling" session among 100 offers, from "Solid Modeling" to CATIA 6.0 from Dassault Systèmes. The well-known MONITOR/ANALYZE/PLAN/EXECUTE (MAPE) paradigm was provided by the US Department of Defense (Air Force Doctrine Center, 1998) some time ago, but we also stress another one, J. R. Boyd's feedback paradigm O.O.D.A. (OBSERVE/ORIENT/DECIDE/ACT), addressing decision makers in every domain of the real economy (Fig. 6). The STIMULUS–HYPOTHESIS–OPTION–RESPONSE (SHOR) paradigm is useful for the "classical behaviorist" (psychology) to explicitly deal with uncertainties.
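To illustrate Boyd's feedback paradigm, here is a minimal OODA control-loop sketch in Python; the environment model, the stage functions, and all the numbers are hypothetical placeholders, not a cited decision-support system.

import random

def observe(env):
    """Gather raw facts from the (turbulent) environment."""
    return {"demand": env["demand"] + random.gauss(0, 5)}

def orient(obs, history):
    """Place the observation in context: smooth against recent history."""
    history.append(obs["demand"])
    return sum(history[-4:]) / len(history[-4:])

def decide(oriented_demand, capacity):
    """Choose an action: produce toward expected demand, up to capacity."""
    return min(capacity, max(0, round(oriented_demand)))

def act(env, output):
    """Apply the decision; the environment reacts."""
    env["demand"] = max(0, env["demand"] - 0.1 * output + random.gauss(0, 2))
    return output

env, history = {"demand": 100.0}, []
for step in range(5):  # each pass is one Observe-Orient-Decide-Act cycle
    obs = observe(env)
    oriented = orient(obs, history)
    produced = act(env, decide(oriented, capacity=120))
    print(f"cycle {step}: observed={obs['demand']:.1f} produced={produced}")

The point of the loop is that orientation (context, history) sits between raw observation and decision, which is exactly where the SHOR paradigm injects its hypotheses about uncertainty.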
In the following, we must consider the meaning of these general features:

• Extending and enhancing scientific knowledge and truth about our existence
• Using management of existing knowledge and truth about existence
• Producing new technological knowledge through innovation
• Unprecedented dissemination of knowledge to address citizens through new channels of modern communication
The five "pillars" of knowledge-focused education (UNESCO Report, 1996, "Learning: The Treasure Within") bring to attention the "step-like staircase" of the Lifelong Learning paradigm: learn to know / learn to do / learn to live together / learn to be / learn to choose. Choosing presumes "mastery of values, without which people may lose their ability to act; mastery of values is the individual's capability to prioritize matters based on a personal life experience and on his or her capacity to learn." Skilful competence consists of developing 5D-learning in a stable, harmonious fashion. Taking into consideration this trendy "societal good tsunami" that will involve every citizen of the planet in the future (a KBE sustained by lifelong learning), several research projects have studied the "Virtual Organization" area.
Figure 5. (a) 1st–2nd shift; (b) 2nd–3rd shift; (c) 3rd–4th shift.
Figure 6. The OBSERVE/ORIENT/DECIDE/ACT (OODA) loop.
Owing to the presence of the process, permanent and temporary elements can be distinguished in the system. The permanent elements are the subsystems or components of the assembly system, such as feeding systems, robots, and sensors. These subsystems fulfill functions in the assembly process, and form, through mutual relationships, the structure of the system. The temporary elements are continuously imported into the assembly system and transformed into an output desired by the environment (market). These elements entail a flow of material (product parts), an energy flow, and an information flow. The emphasis lies on the flow of material. Hence, only the flow of material is considered in regard to the output (Fig. 7).
Figure 7. Graphical representation of a dynamical system.
A paradigm shift is required to dynamically meet the twenty-first century experts' needs for exploiting information, as well as to speed up decision-making processes (Stanescu and Dumitrache, 2007).

3. The Digital World Theory

The DWT is neither a Theory of Everything nor a Final Theory. It is a proposal for a "Grand Unification," not only as a unified description of the fundamental forces and of complex systems alike, but also of theoretical contributions from mathematics, physics, and computer science. One cannot understand any of the foundational concepts, e.g., space, time, matter, information, mind, conscience, etc., without finding a unified approach to understanding all of them. To understand quantum gravity, for instance, we adopt the view of Gauss, Euler, Grothendieck, etc., broadening the goal rather than trying to simplify our task and focusing only on a unification of quantum mechanics and general relativity, without modeling the other interactions (e.g., Loop Quantum Gravity [Rovelli-1]).

3.1. What is a Causal Structure?

Causal structures appeared a long time ago in QFT in the disguise of approximating schemes: Feynman graphs (the perturbative approach to QFT). The present author suggests, still in search of the appropriate mathematical model for space-time, that there is more to it than meets the eye: Feynman graphs are a substitute for the possible paths in a space-time. The idea emerged that there should be a more general structure including cobordism categories besides Feynman graphs. The term generalized cobordism categories, denoting this more general structure, was coined during graduate school, before the author became aware of the advent of operads and PROPs. Moreover, later on it became clear that there should be no "fixed causal structure," so as to account for the "scaling problem," and also a duality keeping a balance, allowing one to trade "geometry" for "physics," as suggested in [VIRequest-UP] ("packing and unpacking DOFs").

3.1.1. Zooming In and Out on a System

The appropriate technical tool seems to be the insertion and collapse of subsystems (zooming in and out on a system: how natural). The precise mathematical implementation is a generalization of Kreimer's insertion and deletion operations on Feynman graphs, operations which allowed Kreimer, in his works with Connes (Kreimer, CK1, CK2), to rephrase and finally tame renormalization. The present author provides the physical interpretation beyond the mathematical tool itself: the Quantum Dot Resolution (QDR) is believed to be a promising framework for developing (classifying) QFTs, thought of as "Space-Time with variable geometry" (causal structure).
In a way the interpretation came post-factum, as has sometimes happened in the past. Indeed, just as relativity, as a conceptual breakthrough, appeared based on the already existing physics and mathematics of Lorentz, Minkowski, etc., causal structures had already appeared as operads and PROPs, related to operator product expansions (OPE). OPEs establish a relation between causality and the external global time. In some sense, introducing a macroscopic time consistent with the classical limit corresponds to OPEs and quantization.

3.1.2. A "New" Algebraic-Geometry Principle

The concept of "variable geometry" imposed by the modern homological algebra-geometry principle "forget the space, use a resolution . . . " had already found a "back-door entry" into discrete models (lattice models and Monte Carlo simulations). The Feynman causal structure is not just a perturbative approach; it is rather a substitute, and in fact a generalization, of the concept of Space-Time; moving from (loop) degree to degree, as in Hilbert's "syzygy theorem," is an approximating procedure. Kreimer's insertion-elimination operations should be interpreted as "Space-Time bubble fluctuations" of the causal structure: "Dirac's vacuum." At a more technical level, the general framework is that of a 2-category of "generalized cobordisms," where extensions (e.g., graph extensions) play the role of "homotopies."

3.2. String Theory and Entangled Thumbs

The pros and cons regarding String Theory are a great divide: one thumb up and the other one down. To start with the bad news, the Nobel laureate David Gross's exasperation regarding String Theory, "We don't know what we are talking about" [ST], is due to so many decades of unfulfilled hope; string theory looked like a promising theory ". . . and promising . . . and promising," yet without delivering any time soon in terms of experimental prediction or verification. It is well agreed that new ideas are needed [ST, D1], starting with String Theory's lack of explanations regarding ". . . where space and time come from," while providing a sense of an academic exercise, since the equations "describe nothing we could recognize." The "good news" is that the idea (within string theory) counts. The Feynman philosophy pointed towards the mathematical structure I call a Feynman Process, as a representation of a Feynman causal structure; yet the benefits of using Riemann surfaces, a historical load we have to re-examine (e.g., "fewer" equivalent transitions, which are naturally relativistic since interactions are not point-wise localized, etc.),
are overcome by the complexity, leading to "rigidity"; on top of this, they do not possess a "computer friendly interface" (in fact they do, but under the guise of ribbon graphs, etc.). But the killer issue is that they need to float in some background space. What can we do to save the day? Comparing with the principles of the DWT, the missing conceptual principle is the duality between internal and external DOFs. With Riemann surfaces, internal DOFs come as vertex operators, and after representing the Riemann surface "prop" (PROP), one gets a "clean" algebraic structure: the Vertex Operator Algebra (VOA). What it lacks is a "graph differential" allowing one to insert/collapse EDOFs, in duality with a corresponding differential (an L-infinity structure?) of the VOA. The presently stated principles cope nicely with the simpler, toy-model structure of graphs. The difficulty of putting a space-time structure on them, in order to have Poincare invariance and therefore relativity, is avoided by categorifying classical physics first. The idea is to "forget" manifolds, and have a categorical substitute for the phase space, with external symmetries (the Poincare group) dualized as internal DOFs. The current proposal is crude, yet promising.

3.3. A "New" Principle

Returning to General Relativity, perhaps the most important consequence, beyond the expansion of the universe and Hubble's constant, is the concept of the black hole. The unification of GR and quantum theory was initiated by S. Hawking; as an extension, the following laws have been identified. The first law relates temperature, as a measure of energy per DOF, with acceleration, as a measure of the interaction (in Newton's sense):

Unruh's Law: Temperature/h = Acceleration/c

It expresses a principle; therefore the simplest (physicist's favorite) form is enough: a linear relation. Together with Einstein's equivalence principle, it suggests that there is an energy distribution for the 2-point gravitation correlation (in our quantum discrete picture). The second law,

Bekenstein's Law: h · Entropy = Area/(8π k),

relates entropy, as the quantity of information needed to completely specify a state (the "quantum memory size"), and area, which in a discrete (geometric) model should be thought of as a measure of the possible in/out interactions ("quantum channel capacity?"). Beyond the "global statement," adequate for stating an equivalence principle, there should be here a "local/discrete" Stokes Theorem at work . . . (?)
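For reference, the standard textbook forms of these two relations, with the physical constants restored, are given below in LaTeX; the linear statements above are the author's simplified versions of the same principles.

% Unruh temperature for an observer with proper acceleration a
T_U = \frac{\hbar\, a}{2\pi\, c\, k_B}
% Bekenstein-Hawking entropy of a horizon of area A
S_{BH} = \frac{k_B\, c^3\, A}{4\, G\, \hbar}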
It is reassuring to find that Lee Smolin implicitly mentions such a "would-be" principle: "one pixel corresponds to four Planck areas," although it could rather be "one interaction qubit corresponds to four Planck areas." Later, he derives some conceptual implications, which are evaluated as not admissible if there is no theory to back them up (we have learned a lot from the old story of "Euclid's parallels": axiom or not? Let us derive the "unbelievable consequences" first, then decide how to build the theory). Finally, the third law, relating temperature and mass, but in a way opposite to the first law, is

Hawking's Law: Temperature = k/Mass, or alternatively Mass = kβ

(with an eye on the entropy: Boltzmann's correspondence, etc.). It refers to the radiation capability of a black hole ("density of I/O interactions"?), rather than its energy distribution per DOF. The situation is reminiscent of Newton's position when simplifying Kepler's laws . . . so let us look for a "new" unifying principle (we do have a "situation" here, right?). The chapter skeleton follows a deductive reasoning to contribute to the DBES "one from many" methodology for the synthesis of this category of Complex Adaptive, non-monolithic meta-meta systems (Fig. 8):
Figure 8. "Top-down" methodology of DBES analysis. (From systemic paradigms to the modeling framework.)
Figure 9. DBES-focused complex adaptive systems taxonomy. (Contextual systems, information processing systems, and social economic organizational systems, arranged by degree of complexity and degree of modeling uncertainty.)

From the general DBES features, the systemic requirements are:

— Scalability
— Portability
— Traceability (life cycle management)
— Interoperability
— Multiagent heterogeneous platform
— Complexity
— Availability
— Adaptivity
— Autonomy
— Self-organization
— Decomposability
— Decidability
— Viability
— Manageability
But, to add value, a new holistic approach for a DBES synthesizing methodology (Fig. 9) needs a solid conceptual foundation. The two-year-old initiative of Prof. Lucian M. Ionescu (Illinois State University), born in Romania, to develop an appropriate "Universe of Discourse" sustaining the EASOS category, like DBES, is still in progress; within it, one could discover a potential e-collaborative platform to cope formally with the high degree of complexity and modeling uncertainty that characterizes any DBE System. The challenge of grand unification should start with this "kernel." The authors took under consideration the comprehensive, scalable, and systemic business model based on Ludwig von Bertalanffy's Informal Survey of Main Levels in the Hierarchy of Systems (pursuant to Kenneth Boulding). The General Systems Theory (GST) keys into von Bertalanffy's statements that "general systems theory should further be an important regulative device in science" and that "the existence of laws of similar structures in different fields makes possible the use of models which are simple and better known, for more complicated and less manageable phenomena."
Ervin Laszlo remarked that von Bertalanffy both created "a new paradigm for the development of theories" and gave us a new paradigm for trans-disciplinary syntheses. Edward T. Hall, in his book The Hidden Dimension, wrote: "The scientist has a basic need for a classification system, one that is as consistent as possible with the phenomena under observation and one that will hold up long enough to be useful. Behind every classification system lies a theory or hypothesis about nature."

Quantum Information Processing rests on a two-way Turing–Church–Deutsch principle. We therefore need a quantum computing (QC) language, Q++, to model and design "reality": the Quantum Software and the Quantum Hardware. The elementary QC gates, as a basic instruction set, are the "elementary" particles and the corresponding "fundamental" interactions, except that, from an information processing point of view, there is only one: the qubit, with its dual functions of data and program. Its "external manifestation" is the electron, with its "internal counterpart," the quark; Fermi's neutrino solution to the balance of momentum and energy is (hopefully) replaced by duality. Some non-conventional ideas, away from the traditional Standard Model, are explored as part of the Digital World Theory Project. For example, we suggest interpreting a three-dimensional time and space symmetry as a quark color current (internal magnetic charges), in an "external–internal" super-symmetric electro-magnetism (the IE-duality of the Hodge-de Rham QDR), unifying QED and QCD, with the weak interactions coming as a byproduct. Gravity is expected to emerge as an entropic organizational principle, to get a part of the complexity's action. At the mathematical implementation level, "the quark" is just a primitive element corresponding to prime numbers under the cyclotomic representation of the universal Hopf ring: the integers with multiplication and divisibility (comultiplication). Divisibility is the correct concept in the non-commutative world: not fields of fractions, but rather Hopf Objects. For the mathematician and his/her graduate students we mention: why fields should be replaced by bifields (Hopf objects), as part of the categorification of mathematics (not only of physics!); how the commutative determinant can be generalized as the Feynman Path Integral (FPI); etc. The physicist has "reprogrammed" some old applications: the particle-antiparticle dichotomy is a time-oriented/information flow issue; the SU(2) × generations structure (only three?) assumes a grand unifying group extension, while we suggest an infinite basis of Lie elements (primes); mass can be regarded as a Galois index, consistent with the idea that mass is generated by breaking the symmetry (the Higgs mechanism); etc. Q++ is a systematic assimilation of the main concepts in quantum mathematics and physics, organized around a kernel of ideas and principles stated as the
DWT. There is no need to "rewrite" the "old applications" in the new language, except for selected cases, for testing purposes and to gain "fluency" with the new language. The crucial, exciting new application is quantum gravity within the framework of the SM: quarks and electrons (three generations). One big question is: can we dispose of the "weak force" as being merely the IE-duality? Mathematically speaking, a good principle to follow in this enterprise of rewriting physics and mathematics is: "Forget fields; rethink them as Hopf objects." The main advantage of implementing the DWT, as a conceptual framework/interface to QP, in Q++ (QDR, etc.) is that the implementation is already a quantum theory, with no need for quantization or renormalization, being "computer friendly" from the start. There are plenty of suggested research problems, as part of the development of the DWT, which could interest both the expert and the graduate student. In a pictorial sense, Feynman helped Quantum Physics (QP) and QC get to know each other. Soon after that, the developments in QC (Deutsch, etc.) led to an "engagement." The DWT project advocates the permanence and benefits of this two-way conceptual bridge. In this chapter, we suggest various "non-standard" ideas to be explored in order to design a new "operating system for the quantum reality." As stated by many already, there is a need for a radical change at the level of the foundations of science. We believe that there is nothing wrong with the technical tools developed so far in mathematical physics, from the point of view of their functionality; what has to be changed is the mathematical material used to develop them, which in turn has to be supported by a shift in our conceptual understanding. This can be done if the corresponding physics interpretation benefits from the computer science experience.
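Q++ itself is only proposed here, not implemented; as a minimal, language-agnostic illustration of what an "elementary gate processing quantum digits" means, the following Python/NumPy sketch applies a Hadamard gate to a qubit. The variable names are ours, and this is ordinary linear algebra, not the authors' Q++.

import numpy as np

# A qubit is a unit vector in C^2; |0> = (1, 0)
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate, an elementary one-qubit instruction
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                     # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2           # Born rule: measurement probabilities
print(probs)                       # [0.5 0.5]

# Gates compose by matrix product: H·H = I, so two Hadamards restore |0>
print(np.allclose(H @ psi, ket0))  # True

The sketch shows the "dual functions of data and program" in miniature: the same vector psi is both the stored information and the input to the next gate.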
3.4. A Generalized Equivalence Principle

We start from the above "marriage" between QP and computer science, which naturally leads, at a philosophical level, to a new fundamental Equivalence Principle completing the unification of matter, energy, and space-time started and developed over 35 years by Einstein. To his unification we add the aspects pertaining to quantum information, with its mandatory changes: since matter and energy are quantized, and so is information (qubits), space-time and "motion" must be quantized too. The new Energy-Matter-Space-Time-Information Equivalence Principle is mandated by the general trend in the development of today's physics. It is also suggested by the evolution of mathematical models and methods, notably the use of graphs-categories-networks (automata) in modeling complex systems of interacting subsystems. Given a "transition path" with its corresponding symmetry, in the spirit of a Galois-Klein correspondence, the FPI of the second equation is the crowning of the Lagrangian approach to modern semiclassical physics, while the first equation is a core addition of the DWT, with its categorical implementation of the FPI in the spirit of today's discrete models (simplified models, Spin Networks and Foams, Loop Quantum Gravity, Lattice Gauge Theory, etc.).

General Relativity: Energy-Matter ↔ Space-Time
DWT: Entropy-Information ↔ Symmetry
The conceptual implications of such an equivalence principle (supersymmetry), implemented as the IE-duality, include trading space-time (EDOFs) for matter/mass (IDOF) and quantizing “everything”: energy, matter, space and time, together with information. The external observable properties of an interaction (external level) are modeled using a network/Feynman graph/Riemann surface as an element of the Causal Structure which is implemented mathematically as a Feynman Category. If such an interaction mode admits a space-time coordinate system, then “Space,” thought of as “parallel computing,” can be traded for “Time,” thought of as sequential computing. At the internal level (“conjugate variables”) this “trading-duality” corresponds to a generalized Wick rotation: ε = icP,
ε = E + ikbH,
P = p + icm0.
Therefore it is NOT space that is in need of additional dimensions (e.g., "classical" String Theory); rather, we need to "blow up" the "abelian time" to a three-dimensional non-abelian time (qubit/symmetry flow: SU(2)).

3.5. Methodology: Top-down Design

Therefore, regardless of the particular reasons of one group of scientists or another, we need discrete models of finite type; other examples are the Feynman–Kontsevich graphs and our QDR. This has the advantage of being an algebraic approach, i.e., axiomatic, compatible with a top-down design methodology, in contrast with Newton–Leibniz analysis (a bottom-up, constructive methodology), or even Poincare's topology ("in between": qualitative, yet with a lot of "pathologies"). Mac Lane's category theory comes as the perfect object-and-relations-oriented language for this purpose, where geometry and physics come with the needed "intuition" on top of its often mentioned "abstract nonsense" attribute. We restate the idea that the answer/solution, irrespective of the particular motivation, is, in my opinion: the mathematical model of a (modern) quantum theory should be discrete and of finite type, i.e., finite dimensional in each integer degree, designed top-down from an algebraic-axiomatic interface bound to the physics application, like geometry.
3.6. The Standard Model or a Quantum Programming Language?

In this chapter, we push further the correlation between QP and QC, including aspects of the methodology of the latter. We aim to present elementary particle theory as an "object-oriented" language, designed for programming reality (the physics experimental setup: quantum hardware; the theoretical model: quantum software). Our concern is not a "Theory of Everything," but the design of a language modeling the complex reality. It will evolve and then be superseded by a better one in the natural cycle of research. We are therefore not interested in the "mathematical exceptions" that we rather view as "pathological situations" (E8, the Monster group, etc.), but in the generic, flexible, and upgradeable mathematical "materials and technologies." The present exploratory alternative to the Standard Model is motivated by the mandatory changes in our understanding of what space-time-matter really is. The "technical implementation," more or less viable, is only hinted at, due to lack of time and expertise of the present author, and is intended first for exemplification purposes. An important point is that the technical tools are already developed (the Standard Model, String Theory, Loop Quantum Gravity, Lattice Gauge Theory, etc.), yet they are "written" in the "old classical language": all we have to do is to rewrite the code in Q++. This explains the style of the exposition, i.e., that of a research report: the second phase of the DWT project. We "put the cards on the table," since the Linux open source development project showed that the WWW is the "perfect collaborator." The analogy goes even further; as in any top-down design of an informational system, we design the interface first, represented by names in italics, for which the implementation is just a matter of . . . time or energy. And the Web is the interface to many more skillful mathematics-physics specialists capable of compiling the present author's speculations from the high-level language into solid mathematical physics code. Then what remains is a "link-editing" with the current theories, to get a computationally viable theory ("executable code"). The reader probably agrees that the era of the one-person breakthrough theory ended, perhaps with Einstein, and projects like the DWT Project are suited for a collective effort.

4. Biological Systems as Complex Information Systems

4.1. The Relationship Between Biology and Informatics

We are currently entering a new era of "rational" medicine and drug design, based on fundamental molecular-level knowledge about the biological processes involved in the normal functioning of organisms, as well as in various diseases. In the pharmaceutical industry, time, cost, and throughput constraints have begun to significantly limit the much needed development of new drugs for many diseases. The genomic
revolution in biology and other high-throughput technologies have recently enabled a more rational drug design, leading to new compounds that can successfully combat several previously intractable diseases (such as Gleevec, which is effective against chronic myeloid leukemia, gastrointestinal stromal tumors, and a number of other malignancies). In spite of such notable successes, the tasks faced by this domain are huge, especially due to the daunting complexity of biological systems and processes. The sheer size of the relevant data and knowledge makes their processing by human subjects impossible and thus requires the use of computers. (The NCBI databases store sequence information for more than 56 billion base-pairs, while the number of relevant biomolecular databases and resources on the Web exceeds 500.) However, the use of computers in this domain is not limited to the storage and management of huge collections of data. The integrated in-depth analysis of this data is far more complex and important for extracting biological knowledge, as well as for producing experimentally testable hypotheses. In fact, it has become apparent that computer science is "to biology what mathematics is to physics" (Harold Morowitz). In the following, we intend to elaborate on this idea. Figure 10 schematically suggests that in the same way in which physics uses mathematics to represent models of the physical world, biology uses informatics for representing the knowledge about biological entities and processes. In fact, the most important breakthroughs in physics are related to the development of quantitative mathematical models for physical phenomena. Therefore, since biology is in principle "included" in physics (via chemistry, perhaps), we may naïvely expect the mathematical models of physical processes to be directly usable for modeling biological phenomena. Unfortunately, from a practical point of view, nothing could be more remote from the truth. While the laws of physics are simple, very general, and relatively few, the "laws" of biology are complex, quite specific, and very numerous. (Although the laws of biology are based on the laws of physics, they include a large number of "frozen accidents," or spontaneously broken symmetries, which explain their complexity and specificity.) Therefore, in practice we can hardly use inclusion 1 and the detailed physical models from Fig. 10 to model biological processes. Modeling biological processes therefore involves developing incomplete models based on experimental data (rather than first principles).
Figure 10. The relationship between biology, informatics, physics, and mathematics. (Physics is detailed-modeled by mathematics; biology is "coarsely" modeled by informatics; inclusion 1, with rising complexity, relates biology to physics; inclusion 2 relates informatics to mathematics.)
Not surprisingly, the most significant breakthroughs in biology are linked to the breakthroughs in experimental techniques, especially the high-throughput technologies that have witnessed an explosive growth in the past decade (Fig. 11).

Figure 11. Biology-inspired DBES framework.

4.2. Digital Codes for Analog Biological Processors

Biological systems are exceptionally complex processors of information, capable of adaptation to their environment as well as of replication and evolution. However, they are quite different from man-made information processors, which are entirely "digital" processors. In the following, we argue that biological systems employ digital codes that encode and control analog processors, the latter being probably more adaptable to a diverse environment. This distinction between digital and analog processors in biological systems is essential for distinguishing the various aspects of biological systems that can be measured using high-throughput technology, and which will be discussed in the next sections. Indeed, most human information processing artefacts (ranging from the simplest digital computer applications to the most sophisticated artificial intelligence programs) face significant problems related to the interface with their environment. This issue, which has only recently begun to be perceived as one of the main bottlenecks towards the development of more adaptable intelligent systems, is due to the
simple fact that any digital mapping of the real world will either be too simplistic (and thus incapable of adaptation and evolution), or too detailed to be processed by human-developed software (e.g., robotic vision is way behind the capabilities of the human visual system in terms of recognizing objects and their movements in complex scenes). On the other hand, biological systems use a digital encoding of their own structure (the DNA), which enables a high replicative fidelity and a correspondingly tight control of their evolution, but they operate in an exceedingly complex "analog" world and thus make use of exceptionally well-adapted analog "processors." Digital and analog processors are inextricably intertwined, to the point that one would not work without the other. More precisely, the exceptional adaptation of the analog processors is due to the evolutionary mechanisms that are only possible thanks to the digital encoding in DNA of the structure of biological systems, while the functioning and maintenance of the digital code itself is realized with the help of tremendously complex analog processes. The "digital code" of biological systems is based on the so-called genetic code, i.e., the correspondence between DNA codons and the amino acid components of proteins, which is conserved across all biological systems on Earth. The differences between the various living beings, their individual "digital codes," are given by the precise sequences of bases that make up their genomes. The last decade has witnessed an explosive growth in the number of organisms whose genomes have been completely sequenced, ranging from viruses and bacteria to complex multicellular eukaryotes such as Homo sapiens. This explosive growth was facilitated by the digital nature of these codes, as opposed to the much more complex molecular biology of the "analog" processes, which are currently only incompletely known, despite the much more extensive research efforts spent in the domain of molecular cell biology. As previously mentioned, constructing complete mathematical models of multicellular eukaryotes usable for simulation and prediction is a daunting task, as they would have to cover not just the metabolic, signalling, and gene expression control networks, but also their enormously complex interactions. Although such complete models are way beyond present technology and knowledge, there are aspects and subprocesses that are within the reach of current high-throughput experimental techniques.
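As a small illustration of this "digital" layer, the following Python sketch translates a DNA fragment into amino acids through the (universal) genetic code; only a handful of the 64 codons are included here, for brevity.

# Partial codon table (DNA alphabet); the full genetic code has 64 entries
CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "TGG": "Trp",
    "GGC": "Gly",
    "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read non-overlapping codons until a stop codon or unknown triplet."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGTGGGGCAAATAA"))  # ['Met', 'Trp', 'Gly', 'Lys']

The lookup table is exactly the discrete, replicable part of the system; everything the resulting protein then does belongs to the "analog" processors discussed above.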
5. Tools to Cope with the Complexity of DBES

Complex systems can be defined in a number of ways, depending on the approaches of the different scientific domains. First of all, a system is viewed as a group of interacting, interrelated, or interdependent elements forming a complex whole. A system may have boundaries and may interact with the surrounding environment by exchanging energy and/or matter.
One way to describe complex systems is as dissipative structures that import free energy and export entropy so as to achieve a level of self-organization. From the information science point of view, a complex adaptive system is a system that absorbs information from the environment and creates stores of knowledge. In order to describe complexity, we have selected the basic principles described by Mitleton-Kelly and by Andriani. To bring the two approaches together, we have to consider both: Mitleton-Kelly considers that complex systems function far from equilibrium, exploring the space of possibilities as a path-dependent process, using their history and timeline, while Andriani considers that complex systems are dissipative and function as local knowledge-based agents involved in self-catalytic reactions, characterized as nonlinear and evolutionary.
Based on these principles, we can provide a classification of complex systems:

• First-order complex systems: imposed energy transformation
• Second-order complex systems: control over the acquired energy through the translation of the absorbed information into a connectional knowledge structure
• Third-order complex systems: feed-back and feed-forward model-driven acquired and imposed knowledge
• Fourth-order complex systems: interactive knowledge generated as interconnected models and knowledge sets

The implications of complex systems theory have been integrated into the economic disciplines, starting with research in the area of evolutionary economics. According to Cooke, evolutionary economics is based on the following assumptions and characteristics connected to complex systems theory:

1. The agents have a quasi-global access to information, the optimization process is a local one, and the decision-making system is formally based on rules and norms.
2. The interaction of agents with other agents and with the environment provides the input for imitation, learning, and creativity, and the perturbations are related to the degree of cumulativeness and path dependency.
3. Typical agent interactions drive non-deterministic, open-ended, irreversible processes in disequilibrium environments, which change the current states of the entities and result in Boolean responses.

"Over the past decade, innovations in business practices — especially those leveraging information technology — have revolutionized the way we work in our offices; communicate with colleagues, family, and friends; search for information; make purchases, a.s.o. This is perhaps most evident in the office, where emails, Internet, word processing, spreadsheets, and PowerPoint-driven presentation packages have become standard tools at all levels of the organization, dramatically decreasing the need for secretaries, letterheads, faxes, etc. — but not printed hardcopy. Fifteen years ago, it would have been very difficult to imagine what we take as everyday business practices now!" In the same review, Bob Miller, Mary Ann Malloy, Ed Mark, and Chris Wild (Miller et al., 2001) proposed a "framework for managing information systems." They wrote: "Whether military, commercial, or non-profit, enterprises must evolve and react to changes in these environments by leveraging available information to make effective decisions and thereby sustain their respective viability within the global environment (turbulent market). Information plays a central role in any enterprise. Enterprises do this in the context of their information environment, which includes the information, processes (both human and automated), knowledge resources, and business operations employed by the enterprise to manage information and make decisions." The interesting paper (Miller et al., 2001) focuses one's attention on computer-aided/integrated activities, where "data" (digital, certainly) are the inputs of the systems. But the DBES initially collects information (facts) from the enterprise environment; the data are then generated by coding in the DBES. The information environment management framework is organized in seven functional layers:

1. Producers/consumers (users/applications)
2. Decision management
3. Knowledge management
4. Information management
5. Data management
6. Dissemination management
7. Communication management
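A trivial sketch of this layered organization as a data structure follows; the enumeration names paraphrase the layers of Miller et al. (2001), while the class name and everything else are illustrative.

from enum import IntEnum

class InfoEnvironmentLayer(IntEnum):
    """Seven functional layers of the information environment
    management framework (after Miller et al., 2001)."""
    PRODUCERS_CONSUMERS = 1   # users/applications
    DECISION_MANAGEMENT = 2
    KNOWLEDGE_MANAGEMENT = 3
    INFORMATION_MANAGEMENT = 4
    DATA_MANAGEMENT = 5
    DISSEMINATION_MANAGEMENT = 6
    COMMUNICATION_MANAGEMENT = 7

# Walking the stack top-down, as a request would descend it
for layer in InfoEnvironmentLayer:
    print(layer.value, layer.name.replace("_", " ").title())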
Seven years before the writing of their paper, the authors had provided an inter-system perspective: "The real advantage of defining an Information Environment Management Framework comes from the promise of Information Sharing and Interoperability of Collaborating Systems."
Today, every commercial software package (e.g., WebSphere from IBM) and even open-source software packages offer a Service-Oriented Architecture (SOA), providing registered services at the middleware level. Finally, the Joint Battle-space Info-sphere (JBI), as an "Informant," has been developed for military purposes, but it could be transferred to the civil "Networked Enterprise" or Collaborative Networks Organization (Camarinha-Matos et al., 2008) (Fig. 12).

Figure 12. Customized joint battle infosphere.

An intelligent enterprise is a living organism, where all components and subsystems work coherently together to enable the enterprise to maximize its potential and competitiveness. It needs to integrate and exhibit Business Intelligence (BI) in order to adapt and survive in a continuously changing environment. An intelligent enterprise understands its internal structure and activities, as well as external forces such as the market, competition, technology, and customers. It learns and adapts continuously to the changing environment. Learning and adaptation are achieved through real-time monitoring of operations, customers, and markets, gathering and analyzing data, creating and disseminating knowledge, and making intelligent decisions. Building an intelligent enterprise requires an intelligent foundation. Modern ICT combined with artificial intelligence research provides the necessary tools to create and sustain intelligence in the organizational infrastructure. Recent advancements in distributed artificial intelligence, multiagent systems, and networking technology have laid the foundation for real-world deployment of intelligent agents. Intelligent agents are software applications that follow instructions given by their creators and which learn independently about the information they are programmed to gather. They save time and simplify the complex world of information management. One of the key concepts of intelligent agents is autonomous, or intelligent, behavior. An intelligent agent can anticipate the need for information, automate
a complex process, take independent action to improve a process, or communicate with other agents to collaborate on tasks and share information. Intelligent agents share some other human-like characteristics: for example, they are context sensitive, capable of learning and adapting, goal driven, possess specialized knowledge, and communicate with people or other agents. Intelligent agents are particularly effective in providing a reliable and flexible foundation for DBES by facilitating knowledge query and communications. In traditional systems, many of the functions and processes may not be autonomous. Furthermore, interoperability may be limited due to the different platforms and standards used by various subsystems. Recent developments in standards-based networking and data communications, such as transactions based on SOAP and XML Web services, promise drastic improvements in machine-to-machine communications. These developments benefit standards-based intelligent agents' communications and the integration of intelligent agent systems with traditional systems. Other types of agents, such as environment scanning, knowledge acquisition, and ontology creation agents, are proposed to support various organizational functions that are essential to intelligent enterprises. Intelligent agents work autonomously on behalf of their owners (and are controlled by their owners, if necessary). Various agents play different roles to provide a wide range of intermediary services, such as controlling and monitoring system operations, retrieving and filtering information, and negotiating and interacting with other users and/or other intelligent agents. The end user does not need to know the design and the inner workings of the intelligence infrastructure or the information and services available. Through interface agents, the user can query the system, issue commands, and request services. The behavior of interface agents can be customized to meet the user's preferences. Intelligent interface agents can also learn and adapt to the styles of their users by observing the users' actions. Finally, the user interacts with a personalized interface agent that knows how to satisfy his/her needs. Interface agents, in turn, interact and collaborate with other intelligent agents to accomplish user-requested tasks. Task-specific agents work in the distributed system, either individually or collectively, to obtain the information or services. Although today's information and knowledge systems are mostly networked, cross-system communication and sharing are typically limited to simple information retrieval and message exchanges. The integration of multiagent systems with the existing information infrastructure makes it possible to develop distributed intelligence across the entire networked organization. Distributed intelligence can gain more and more importance as Internet-based network standards (XML, SOAP, etc.) gain wider acceptance in the corporate world. Because of built-in intelligence, agents are capable of searching information more effectively than search engines. The intelligence infrastructure can be extended to mobile workers through mobile computers and mobile intelligent
agents. Mobile intelligent agents can migrate through computer networks to satisfy requests made by the user. Intelligent agents can monitor user behavior over a length of time and then customize the application interface so that it is tailored to the user's needs. Intelligent agents are widely used in the automated searching and retrieval of information based on users' queries. They help users to classify, sort, organize, and locate information from various sources such as the Internet, online databases, and government/corporate data warehouses. Collaborative filtering agents provide the user with information based on his or her profile and those of other users who share similar interests or activity patterns (Fig. 13). Intelligent agents can support communications and collaborations among team members. They can also support the cooperation between buyers and suppliers, and build a virtual marketplace to carry out electronic searching, negotiation, ordering, and invoicing.
Figure 13. A knowledge-driven portal on top of a distributed data mining system.
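As a minimal sketch of the collaborative-filtering behavior described above, the following Python fragment recommends unseen items from the most similar peer profile, using cosine similarity; all user names and ratings are invented for illustration.

import math

# User profiles: item -> rating
profiles = {
    "ana":  {"erp_news": 5, "lean_blog": 3, "agents_paper": 4},
    "bob":  {"erp_news": 4, "lean_blog": 2, "agents_paper": 5, "soa_howto": 5},
    "carl": {"lean_blog": 5, "soa_howto": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating profiles."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(r * r for r in u.values()))
    nv = math.sqrt(sum(r * r for r in v.values()))
    return dot / (nu * nv)

def recommend(user, profiles):
    """Suggest items the most similar other user rated but this user has not seen."""
    me = profiles[user]
    peer = max((p for p in profiles if p != user),
               key=lambda p: cosine(me, profiles[p]))
    return [item for item in profiles[peer] if item not in me]

print(recommend("ana", profiles))  # ['soa_howto'], taken from ana's nearest peer

A production filtering agent would of course weight several peers and update profiles continuously, but the nearest-peer step above is the core of the technique.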
Shopping agents, known as shopbots, are designed to help the user find the best bargain with minimum effort. Intelligent agents can have access to databases and analytical tools and provide decision support. Various artificial intelligence techniques can be implemented, including but not limited to statistical analysis, rule-based expert systems, case-based reasoning, heuristic search, fuzzy logic, neural networks, and evolutionary computing. Intelligent agents can provide individual, custom-tailored services, typically aimed at individual information organization and personal productivity. Intelligent agents can automatically monitor, allocate, coordinate, and manage network services over an intranet and/or the Internet. They can assist in network administration tasks like routing, access, and service provision. Intelligent agents can use analytical tools to identify patterns, trends, and critical events in large amounts of data in databases or on the Web. They can also cooperate with personal agents to extract useful information from databases. Organizational intelligence can be seen as a function of the number of connections, the integration of those connections, and system design. Computer networking and communications technologies have greatly enhanced the connectivity of organizations and changed the way they operate. Although substantial research has been conducted on the impact of information technology on organizations and on the integration of various information systems, most studies have not considered a comprehensive, totally integrated infrastructure for intelligent organizations. Further intensive research needs to be conducted in the coming years.

5.1. Results and Analysis, Including Managerial Implications

Despite the long road of the DBES research roadmap, including the "key milestones" for our holistic approach to modeling and analysis, the chapter targets the managerial implications of our work. Taking into account the "challenging" managerial framework provided by Zachman [www.zifa.com] since 1986, our team reconsiders the "new line" within this "matrix-like" framework, so that a systemic approach for DBES can be completed.

6. Conclusions and Further Work

1. The present version of the DWT sketches the "object-oriented" Q++ language and provides implementation considerations. Q++ is similar in purpose and presentation to any other computer programming language (the automata/formal languages/logic correspondence), except that the gates process quantum digits.
2. The mathematical tools/technology needed to take quantum fluctuations into account, i.e., a dynamic rather than a fixed hardware configuration, is encapsulated in the QDR approach. Its mathematical core, the Homological FPI (Feynman processes on dg-coalgebra 2-categories), with its physical interface as Quantum Information Dynamics on Quantum Circuits (non-commutative Hodge-de Rham-Dirac theory on the QDR), provides an "all-quantized-no-renormalization-needed" model of quantum information flow: random teleportation at the speed of light, the theoretical basis for non-conventional transportation methods.
3. According to the DBES "ladder of adoption of Internet technologies," the "Future Internet" should consolidate Web 2.0/@Collaborative Research for DBES.
4. The work described in this paper represents the first phase in the development of computationally efficient model-predictive hybrid control systems (Stanescu et al., 2008).
5. The hybrid dual methodology of the DBES system (top-down/bottom-up) was inspired by the advanced Petri-net research of Zhou and DiCesare on developing Flexible Manufacturing Systems.
6. Computational biology has created great challenges for conceptual design, such as holonic systems (the Koestler-inspired work of H. Van Brussel and P. Valckenaers at Katholieke Universiteit Leuven, representing 10 years of advanced research) or ant-like platoons of intelligent robots (MIT Professor Brooks's achievement).
The initial concerns and aims of this chapter relate to the DBES analysis methodology, but this Complex Adaptive System of Systems needs much more research effort to fulfill the next R&D "chapters," such as:
1. DBES analysis (global and local performance evaluation)
2. DBES formal tools to prove the stability of such evolutive, self-organizing, complex systems
3. DBES synthesis methods and techniques
4. DBES ICT-tool development and integration into an e-collaborative platform
5. Data-mining tools for an open-source-oriented software platform
6. New methodologies for compatibility, interoperability, and integrability
7. Formal support for a Discrete-Event Dynamical System platform aiming at the Enterprise System Definition
E = f(Mp, Mo, Ma, Mr, Mk, MA, G)
where:
Mp: a set of various technical and economical processes (manufacturing, design, business, management, etc.)
Mo: a set of "computational objects"
Ma: a set of both artificial and human agents
Mk: a set of both tacit and social knowledge
MA: a set of activities (integrated by workflow management)
G: a business goal, based on decomposition into a tree of objectives/activities/tasks
8. Ontology development for this DBES discipline, which has strong TRANSDISCIPLINARY domain features
9. A guide of best-practice-oriented DBES pilots
10. Developing the scientific bridge between Ecological Economics (Common and Stagl, 2005) and DBES cross-domains
11. Traceability and Availability concepts (Panetto, 2007).
References
Camarinha-Matos, LM, AI Oliveira, R Ratti, D Demsar, F Baldo and T Jarimo (2007). A computer-assisted VO creation framework. In 8th IFIP Working Conference on Virtual Enterprises (PRO-VE 2007), Guimaraes.
Camarinha-Matos, LM, H Afsarmanesh and M Ollus (2008). ECOLEAD and CNO base concepts. In Methods and Tools for Collaborative Networked Organizations, Camarinha-Matos, LM, H Afsarmanesh and M Ollus (eds.). Springer.
Choi, BW and D Nagy (2006). Global collaborative environments for manufacturing innovation. In Proceedings of the IMS Vision Forum 2006, Choi, BW and D Nagy (eds.), 124–137. Seoul, Korea.
Common, M and S Stagl (2005). Ecological Economics: An Introduction. Cambridge: Cambridge University Press.
Gelfand, SI and YuI Manin (1999). Homological Algebra. Springer.
Georgescu, V (1996). A fuzzy generalization of principal components analysis and hierarchical clustering. In Proceedings of the Third Congress of SIGEF, Paper 2.25. Buenos Aires.
Georgescu, V (2001). Multivariate fuzzy-termed data analysis: Issues and methods. Fuzzy Economic Review, VI(1), 19–48.
Georgescu, V (2002). Chi-square-based vs. entropy-based mechanisms for building fuzzy discretizers, inducers and classifiers. Fuzzy Economic Review, VII(1), 3–28.
Georgescu, V (2003). On the foundations of granular computing paradigm. Fuzzy Economic Review, VIII(2), 73–105.
Georgescu, V (2004a). A generalization of symbolic data analysis allowing the processing of fuzzy granules. Lecture Notes in Artificial Intelligence, 3131, 215–226.
Georgescu, V (2004b). Reconstructing configurations of fuzzy granules from trapezoidal fuzzy dissimilarities by a non-conventional multidimensional scaling method. In Decision & Simulation in Engineering and Management Science, Proceedings of ICMS'04, Palencia (ed.), 71–82. Espana.
Georgescu, V (2007). Granular vs. point-wise metrics and statistics to accommodate machine learning algorithms in granular feature spaces. Fuzzy Economic Review, XII(2), 45–74.
Horowitz, GT (1987). String theory without a background spacetime geometry. In Proceedings of the Conference on Mathematical Aspects of String Theory (1986), S-T Yau (ed.), pp. 127–140.
Ionescu, LM (2007). From operads and PROPs to Feynman processes. http://arxiv1.library.cornell.edu/abs/math/0701299V1/.
Ionescu, LM (2003). Perturbative quantum field theory and configuration integrals. hep-th/0307062.
Ionescu, LM (2004). Cohomology of Feynman graphs and perturbative quantum field theory. In Focus on Quantum Field Theory, Kovras, O (ed.). Nova Publishers Inc. ISBN: 1-59454-126-4.
Ionescu, LM (2004a). Remarks on quantum theory and noncommutative geometry. International Journal of Pure and Applied Mathematics, 11(4).
Ionescu, LM (2004b). Perturbative quantum field theory and L-algebras. In Advances in Topological Quantum Field Theory, Proceedings of the NATO ARW on New Techniques in Topological Quantum Field Theory, J Bryden (ed.). New York: Springer-Verlag.
Ionescu, LM (2005). The search of a new unifying principle. http://www.virequest.com/VIReQuest.UP.htm.
Ionescu, LM (2006). The Digital World Theory, Vol. 1: An Invitation! Olimp Press.
Ionescu, LM (2007). The Feynman legacy. http://arxiv.org/abs/math.QA/0701069/.
Karagiannis, D and H Kühn (2002). Metamodelling platforms. In Proceedings of the Third International Conference EC-Web 2002, Bauknecht, K, A Min Tjoa and G Quirchmayr (eds.). Springer-Verlag.
Kontsevich, M (2003). Deformation quantization of Poisson manifolds. Letters in Mathematical Physics, 66(3), 157–216.
Li, M-S, R Cabral, G Doumeingts and K Popplewell (eds.) (2006). Enterprise Interoperability Research Roadmap, Final Version (v4.0). European Commission.
Miller, B, MA Malloy, E Masek and C Wild (2001). Towards a framework for managing the information environment. Information, Knowledge, and Systems Management, 2, 339–384.
NCBI. The National Center for Biotechnology Information. www.ncbi.nih.gov.
Panetto, H (2007). Towards a classification framework for interoperability of enterprise applications. International Journal of Computer Integrated Manufacturing, 20(8), 727–740.
Pedrycz, W and A Bargiela (2002). Granular clustering: A granular signature of data. IEEE Transactions on Systems, Man and Cybernetics, 32(2), 212–224.
Pedrycz, W (2005). Knowledge-Based Clustering: From Data to Information Granules. John Wiley & Sons.
Rouse, WB and KR Boff (2001). Impact of next generation concepts of military operations on human effectiveness. Information, Knowledge, and Systems Management, 2, 347–357.
Schena, M, D Shalon, RW Davis and PO Brown (1995). Quantitative monitoring of gene-expression patterns with a complementary-DNA microarray. Science, 270(5235), 467–470.
Stanescu, AM and I Dumitrache (2007). Collaborative network for complex adaptive systems. In CSCS Conference 2007.
Stanescu, AM, V Christea, AM Florea and A Curaj (2000). Cluster projects for promoting within the concurrent enterprising paradigm. In Proceedings of the 6th ICE Conference, Toulouse.
Stanescu, AM, I Dumitrache et al. (2006). Towards holistic approach for business informatics-oriented skills formation in the knowledge economy. In Leading the Web in Concurrent Engineering: Next Generation Concurrent Engineering, Ghodous, P, R Dieng-Kuntz and G Loureiro (eds.), 774–783. Antibes, France: IOS Press.
Stanescu, AM et al. (2008). From taxonomy towards ontology-based modelling framework in general system theory. Paper accepted for the ICCC Conference, Oradea, 15–17 May 2008.
Stanescu, AM, D Karagiannis, MA Moisescu, IS Sacala and V Manoiu (2008). Towards a holistic approach for intelligent manufacturing systems synthesis. In 9th IFAC Workshop on Intelligent Manufacturing Systems, Szczecin.
Zadeh, LA (1997). Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems, 90, 111–117.
Zadeh, LA and J Kacprzyk (eds.) (1999). Computing with Words in Information/Intelligent Systems, Vols. 1–2. Heidelberg: Physica-Verlag.
Nof, SY (2006). Collaborative e-work and e-manufacturing: Challenges for production and logistics managers. Journal of Intelligent Manufacturing, 17(6), 689–702.
Biographical Notes
Aurelian Mihai Stanescu, 64 years old, is a full professor and PhD supervisor, head of the Decide@L3 Department (Department of Distance Learning and e-Learning) at University Politehnica of Bucharest, and head of the Information Systems Laboratory within the Department of Automatic Control and Information and Communication Technology of the Faculty of Automatic Control, Robotics, and Computer Science. He obtained an MSc degree in Automation (Control Engineering, 1967) and a PhD degree in Advanced Automatic Control (1976) at University Politehnica of Bucharest, and has spent 40 years in the same department. Stanescu introduced new courseware, such as Control Engineering in Electric Drives Automated Systems (1977), Mathematics, Hardware and Software Systems for Industrial Robots (1982), Introduction to Computer Integrated Manufacturing Systems (1990), Discrete Event Dynamic Systems (1996), and Business Process Modeling, Monitoring, and Management Systems (2007). He has published more than 100 review articles, monographs, and textbooks. With more than 30 research projects in Romania and 7 European R&D/IST projects, Stanescu is Romania's representative on IFAC TC 5.3, vice-president of the Romanian Association of Concurrent Engineering, a member of SRAIT (the Romanian Society of Automation and Information Technology), and a member of the Romanian Association of Robotics and Mechatronics. He resides at 313 Splaiul Independentei Street, Postal Code 060042, Bucharest, Romania.
Dr. Lucian Miti Ionescu is a mathematics professor at Illinois State University, with an established record of mathematical publications at the international level. Recommendations from Saunders Mac Lane, encouragement from Yu. I. Manin, and repeated invitations to visit the prestigious Institut des Hautes Études Scientifiques are only a few confirmations of a solid academic career, still in pursuit of a teenager's dream: understanding gravity together with the other fundamental forces. The author is also the president of VIReQuEST, a dedicated interface between sponsors and a focused research group on quantum entropy and space-time.
Prof. Vasile Georgescu is a professor of cybernetics and econometrics at the University of Craiova, Romania. His areas of research interest include Artificial Intelligence, Soft Computing, Data Mining and Knowledge Discovery, Econometrics, Cybernetics, and Multi-Agent Systems. He has published 17 books and more than 150 scientific papers in international journals and international conference proceedings. He was invited to co-chair the Congress of the International Association for Fuzzy-Set Management and Economy (SIGEF) in 2007, and to chair several sessions at international conferences. He was the project director of a research grant in the area of Digital Business Ecosystems with the theme "Prototype of an Intelligent Portal for Knowledge Management in Digital Business Ecosystems, based on Inductive Knowledge Techniques," funded by the Romanian Research Ministry within the Excellence Research Program.
Mihnea Alexandru Moisescu is an assistant professor at University "Politehnica," Bucharest, with research interests in Collaborative Networks, Business Process Modeling, Information Systems, Intelligent Manufacturing Systems, Process Modeling for the Extended Enterprise, Discrete Event Systems in Manufacturing, and Mobile Robotics. The coauthor holds a Bachelor's Degree in Automatic Control, a Bachelor's Degree in Biophysics, and a Master's Degree in Biophysics, and is currently a PhD student in Automatic Control Science. The research undertaken at the Information Systems Laboratory offered a solid base for a series of research papers, presented and published at prestigious international conferences, and for numerous research projects funded by national and international agencies. He resides at 313 Splaiul Independentei Street, Postal Code 060042, Bucharest, Romania.
Ioan Stefan Sacala is an assistant professor at University "Politehnica," Bucharest, with research interests in Collaborative Networks, Geographically Distributed Systems, Intelligent Manufacturing Systems, Process Modeling and Information Systems within the Extended Enterprise, Discrete Event Systems in Manufacturing, and Mobile Robotics. The coauthor holds a Bachelor's Degree in Automatic Control and a Bachelor's Degree in Geography, and is currently a PhD student in Automatic Control Science. The research undertaken at the Information Systems Laboratory offered a solid base for a series of research papers, presented and published at prestigious international conferences, and for research grants funded by national and international agencies. He resides at 313 Splaiul Independentei Street, Postal Code 060042, Bucharest, Romania.
Liviu Badea, PhD, is a senior researcher at the National Institute for Research and Development in Informatics, Bucharest. His current research interests are in the fields of Bioinformatics, Artificial Intelligence, and the Semantic Web. He is the head of the AI and Bioinformatics group. He resides at 8-10 Averescu Blvd., Bucharest, Romania.
Chapter 26
The Dynamics of the Informational Contents of Accounting Numbers AKINLOYE AKINDAYOMI Accounting and Finance Department, Charlton College of Business, University of Massachusetts, Dartmouth, USA [email protected]
Accounting numbers are undoubtedly the most important inputs in the financial reporting system. The systematic reporting of accounting information determines the value of such a reporting system among the users of that information. In this study, I examine the value of accounting numbers in different areas of importance, relying on the premise that accounting numbers possess considerably rich informational content. Among other things, I conclude that disclosure of the environment and context is essential to the processing of accounting information, and that both are believed to have improved substantially in the era of the Sarbanes–Oxley Act. Keywords: Accounting numbers; earnings management; earnings informativeness; disclosure environment.
1. Introduction
Accounting numbers are undoubtedly the most important input sources in the financial reporting system. Accounting information as presented in corporate financial statements is used by professional and non-professional stakeholders to make a variety of decisions ranging from investment and financing to corporate responsibility issues. Liang (2001) categorizes these needs as production needs, consumption/investment needs, and contracting needs. Ideally, the quality of the information required to meet each of these needs should reflect both the source(s) and the processing system(s) generating such information. The efficient capital market hypothesis suggests that the market responds efficiently to information coming into the market. However, many researchers are concerned about the validity of this assumption. I argue that the market might not be perfectly efficient, in part because the information coming into the market is itself not efficient, or is biased in a way that can mislead investors or other market participants. It is therefore important that the source of such information is itself capable of enhancing informational efficiency for users of accounting and other
financial information. The role of accounting numbers in the capital market cannot be overemphasized. Bushman and Smith (2001), confirming the importance of accounting numbers, note that these numbers are direct inputs into the mechanisms of the financial reporting dynamics. The credibility and integrity of the sources of any information (e.g., accounting information) determine the value of such a reporting system among the users of that information. In the context of accounting information, it is imperative to understand the dynamics and complexities surrounding the creation of accounting numbers in a variety of settings. My objective in this study is to examine the dynamics of the informational content of accounting numbers. I do this in the context of earnings management, the accounting information disclosure environment, and the players in the information environment and their influence in shaping the financial reporting process. These players include financial analysts, the institutional shareholder group, the external auditor group, and corporate governance structures. The remainder of this write-up is organized as follows. In the next section, I examine earnings management and how it affects accounting information. In Section 3, I look at the dynamics of the accounting information disclosure environment. The relevant players' influences in the financial reporting process are the focus of Section 4, while Section 5 presents the summary and conclusion of the study.
2. Earnings Management and Accounting Information
Earnings management is described in the accounting literature as the potential for managers to use the discretionary powers provided by Generally Accepted Accounting Principles (GAAP) in a manner consistent with the managers' financial reporting objectives (see, for example, Jones, 1991; Dechow et al., 1995; Kang and Sivaramakrishnan, 1995; Nichols and Wahlen, 2004; among others). Bergstresser and Philippon (2006) examine CEO compensation and earnings management (see also Burns and Kedia, 2006). According to Schipper (1989), earnings management "occurs when managers use judgment in financial reporting and structuring transactions to alter financial reports to either mislead some shareholders about the underlying economic performance of the company or to influence contractual outcomes that depend on reported accounting numbers." This definition is widely accepted in the accounting literature, in part because it inherently shows the importance of accounting numbers in the financial information process and how reporting managers can take undue advantage of the flexibility offered by GAAP to cloud the firm's real performance with managed performance. Research has shown that earnings management can be income increasing or income decreasing, the choice of which is determined by the financial reporting objectives of the manager (see, for example, Sloan, 1996; Dechow et al., 1996; Dechow and Skinner, 2000). This means that managers do not always manage earnings upwards. If necessary (opportunistically), they can also manage earnings downward
in order to depress income and/or other accounting numbers released to investors and other interested stakeholders. Schipper further notes that managers can manage earnings directly by managing accruals, or indirectly by managing real activities (see also Graham et al., 2005; Roychowdhury, 2006). Either approach comes with costs and benefits to the information environment. From the perspective of the manager, managing accruals is considered beneficial in that it is often difficult to detect and flexible to manipulate. On the other hand, the cost is that managed accruals will reverse in the short-term future, sometimes fully, especially when there are no further accruals to manipulate. By then, the true performance will be in the open; incidentally, the benefits opportunistically obtained from that process might be difficult or impossible to reverse. The benefit obtainable from managing real activities is that it is difficult to detect; on the downside, however, it can potentially result in a huge future erosion of the firm's value. Either choice (accruals or manipulation of real activities) will benefit the managers' opportunistic interests, at least in the short term, at the expense of the company's other stakeholders, notably the shareholder group. Studies have found different uses of earnings management. For example, Watts and Zimmerman (1986) suggest that earnings management can be used to reduce political costs, while Schipper (1989) finds that earnings management can be used to signal managers' private information (see also Healy and Palepu, 1995) and to obtain cheaper external financing (Dechow et al., 1996). Sloan (1996) believes that managers manage earnings to enhance stock prices (see also Collins and Hribar, 2000; Xie, 2001). LaFond and Watts (2008), referencing Jensen and Meckling (1976) and Watts and Zimmerman (1986), regard earnings management as a financial reporting strategy that "uses up resources" to the benefit of managers at the expense of corporate owners. Undoubtedly, the accounting numbers resulting from this system are biased by the process. Interestingly, such bias can be positive or negative, but it is hardly ever neutral. Scholars who see a positive side to the earnings management process argue that managed accounting numbers reveal forward-looking information about the intentions of managers regarding the expected financial status of the firm; the resulting accounting numbers are therefore informative rather than opportunistic. For example, the accounting recognition concept is central to earnings management (Liang, 2001). Accelerating revenue recognition now could imply that the firm has secured orders/contracts and that these orders/contracts will produce future cash inflows, which could be used to pursue positive net present value projects and thus increase the future value of the firm. While such a situation will increase future cash flows, skeptics of earnings management argue that it will hurt profitability when the accounting dynamics reverse, having overstated profitability earlier. This arguably tends to compromise the quality of accounting information. It also requires a certain degree of sophistication on the part of investors or users of financial statements to understand the positive value of accounting numbers that are products of the earnings management process.
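Because the argument above turns on accruals management, it may help to show how researchers typically operationalize it. The sketch below estimates discretionary accruals with the modified Jones model of Dechow et al. (1995): "normal" accruals are fitted cross-sectionally and the residual is treated as the discretionary part. The variable names and the toy data are my own illustrative assumptions, not the authors' code.

```python
import numpy as np

def discretionary_accruals(ta, lag_assets, d_rev, d_rec, ppe):
    """Modified Jones model (Dechow et al., 1995) for one industry-year
    cross-section. All arguments are 1-D arrays of firm-level amounts;
    the regression scales everything by lagged total assets."""
    y = ta / lag_assets
    X = np.column_stack([
        1.0 / lag_assets,              # scaled intercept
        (d_rev - d_rec) / lag_assets,  # cash-based revenue change
        ppe / lag_assets,              # gross property, plant, equipment
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef                # residual = discretionary accruals

# Illustrative data for five firms (made-up numbers):
ta = np.array([12.0, -3.0, 5.0, 9.0, -1.0])
lag_assets = np.array([100.0, 80.0, 120.0, 90.0, 110.0])
d_rev = np.array([20.0, 5.0, -10.0, 15.0, 8.0])
d_rec = np.array([6.0, 1.0, -2.0, 4.0, 3.0])
ppe = np.array([40.0, 35.0, 60.0, 30.0, 50.0])
print(discretionary_accruals(ta, lag_assets, d_rev, d_rec, ppe))
```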
Evidence of such a compromise in the quality of earnings resulting from earnings management can be found in the findings of Sloan (1996). He argues that investors tend to "fixate" on profit (i.e., reported earnings), making them inclined to misprice the accrual components of earnings relative to the cash flow components. This investor behavior appears counterintuitive in that it contradicts the evidence that accruals are less persistent than cash flows in predicting future corporate earnings. Khan (2008) refers to the accrual mispricing by investors as troubling; he examines the risk dimension of the "accrual anomaly" and suggests that some macroeconomic variables might explain part of the anomaly. Further, skeptics argue, and rightfully so, that many managers employ earnings management to conceal rather than reveal their true intentions through aggressive earnings management actions. Such managers achieve this concealment not just by managing the earnings but also by managing the dynamics of the disclosure process of the accounting information. This is explored further in the next section. Graham et al. (2005) provide evidence that managers are more interested in managing earnings or specific accounting numbers through real activities than through accruals. Roychowdhury (2006) provides corroborating evidence that managers manipulate the financial reporting process by manipulating real (operational) activities through such decisions as granting price discounts, production, and discretionary expenditure. His study reveals that managers grant price discounts to increase sales in the short term. Similarly, a firm can embark on overproduction to lower the cost of goods sold, and even reduce discretionary expenditure (e.g., maintenance expenditure or employee training) to "improve reported margins" (see also Dechow and Skinner, 2000; Thomas and Zhang, 2002; Hribar, 2002). All these real-activities manipulation strategies are used to avoid "reporting annual losses" (Roychowdhury, 2006) or to increase reported annual earnings. In fact, he suggests that managers rely not only on accrual management or real activities manipulation but on a mix of both. While the conventional styles of earnings management do not have a direct impact on the firm's cash flow, manipulating real activities does impact cash flow and even accruals. However, both means of interfering in the financial reporting process can at best help managers to meet "short-run earnings targets" with no capacity for sustenance in the long run. It must be mentioned that manipulating real activities does impose certain long-term costs on the company even though short-term selfish financial goals might be achieved. Such costs, according to Roychowdhury (2006), could include inventory carrying costs, customer expectation costs, or increased future maintenance costs. This, in part, makes many researchers believe that managers' intervention in the financial reporting process is mainly to "mislead" specific categories of investors or market participants into believing that certain financial reporting goals have been met or are achievable. Specifically, manipulating real activities can enable managers to opportunistically meet selfish goals, but usually not to increase or add to firm value.
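The accrual-anomaly evidence discussed above rests on persistence regressions of the kind Sloan (1996) estimates: next-period earnings regressed on the accrual and cash flow components of current earnings, with the accrual coefficient expected to be the smaller of the two. A minimal sketch, using made-up numbers rather than Sloan's data:

```python
import numpy as np

# Made-up firm-year observations, all scaled by average total assets.
accruals = np.array([0.05, -0.02, 0.01, 0.08, -0.04, 0.03])
cash_flow = np.array([0.10, 0.12, 0.09, 0.04, 0.11, 0.07])
next_earnings = np.array([0.12, 0.09, 0.08, 0.10, 0.06, 0.08])

# Earnings_{t+1} = a + b1*Accruals_t + b2*CashFlow_t + e;
# Sloan's finding is b1 < b2, i.e., accruals are less persistent.
X = np.column_stack([np.ones_like(accruals), accruals, cash_flow])
(a, b1, b2), *_ = np.linalg.lstsq(X, next_earnings, rcond=None)
print(f"accrual persistence b1={b1:.2f}, cash flow persistence b2={b2:.2f}")
```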
3. Dynamics of the Accounting Information Disclosure Environment
Diamond (1985) suggested that managers should provide full public disclosure of relevant information to investors to prevent or at least reduce the incidence of "private acquisition of public information" for unnecessary arbitrage purposes. The primary need for full disclosure arises from the growing information gap between the firm (i.e., managers) and the owners of the firm (i.e., investors), referred to in the accounting disclosure literature as information asymmetry. Extant theory suggests that information asymmetry is real and that, to bridge the information gap between managers and investors/stakeholders, and even within the investor/stakeholder group, managers issue earnings forecasts; for these forecasts to be effective, they must contain information that is considered new to the market and capable of positively impacting firm value. Researchers have developed a tool for measuring the informativeness of earnings forecasts, called the Earnings Response Coefficient (ERC) (for more details, see Lennox and Park, 2006). Interestingly, research has shown that earnings forecasts do reduce information asymmetry, although the degree of effectiveness varies among market participants on one hand, and with whether the forecasts contain good news, bad news, neutral news (Waymire, 1985), or an earnings surprise, on the other. Lennox and Park (2006) suggest another objective that managers might use earnings forecasts to achieve: they claim that managers sometimes issue earnings forecasts to reduce uncertainty, especially when their firms' earnings are volatile over time (earnings volatility). Notwithstanding the effectiveness of earnings forecasts at reducing information asymmetry, managers still manipulate the dynamics of forecasts, in the form of contents, timing, and type, to achieve the intended purpose. I must mention that the "manipulation" referred to in this context does not necessarily imply negative actions on the part of managers if their original intention in issuing earnings forecasts is positive, i.e., to reduce information asymmetry. Nonetheless, the effectiveness of management earnings forecasts largely depends on the credibility (in the disclosure environment) of the firm issuing the information and on the expectations or beliefs of investors/the market about the value-relevance of the information contained in such forecasts.
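As an illustration of how an ERC is commonly estimated (a sketch assuming a simple OLS of announcement-window abnormal returns on scaled unexpected earnings, not a procedure taken from Lennox and Park, 2006), consider:

```python
import numpy as np

# Invented illustration: unexpected earnings scaled by price (UE) and
# announcement-window cumulative abnormal returns (CAR) for six firms.
ue = np.array([0.02, -0.01, 0.005, 0.03, -0.02, 0.01])
car = np.array([0.04, -0.03, 0.01, 0.05, -0.05, 0.015])

# The ERC is the slope in CAR_i = a + ERC * UE_i + e_i.
X = np.column_stack([np.ones_like(ue), ue])
(a, erc), *_ = np.linalg.lstsq(X, car, rcond=None)
print(f"intercept = {a:.4f}, ERC = {erc:.2f}")
```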
others, a system of information-processing capabilities to bridge the information gap, which can reduce the asymmetry and improve the quality of trade for the relevant parties. It must be mentioned that many firms still withhold full disclosure when reporting royalty income. Such an action can only further deepen the information asymmetry gap, which will influence the quality of the information product. A likely solution might be developing a system for gathering and processing such vital data inputs outside the regular reporting media (i.e., the financial statements and other periodic corporate reporting documents). As investors rely on a variety of sources to obtain the information needed for decision-making purposes, researchers have examined the value of industry-wide and firm-specific sources of information to investors to determine their valuation implications. Elgers et al. (2008) examine Ayers and Freeman's (1997) finding that the market (investor) expects industry-wide earnings components before firm-specific earnings components, even though the latter are responsible for post-earnings announcement drifts. Elgers et al., however, caution that Ayers and Freeman's evidence could have been driven by measurement errors and inadequate control variables in their model specification. The significance of Elgers et al.'s countering but corroborating evidence is that investors might actually anticipate firm-specific earnings components earlier than industry-specific ones, thus explaining why the former shape the post-earnings announcement drift. This dynamic could influence the disclosure strategy of firms as it relates to firm-specific private information. Realizing that investors have the option of investing in either risky assets or risk-free assets, Suijs (2007) suggests that managers may withhold corporate private information if they are unsure of investors' reactions to such information, leading them to disclose "average information"a to manage investors' investment decisions to their firm's advantage. Daniel et al. (2008) investigate whether firms manage earnings to avoid missing the expected dividend threshold, as a "dividend cut" may incur penalties from the market and even bondholders (see also Naveen et al., 2008). They find evidence consistent with managers "treat[ing] expected dividend levels as an important earnings threshold." Usually, bondholders impose dividend restrictions (known as dividend covenants) mainly to constrain the ability of managers to reward stockholders at the expense of creditors. Bradley and Roberts (2004) note that over four in five debt agreements contain dividend restrictions. As accounting earnings are usually used in the covenant process, managers have incentives to manage reported earnings to avoid covenant violations. Such incentives are usually concealed by managers in the way they manage the corporate disclosure dynamics. A common phenomenon in the financial reporting environment is the restatement of earnings or accounting numbers by managers. The frequency of this act
a Suijs claims that firms "withhold bad and good information, but instead disclose average information."
has always been a cause of concern to regulatory officials and market analysts, given that researchers widely find that the information content of earnings declines following restatements. For example, earnings restatements preceded many corporate scandals/failures of monumental magnitude in the past, such as WorldCom, Tyco, Global Crossing, and Enron, among others. Wilson (2008) examines this effect and finds that such decline is "temporary," maintaining that long-term anxiety following such restatements is "unwarranted." He claims that even the "market response" to this is "transitory." Accounting conservatism plays a role in the quality of accounting/earnings information produced by managers. Information asymmetry exists between managers and investors/the market because the former possess private/insider information and are capable of benefiting from such a privileged position. Therefore, one would expect that accounting information provided by corporate managers under such circumstances would be biased, or at least seem biased, to the market. Although there is ample evidence that accounting conservatism exists in the United States (see Basu, 1997; Holthausen and Watts, 2001, among others), its impact on the quality of accounting earnings remains controversial. While some believe that accounting conservatism reduces information asymmetry and, by extension, increases the quality of accounting information (see Ball et al., 2000), others argue that it could actually increase information asymmetry, thus impairing the quality of accounting information. For example, the Financial Accounting Standards Board (FASB) has repeatedly opposed accounting conservatism, suggesting that it conflicts with neutrality (see, for example, FASB, 2005), an important characteristic of accounting information. LaFond and Watts (2008) provide evidence inconsistent with the FASB's proposition and anxiety over conservatism. They assert that accounting conservatism "increases in response to increase in information asymmetry . . . ," and that it is a "governance mechanism" that can effectively moderate management actions. These assertions suggest that accounting conservatism can actually increase the quality of accounting information.
4. Groups that Influence Corporate Financial Reporting and the Information Environment
4.1. Corporate Governance and Earnings Quality
Ideally, one of the roles of sound corporate governance should be to curtail managers' propensity for opportunistic behavior in the financial reporting process. However, the ability of a firm's governance structure to perform this role effectively depends on many factors, including board composition, ownership concentration versus spread, and independence, among others. For example, when the corporate board is dominated by insiders, the degree of oversight of the financial reporting process might be low relative to a circumstance where the board has substantial external representation. In fact, Klein (2002) demonstrates that the incidence of
earnings management increases as the insider membership of the board or audit committee increases. Research shows that firms with corrupt practices usually have poor corporate governance (Beasley, 1996; Dechow et al., 1996). Wu (2004) also finds that the likelihood of dismissal of a non-performing or corrupt CEO increases in a sound corporate governance environment. Farber (2005) investigates the impact of corporate governance on the financial reporting process. He finds that while "fraud firms" have weak corporate governance, they take actions to improve the corporate governance characteristics of their firms as part of a "recovery strategy," and that the market rewards such improvements in corporate governance. It is no wonder, then, that regulators are increasingly demanding improved corporate governance. As mentioned earlier, "fraud firms" have questionable and highly biased accounting information, partly because of the manipulation of the financial reporting process. Understanding the impact of quality corporate governance on effectively improving the financial reporting process will enhance our understanding of the dynamics of accounting information. It should be mentioned that manipulation of the financial reporting process entails certain risks, which affect, among other things, the planning and pricing decisions of auditors. Bedard and Johnstone (2004) examine how corporate governance risk interacts with earnings manipulation risk to influence auditors' planning and pricing of audit engagements. They conclude that managers' interference in the financial reporting process makes audits costly, which could affect the firm's cost of capital. In other words, improved earnings quality is more likely to reduce the firm's cost of capital. To emphasize the importance of auditors in the financial reporting process, Nelson et al. (2002) submit that managers' actual interference in the financial reporting process is less prevalent than attempts (or willingness) to interfere when one considers auditors' role as moderating agents (see also Nelson et al., 2003). Leuz et al. (2003), in their cross-country study, provide empirical evidence that the quality of the financial reporting process increases with the quality of legal protection for outside investors.
4.2. Financial Analysts
Different participants aid investors' informativeness regarding corporate information released either by the company itself or by other competitive or complementary sources (mainly the financial analysts) that monitor/follow the company's financial dynamics. Lang and Lundholm (1996) refer to the activities of the latter group as the analyst's information intermediary role. Frankel et al. (2006) warn that the role played by analysts in this context does not necessarily provide incremental or value-relevant inputs for investors' use in making informed investment or other decisions. Still on the value of analysts' information, some researchers believe that analysts often compromise their integrity and are sometimes overly optimistic in their stock price recommendations, thus contributing to the informational inefficiency of
the capital market (for more details, see Dechow et al., 2000; O'Brien et al., 2005; Bradshaw et al., 2006). Investors' reactions to analysts' reports/information ought to depend on the timeliness of such information and on the quality of other information simultaneously available in the market (see, for example, Holthausen and Verrecchia, 1988). However, Francis et al. (2002) and Frankel et al. (2006) believe that such reactions are unaffected by other information sources in the market, claiming that analysts' information is informative and that their inputs into the market are essential to its smooth informational running. They argue that analysts do not merely service the brokerage firms that employ them, but also serve the market as a whole, either directly or indirectly. One interpretation of these competing research findings could be that investors and other users of this information are sophisticated in their ability to identify their specific informational needs, and so are not distracted by the several, often competing, pieces of information in the marketplace.
4.3. Institutional Investors and Block-Holders
In addition to the analyst group, the role of institutional investors in driving managers' reporting behaviors has continually received attention from researchers. Generally, institutional investors are classified as block-holders of a firm who own a substantial percentage (say 5% or more) of the firm's stockholdings. Some of these block-stockholders are transient while others are dedicated (Bushee, 1998, 2001). Research shows that the former group is more interested in the short-term fortunes of the firm (see, for example, Demirag, 1998) while the latter group is dedicated to the long-term survival of the firm (see, for example, Del Guercio and Hawkins, 1999). They therefore have differing financial reporting objectives. It is thus essential to review how these different groups' interests affect managers' financial reporting behaviors and their capacity to manage earnings or manipulate real activities. Not all outside block-holders of a firm's stockholdings are directly represented on the firm's governing board. Notwithstanding, research shows that this group of shareholders still possesses tremendous influence in the earnings management process. There are competing perspectives on the impact of this group of stockholders on managers' propensity to manipulate earnings. For example, one viewpoint claims that outside block-stockholders have greater incentives, relative to small shareholders, to monitor managers' financial reporting behaviors and, in the process, constrain managers' ability or propensity to manage earnings or manipulate real activities (see, for example, Del Guercio and Hawkins, 1999). I label this the constraining role. On the other hand, the other viewpoint holds that outside block-stockholders have opportunistic financial reporting objectives (see, for example, Rajgopal et al., 2002; Shang, 2003) consistent with those of reporting managers, and that instead of performing the constraining role, they actively engage in a supporting role. This means that they encourage managers to bias the information
environment by reporting favorable but opportunistic financial performance. In sum, the former viewpoint results in less potential for earnings management by managers, while the latter increases the incidence of earnings management.
4.4. External Auditors
Another category of influencers in the financial reporting process is the (external) auditors' group. The primary role of an auditor is to express an independent opinion on the truth and fairness of the financial statements prepared by managers before they are released to the information environment for investors' and other stakeholders' consumption. Essentially, an auditor lends credibility to the financial statements. Ideally, it is appropriate to expect that with auditors as financial reporting watchdogs, managers should not be able to opportunistically manage earnings. The reality is that while auditors can constrain such opportunistic behavior, they cannot eliminate it. Higher audit quality translates into more sustainable earnings, and the more sustainable earnings are, the less opportunistic earnings management exists. Research has shown that the size and reputation of the auditor determine the quality of audit services provided to client firms. For example, Balsam et al. (2003) assert that large auditors are associated with higher audit quality than small ones, and so is the credibility accorded to the financial reporting process. In fact, Francis et al. (1999) claim that managers of firms with high levels of accruals have motivations to employ the services of a Big 6 auditor to lend credibility or assurance to their firms' earnings numbers.
5. Conclusions
The importance of financial information in the capital market cannot be overemphasized. While it is generally said that stocks are the main commodity in the stock market, I argue that investors and other market participants mainly trade on information (mostly of a financial dimension) in the stock market. Therefore, given the central role that financial information plays in the stock market, understanding its dynamics is sine qua non for effective decision-making in the capital market. In this study, I emphasize and strengthen the need for quality and reliable information in the capital market. I specifically focus on the quality of accounting information coming into the market and the role of various players in the financial reporting process. I underscore the need for a sound corporate governance structure to check reporting managers' excesses, mostly aimed at opportunistically gaming the market through managing earnings (via accounting numbers) or managing real activities. The role of auditors and institutional investors is also examined. In sum, I submit that the capital market will be more efficient if the information coming into the market is efficient and the dynamics of the financial reporting process are at
best neutral. The accounting academic community hopes that the financial reporting environment and process will substantially improve post-Sarbanes–Oxley Act.
References
Ayers, B and R Freeman (1997). Market assessment of industry and firm earnings information. Journal of Accounting and Economics, 24(2), 205–218.
Ball, R, SP Kothari and A Robin (2000). The effect of international institutional factors on properties of accounting earnings. Journal of Accounting and Economics, 29(1), 1–52.
Balsam, S, J Krishnan and JS Yang (2003). Auditor industry specialization and earnings quality. Auditing: A Journal of Practice and Theory, 22(2), 71–79.
Basu, S (1997). The conservatism principle and the asymmetric timeliness of earnings. Journal of Accounting and Economics, 24(1), 3–37.
Beasley, MS (1996). An empirical analysis of the relation between the board of director composition and financial statement fraud. The Accounting Review, 71(4), 443–465.
Bedard, JC and KM Johnstone (2004). Earnings manipulation risk, corporate governance risk, and auditors' planning and pricing decisions. The Accounting Review, 79(2), 277–304.
Bergstresser, D and T Philippon (2006). CEO incentives and earnings management: Evidence from the 1990s. Journal of Financial Economics, 80(3), 511–529.
Bradley, M and M Roberts (2004). The structure and pricing of bond covenants. Working paper, Duke University and University of Pennsylvania.
Bradshaw, MT, SA Richardson and R Sloan (2006). The relation between corporate financing activities, analysts' forecasts and stock returns. Journal of Accounting and Economics, 42(1–2), 53–85.
Burns, N and S Kedia (2006). The impact of performance-based compensation on misreporting. Journal of Financial Economics, 79(1), 35–67.
Bushee, B (1998). The influence of institutional investors on myopic R&D investment behavior. The Accounting Review, 73(3), 305–333.
Bushee, B (2001). Do institutional investors prefer near-term earnings over long-run value? Contemporary Accounting Research, 18(2), 207–246.
Bushman, R and A Smith (2001). Financial accounting information and corporate governance. Journal of Accounting and Economics, 32(1–3), 237–333.
Collins, DW and P Hribar (2000). Earnings-based and accrual-based market anomalies: One effect or two? Journal of Accounting and Economics, 29(1), 101–123.
Daniel, ND, D Denis and L Naveen (2008). Do firms manage earnings to meet dividend thresholds? Journal of Accounting and Economics, 45(1), 2–26.
Dechow, PM, R Sloan and AP Sweeney (1995). Detecting earnings management. The Accounting Review, 70(2), 193–225.
Dechow, PM, RG Sloan and AP Sweeney (1996). Causes and consequences of earnings manipulation: An analysis of firms subject to enforcement actions by the Securities and Exchange Commission. Contemporary Accounting Research, 13(1), 1–36.
Dechow, PM and DJ Skinner (2000). Earnings management: Reconciling the views of accounting academics, practitioners and regulators. Accounting Horizons, 14(2), 235–250.
Del Guercio, D and J Hawkins (1999). The motivation and impact of pension fund activism. Journal of Financial Economics, 52(3), 293–340.
Demirag, IS (1998). Corporate Governance, Accountability, and Pressure to Reform: An International Study. Stamford, CT: JAI Press.
Diamond, D (1985). Optimal release of information by firms. Journal of Finance, 40(4), 1071–1091.
Elgers, PT, SL Porter and LE Xu (2008). The timing of industry and firm earnings information in security prices. Journal of Accounting and Economics, 45(1), 78–93.
Farber, DB (2005). Restoring trust after fraud: Does corporate governance matter? The Accounting Review, 80(2), 539–561.
Financial Accounting Standards Board (FASB) (2005). Conceptual framework. Board Meeting Handout: July 27. Norwalk, CT: FASB.
Francis, J, E Maydew and H Sparks (1999). The role of Big 6 auditors in the credible reporting of accruals. Auditing: A Journal of Practice and Theory, 18(2), 17–34.
Francis, J, K Schipper and L Vincent (2002). Earnings announcements and competing information. Journal of Accounting and Economics, 33(3), 313–342.
Frankel, R, SP Kothari and J Weber (2006). Determinants of the informativeness of analyst research. Journal of Accounting and Economics, 41(1–2), 29–54.
Graham, J, C Harvey and S Rajgopal (2005). The economic implications of corporate financial reporting. Journal of Accounting and Economics, 40(1–3), 3–73.
Gu, F and B Lev (2004). The information content of royalty income. Accounting Horizons, 18(1), 1–12.
Hall, B (1996). The private and social returns to research and development. In Technology, R&D, and the Economy, Smith, B and C Barfield (eds.), pp. 140–183. Washington, DC: Brookings Institution and the American Enterprise Institute.
Healy, P and K Palepu (1995). The challenges of investor communication: The case of CUC International, Inc. Journal of Financial Economics, 38(2), 111–140.
Holthausen, R and R Verrecchia (1988). The effect of sequential information releases on the variance of price changes in an intertemporal multi-asset market. Journal of Accounting Research, 26(1), 82–106.
Holthausen, RW and RL Watts (2001). The relevance of the value-relevance literature for financial accounting standard setting. Journal of Accounting and Economics, 31(1–3), 3–75.
Hribar, P (2002). Discussion of inventory changes and future returns. Review of Accounting Studies, 7(2–3), 189–193.
Jensen, M and WH Meckling (1976). Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3(4), 305–360.
Jones, J (1991). Earnings management during import relief investigations. Journal of Accounting Research, 29(2), 193–228.
Kang, S and K Sivaramakrishnan (1995). Issues in testing earnings management and an instrumental variable approach. Journal of Accounting Research, 33(2), 353–367.
Khan, M (2008). Are accruals mispriced? Evidence from tests of an intertemporal capital asset pricing model. Journal of Accounting and Economics, 45(1), 55–77.
Klein, A (2002). Audit committee, board of director characteristics and earnings management. Journal of Accounting and Economics, 33(3), 375–400.
LaFond, R and RL Watts (2008). The information role of conservatism. The Accounting Review, 83(2), 447–478.
Lang, M and R Lundholm (1996). Corporate disclosure policy and analyst behavior. The Accounting Review, 71(4), 467–492.
Lennox, CS and CW Park (2006). The informativeness of earnings and management's issuance of earnings forecasts. Journal of Accounting and Economics, 42(3), 439–458.
Leuz, C, D Nanda and P Wysocki (2003). Earnings management and investor protection: An international comparison. Journal of Financial Economics, 69(3), 505–527.
Liang, PJ (2001). Recognition: An information content perspective. Accounting Horizons, 15(3), 223–242.
Nelson, MW, JA Elliott and RL Tarpley (2002). Evidence from auditors about managers' and auditors' earnings management decisions. The Accounting Review, 77(Suppl.), 175–202.
Nelson, MW, JA Elliott and RL Tarpley (2003). How are earnings managed? Examples from auditors. Accounting Horizons, 17(Suppl.), 17–35.
Nichols, DC and JM Wahlen (2004). How do earnings numbers relate to stock returns? A review of classic accounting research with updated evidence. Accounting Horizons, 18(4), 263–287.
O'Brien, PC, MF McNichols and H Lin (2005). Analyst impartiality and investment banking relationships. Journal of Accounting Research, 43(4), 623–650.
Rajgopal, S, J Jiambalvo and M Venkatachalam (2002). Institutional ownership and the extent to which stock prices reflect future earnings. Contemporary Accounting Research, 19(1), 117–136.
Roychowdhury, S (2006). Earnings management through real activities manipulation. Journal of Accounting and Economics, 42(3), 335–370.
Schipper, K (1989). Commentary on earnings management. Accounting Horizons, 3(4), 91–102.
Shang, A (2003). Earnings management and institutional ownership. Unpublished working paper, Harvard University.
Sloan, RG (1996). Do stock prices fully reflect information in accruals and cash flows about future earnings? The Accounting Review, 71(3), 289–315.
Suijs, J (2007). Voluntary disclosure of information when firms are uncertain of investor response. Journal of Accounting and Economics, 43(2–3), 391–410.
Thomas, JK and H Zhang (2002). Inventory changes and future returns. Review of Accounting Studies, 7(2–3), 163–187.
Watts, RL and JL Zimmerman (1986). Positive Accounting Theory. Englewood Cliffs, NJ: Prentice Hall Inc.
Waymire, G (1985). Earnings volatility and voluntary management forecast disclosure. Journal of Accounting Research, 23(1), 268–295.
Wilson, WM (2008). An empirical analysis of the decline in the information content of earnings following restatements. The Accounting Review, 83(2), 519–548.
Wu, Y (2004). The impact of public opinion on board structure changes, director career progression and CEO turnover: Evidence from CalPERS' corporate governance program. Journal of Corporate Finance, 10(1), 199–227.
Xie, H (2001). The mispricing of abnormal accruals. The Accounting Review, 76(1), 357–373.
Biographical Note
Akinloye Akindayomi is an Assistant Professor of Accounting in the Department of Accounting and Finance at the Charlton College of Business, University of Massachusetts Dartmouth, USA. He teaches Taxation, Financial Accounting, and International Accounting at both the graduate and undergraduate levels at his university. He holds a PhD specializing in Accounting from the University of Calgary, Canada. His research interests include taxation, international accounting, executive compensation, and corporate governance, among others. His publications have appeared, and are forthcoming, in peer-reviewed academic journals in diverse areas of Accounting and Business as well as in professional journals.
Part V Information Systems in Supply Chain Management
Chapter 27
Supply Chain Enabling Technologies: Management Challenges and Opportunities DAMIEN POWER Department of Management & Marketing, The University of Melbourne, Parkville VIC 3010, Australia [email protected]
As technologies become more open and easily applied, they will be used extensively, as they promise competitive advantage through more efficient and effective management of supply chain processes. At the same time, however, they represent a potential trap for organizations that believe they will supplant the need for effective strategy formulation, appropriate management of resources, or effective change and knowledge management systems. The management of the supply chain will be facilitated by more sophisticated technologies, but the organizations likely to benefit most will still be those able to choose, implement, and manage technologies appropriate to the requirements of their trading partner networks. A set of propositions and a framework explaining the nature of these relationships are presented. Keywords: Supply chain; technology; strategy; knowledge; resources.
1. Introduction
The efficient and effective management of the supply chain has become non-negotiable for firms wishing to remain competitive in global markets. Whilst firms have long faced the problem of maximizing the efficiency and effectiveness of internal operations, it is increasingly apparent that local performance is not entirely within their control. The reliability and quality of supply have long been potential problems for organizations competing on the basis of quality, cost, and/or delivery. On the demand side, accurate and timely information about real demand patterns has historically been difficult both to access and to process. Although this has driven improvements in both qualitative and quantitative forecasting methods, these methods rest on the assumption that the future will resemble the past and are therefore inherently error-prone. That assumption holds less and less for many firms, as their business environments change rapidly and competitive dynamics do not remain static. Significant changes in technology, demographics, globalization
of both markets and industries, and the potential for offering customized products at lower cost have highlighted the need to focus on the performance of supply chains rather than of individual firms. Managers have become increasingly aware that the needs of all stakeholders can be better served through this broader, more strategic approach. Whilst the management of the supply chain has become a competitive imperative for many organizations, new technologies offer an opportunity to leverage the interfaces and inter-dependencies created through trading relationships. The emergence of the Internet, Web services, service-oriented architectures, and associated technologies offers implementation options at a lower total cost of ownership. These advances further facilitate the extended application of product numbering, Radio Frequency Identification (RFID), and bar-coding, along with supply chain management methodologies including quick response, vendor-managed inventories, cross-docking, and collaborative planning, forecasting, and replenishment. These new technologies are also facilitating the development of seamless information and knowledge processes among trading partners. A “brave new world” of information technology-driven change in business processes and relationships offers significant opportunities for firms with the will and capability to access it. There is evidence, however, suggesting that this vision is some way from being grounded in practice. The idea that “integration” of physical flows of goods, information technology, information flows, and collaboration between firms can produce a connected, cohesive, coherent, and competitive supply chain has substantial intuitive appeal. The reality has proved at best difficult and at worst fraught with pitfalls and problems. It does appear, however, that firms able to achieve a significant degree of integration report significant business benefits. Indeed, the evidence suggests that firms achieving the highest levels of integration have developed a core competence built around the coordination and management of the supply chain; for those organizations, the management of the supply chain has become the core of their competitive strategy. Understanding the role that information technology plays in providing such an advantage is therefore pivotal to understanding how firms can best approach difficult investment decisions in this area. Of critical importance is a clear focus not only on what the technologies may offer, but also on the organizational implications of technology choice and implementation. In this context, this chapter seeks to highlight the dilemmas and opportunities facing managers when making such choices in a supply chain context. A list of propositions for management practice is presented, along with a model providing a framework for more effective decision making. In particular, the interfaces among strategic intent, knowledge management, and resource management are shown to be important prerequisites of effective technology implementation.
2. The Importance of Information Flows
Effective application of information technology to the integration of supply chain activities reduces levels of complexity, particularly by enabling more effective transfer of knowledge (Cha, 2008). Senge (1990) defines two types of complexity: detail and dynamic. Detail complexity exists when there are many variables to be managed. Dynamic complexity exists where cause and effect are separated, and difficult to associate, in both time and space: . . . situations where cause and effect are subtle, and effects over time of interventions are not obvious. Conventional forecasting, planning and analysis methods are not equipped to deal with dynamic complexity. (Senge, 1990, p. 71).
The “bullwhip effect” is an example of a typical supply chain management outcome resulting from circumstances that are dynamically complex, and was first highlighted by Forrester (1958, 1961). Chen et al. have defined this effect thus: This phenomenon states that the demand process seen by a given stage of a supply chain becomes more variable as we move up the supply chain (i.e., as one moves away from customer demand). In other words, the orders seen by the upstream stages of a supply chain are more variable than the orders seen by the downstream stages. (Chen et al., 2000, p. 269).
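For orientation, Chen et al. (2000) also quantify this amplification analytically. Restating a commonly cited form of their result (the exact conditions are spelled out in the original paper and should be checked there): for a single stage facing i.i.d. demand D, forecasting with a p-period moving average, and following an order-up-to policy with replenishment lead time L, the variance of the orders q it places satisfies

Var(q) / Var(D) ≥ 1 + 2L/p + 2L²/p²,

so shorter forecast windows and longer lead times both increase the variability passed upstream.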
Symptomatic of this effect are excessive inventories, low customer service levels, inaccurate and untimely capacity planning, lost income, increased transportation costs, and ineffective production scheduling (Barua and Lee, 1997). Lee et al. also state that access to, and management of, information is critical to minimizing this type of variation: Innovative companies in different industries have found that they can control the bullwhip effect and improve their supply chain performance by coordinating information and planning along the supply chain. (Lee et al., 1997).
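To make the mechanism concrete, the following short Python sketch simulates such a serial chain. It is an illustration only, not a model from the chapter or the cited literature: the demand distribution, the moving-average forecast, the two-sigma safety stock, and all parameter values are arbitrary assumptions chosen to exhibit the effect. Running it typically prints order variances that grow from the retailer (stage 0) upstream, which is exactly the bullwhip pattern described above.

    import random
    import statistics

    def simulate_bullwhip(n_stages=4, n_periods=500, lead_time=2,
                          window=5, seed=1):
        """Toy serial supply chain (all assumptions illustrative).

        Each stage forecasts the demand it observes with a moving average
        and follows an order-up-to policy; the orders one stage places
        become the demand seen by the next stage upstream.  Returns the
        variance of orders placed at each stage (stage 0 = retailer).
        """
        rng = random.Random(seed)
        demand_seen = [[] for _ in range(n_stages)]
        orders_placed = [[] for _ in range(n_stages)]
        prev_base_stock = [None] * n_stages

        for _ in range(n_periods):
            demand = max(0.0, rng.gauss(100, 10))  # end-customer demand
            for stage in range(n_stages):
                demand_seen[stage].append(demand)
                recent = demand_seen[stage][-window:]
                forecast = sum(recent) / len(recent)
                sigma = statistics.pstdev(recent)
                # Order-up-to level: expected demand over the lead time
                # plus a crude two-sigma safety stock.
                base_stock = forecast * (lead_time + 1) + 2.0 * sigma
                if prev_base_stock[stage] is None:
                    order = demand
                else:
                    # Replace what was consumed and adjust for the change
                    # in the target level; this is where swings amplify.
                    order = max(0.0, demand + base_stock - prev_base_stock[stage])
                prev_base_stock[stage] = base_stock
                orders_placed[stage].append(order)
                demand = order  # becomes demand for the next stage upstream

        return [statistics.pvariance(o) for o in orders_placed]

    if __name__ == "__main__":
        for i, v in enumerate(simulate_bullwhip()):
            print(f"stage {i} (0 = retailer): order variance = {v:8.1f}")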
Attribution of the causes of the bullwhip effect has varied since it was first observed. Forrester would say that the behavior of the system is a function of the interaction of structure (“effective organization structure and information sources”), delays (time between cause and effect/decision and implementation, etc.), and amplification (the inherent effects of policies) (Forrester, 1961, p. 348). Sterman (1989) sees the main influence as irrational human behavior driven by a misunderstanding of real demand. Lee et al. (1997) believe that the problem lies in the infrastructure of the supply chain itself, identifying practices such as demand forecast updating, order batching, price fluctuation and rationing, and shortage gaming as the key drivers. Where these accounts converge is on the importance of reliable and timely information, although Forrester makes the point that timely information is not necessarily the
solution on its own: Carried to its extreme, the result of more timely information can be harmful. The effect can be to cause the manager to put more and more stress on short-range decisions . . . the system improvements did not result so much from changing the type of information available or its quality nearly so much as from changing the sources of information used and the nature of the decision based on the information. (Forrester, 1961, p. 427).
Phenomena such as the bullwhip effect are key drivers of the focus managers are placing on understanding how best to facilitate flows of information between trading partners in the supply chain. The need to reduce the dynamic complexity of the supply chain is recognized; the problem of how to reduce it, however, is not so clearly understood.
3. Strategic Information Technology Management in the Supply Chain
Information is therefore critical to enabling higher levels of supply chain integration, but it is not sufficient unless knowledge is created and applied for better decision-making (Ngai, 2007). This has led to knowledge being conceptualized as a strategic resource in a supply chain context (Hult et al., 2006; Wu, 2006). The role of technologies that enable greater access to and sharing of information in facilitating knowledge transfer is therefore both pivotal and problematic. On the one hand, the use and application of such technologies appear to be non-negotiable. On the other hand, their open nature and ease of implementation have been shown to place current processes and management systems under stress (Power and Singh, 2007). The opportunities and dilemmas that the application of technology to supply chain management presents therefore highlight the importance of coherent and coordinated strategies for its application and implementation. This has led to a body of research aimed at identifying what strategies work best, and how managers can develop management systems to leverage information technology investments (van Donk, 2008). This research has produced a number of propositions, including: the importance of organizational learning as an enabler of more effective technology outsourcing strategies (Cha, 2008); the need to account for the economic implications of investment in emerging frontier technologies such as RFID and the EPC (Bottani, 2008); the important role of collaboration in promoting the effective use and application of e-markets (Nucciarelli, 2008); the value of IT investment in the supply chain generally (Chae, 2005; Lin, 2006); the role information technology can play in facilitating strategic integration of specific functions such as procurement (Gonzalez-Benito, 2007); the importance of firm characteristics such as size in moderating the return on supply chain IT investment (Jin, 2006); the firm-specific nature of many benefits, explained by the unique capabilities and resources some firms are able to access and deploy (Wu, 2006); and
the need for firms to be aware of institutional pressures when selecting and assessing IT alternatives, particularly in terms of the potential for distortion of appropriate choice criteria (Lai, 2006).
4. Synthesis
Synthesizing the findings of these studies highlights the fundamental problem facing managers investing in IT solutions aimed at facilitating more effective management of the supply chain. On the one hand, they are confronted with problems associated with poor coordination of the supply chain, as evidenced by phenomena such as the bullwhip effect and the complexities inherent in supply chain structures. On the other, they are confronted with a plethora of choices in technology options and applications, the benefits of which are not easily realized and are at best contingent upon an array of organizational and contextual factors. This dilemma points to a range of areas of enquiry that could provide insights into where managers need to focus their attention when assessing options for investment in such technologies.
4.1. Problem: How can a Manager Best Assess Appropriate Determinants of the Extent of Implementation?
Much of the literature proposes an extensive set of benefits for organizations that choose to invest the time and money to implement a range of supply chain management strategies. A contrasting, parallel theme is that implementation has been limited for many of the technologies and practices. This apparent paradox can be partially explained by a number of factors. Emergent themes in the literature relevant to this problem include the degree of flexibility that is required, and/or can be enabled, by investment in particular technologies (Wadhwa, 2008); the ability to design supply chain systems to enable management and control at strategic, tactical, and operational levels (Manzini, 2008); the ability to strike a balance between investment in technology for integration with both the supply and demand sides of the supply chain (Ketikidis, 2008); and the need to be able to simulate the cost/benefit implications of technology investments to inform decisions about appropriate implementation strategies (Delen, 2007).
4.2. Problem: What is Meant by a Strategic Approach to Supply Chain Management Technology Choice?
The strategic nature (and importance) of supply chain management initiatives is also an issue that receives considerable coverage in the literature. Predominant themes include the need to develop a culture based on knowledge identification, sharing, and dissemination in order to extract consistent returns (Hult et al., 2007); clearly understood and communicated strategic objectives, such that a shared
vision exists between trading partners (Si, 2007); understanding of the linkages and complementarities between corporate, business, and supply chain strategies (Veselko, 2008); and recognition that the pursuit of “best value” at the supply chain level is a function of the capability to coordinate and integrate strategic priorities among trading partners (Ketchen, 2007). This leads one to ask what it is about a firm that makes it prepared to move beyond the limits of its own enterprise and attempt to manage the interfaces with its trading partners. What differentiates a firm that is willing to confront Forrester’s three elements of structure, delays, and amplification (Forrester, 1958, 1961) from one that either is conscious of them but avoids them, or fails to recognize their existence?
4.3. Problem: How are Technology Choice, Strategy, and Performance Linked?
Despite the attractiveness of technology as an investment option for managers, the evidence suggests that benefits do not automatically flow from supply chain management initiatives (Power and Singh, 2007). The difficulty of establishing clear linkages between particular investments and business benefits is also covered in other research studies (Gunasekaran, 2007). As well as the problem of identifying whether particular organizations derive benefits, it is also difficult to isolate the strategic drivers of those benefits. For instance, if a firm that has invested in a particular technology is identified as likely to experience significant cycle time reductions, can this outcome be associated with a combination of strategic, organizational, and cultural factors, or attributed to other contextual factors? Rather than just identifying that implementation can lead to certain benefits, it is important for managers to be able to establish causal links between investment, strategic intent, and performance. Porter (2001) has argued that the Internet has many negative effects on the nature of the competitive environment, and in particular on the capability of organizations to maintain a competitive advantage over time. The argument is based on the open and accessible nature of Internet-based applications and the associated low cost of ownership. This view, to some extent, negates some of the benefits promoted as accruing from the use of technology to integrate the supply chain. In Porter’s view, the benefits will be real, but will not provide a sustainable source of differentiation over time. Further, such technologies highlight rather than obviate the need for coordinated and coherent strategies among trading partners. The problem for managers is in knowing how to link strategy, technology choice, and performance.
4.4. Problem: What are the Practical Implications of Technology Choice for Business Methods and Operating Philosophies?
There is a growing body of literature that speaks of the emergence of new business models and the need for new organizational forms. The identification of three distinct e-business models — the “e-broker” (e.g., Amazon.com) (Keblis, 2006), the
manufacturer model (e.g., Dell Computer, Cisco Systems) (Stewart and O’Brien, 2005), and the auction model (e.g., eBay) — has been complemented by other examples, including hybrid models combining aspects of direct selling (i.e., as practiced by Dell using the Internet) with traditional models providing high levels of customer service and support (Nucciarelli, 2008); a shift from reliance on traditional factors of production to a business model driven by knowledge assets and human capital (Hult et al., 2007); and the development of competitive networks and virtual organizations configured for flexibility and responsiveness (Wadhwa, 2008). The likelihood that technology applied in the supply chain will facilitate new business models and emerging organizational forms creates both a problem and an opportunity for managers. The opportunity lies in the potential for competitive viability that new business approaches offer; the problem lies in making informed choices. This further highlights the need for a framework around which to build such choices.
5. Propositions for Management Practice
5.1. Proposition 1: A “Full” Implementation will not Necessarily be Appropriate for all Organizations
One assumption that organizations often make is that if competitors are implementing new technologies, they also need to do so. A common theme encountered is that comprehensive implementation across multiple functions and with many trading partners represents unquestionable “best practice” in supply chain management. Although there is evidence suggesting that organizations implementing these technologies broadly are indeed better placed to use technology to streamline supply chain operations, this does not mean that “one size fits all.” Supply chains are complex groupings of disparate trading partners, each with its own objectives, competitive context, and operational limitations. It would be overly simplistic to suggest that there is one best way for technology to be implemented and applied in all cases. Each organization needs to assess how the available technologies and methodologies can best be applied in its particular context. Fundamental issues such as position in the supply chain, distance (both in time and in space) from customers and end users, relative power of partners, technological maturity of the organization (and/or the industry), nature of inputs, complexity of conversion processes, and nature of end products can all influence how an organization should approach implementation. To capture the potential benefits of new technologies, these factors need to be understood and accounted for, and an appropriate approach formulated.
5.2. Proposition 2: Organizational Self-analysis is a Critical Pre-requisite for Successful Implementation
Rather than focusing on the technologies and their potential benefits, managers need first to focus on understanding the unique needs of their organizations.
“Organization know thyself” would perhaps be an appropriate re-phrasing of the ancient Delphic maxim in this context. Business benefits derive from a combination of effective strategy formulation, knowledge transfer, and organizational capability; the application and use of the technology are an outcome of this same set of factors. To extract the maximum benefit, managers are therefore far better served by analyzing and understanding these issues than by going straight to the technology. An effective method for collecting, assimilating, and understanding the competitive environment is critical, as is the ability to question the basic assumptions on which a response to those conditions would be based. Developing the ability to “know what you know,” and having this knowledge available where it needs to be known, is also of fundamental importance to the realization of potential business benefits. Further, knowing what you need to know is only useful if it can be translated into effective action through the organization’s capability to implement effective strategies. The insight for managers is that these rules appear to apply to the emerging Internet-based technologies just as they applied in the past to long-established technologies.
5.3. Proposition 3: More Information Does Not Necessarily Equate to Better Business Performance
Another theme encountered in much of the literature is that the extended use of new technologies improves information flows, and that this will by definition lead to more effective supply chains and better business performance. This assumption rests on the premise that organizations will be able to interpret and use the additional information. Many organizations have significant problems dealing effectively with the information already available to them. Can we safely assume that they will be able to deal with this additional load any differently? It is not unreasonable to propose that organizational capability is a significant determinant of technology-related performance. It is also the case that the difficulty of implementation has been significantly reduced for Internet-based methods built on open standards. The implication is that many more companies will be able to implement without incurring substantial cost penalties, and that they will therefore have access to more and more data as a result. From a practical viewpoint, these companies will be faced with the problem of handling, analyzing, and managing this data, and turning it into value-adding information. The companies best placed to do this will be those with effective knowledge transfer processes and higher levels of management capability. The greater availability of information, coupled with and enabled by technologies that are easier to access cost-effectively, could therefore represent a trap for organizations that do not develop the capability required to deal with it effectively. Even having developed this capability, many organizations will falter in their ability to leverage this information: having the information, understanding it, and planning to use it in a particular way is one thing, but implementing and realizing these plans is another. It is therefore
imperative that organizations looking to implement technology-based supply chain management solutions focus less on the technology per se than on their ability to leverage it for maximum return.
5.4. Proposition 4: The Strategy Formulation Process is More Important Than the Content of Objectives and Plans
Many organizations focus on developing clearly articulated plans and objectives, mission statements, vision statements, and the like. The content of plans and objectives (i.e., for the implementation of technology-based supply chain solutions) represents an important blueprint for implementation, but it is the outcome of effective strategy development processes and of the extent of knowledge of the technologies. The process by which strategic intent is challenged and developed is far more important in determining the extent of implementation, and the resulting business performance, than the stated plans and objectives themselves. The emphasis many organizations place on the development of plans and objectives is therefore misplaced if it is not supported by an appropriate and effective process for strategy formulation. The clear implication is that organizations wishing to implement such technologies need to start by assessing the integrity of this process, and indeed by questioning some fundamental assumptions underpinning business operations. Plans and objectives will be an outcome of this process, but should not preempt it. The temptation to move quickly and act decisively should be tempered by the fundamental need to understand environmental conditions, be cognizant of business assumptions, and develop appropriate alternative strategic scenarios.
5.5. Proposition 5: Effective Change Management is a Critical Pre-requisite for Successful Implementation
The capacity of organizations to manage change is proposed to be an important factor in determining the success or otherwise of supply chain technology implementations. As with a number of the previous propositions, the message is that the focus should move from the technologies per se to the capability of the organization to effectively reengineer business processes. This is particularly so for organizations attempting to better manage supply chains. If an organization struggles to change its own internal processes, it will have significant difficulty coming to grips with the more complex issues inherent in integrating processes across organizational boundaries. The extended use of technology to enable better integration of supply chain processes is guaranteed to increase the pressure for change to traditional ways of operating. This requirement is undiminished, and may even be amplified, by the lower total cost of ownership of Internet-based technologies. Although they make it easier to become a player in the use of such applications, more easily accessible solutions based on open standards will still need to be supported by process reengineering to produce business benefits. In this
sense, technology represents an opportunity for supply chain process redesign rather than a “technological silver bullet.”
5.6. Proposition 6: Plan to Invest in Infrastructure to Support Investment in Technology
The need to invest in infrastructure, as well as in the technologies and systems themselves, is also an important emergent theme. In particular, investment in training and the reengineering of business processes are prominent issues that organizations find need to be addressed as the implementation of information systems in the supply chain is extended. One expectation of the development of Internet-based technologies has been a reduced requirement for this type of investment, owing to their open and readily accessible nature. As a result, some have argued that the major barriers to adoption (i.e., of long-established technologies) have been eradicated, and that a new age of Internet-enabled supply chain management is upon us. Although there are indications that this opportunity does indeed exist, the need to invest in other infrastructure items such as training and change management appears undiminished. The implication for management is that although emerging technologies offer significantly reduced costs of entry, they still need to be supported by effective human resource development and change management strategies. They will also create pressure for change in the way work is done within the organization, and between trading partners. As a result, investment in these areas will be required to ensure that the benefits of these technologies can be realized.
6. A Model for Management Practice
6.1. Explanation of the Model
The model represents significant processes and interrelationships that individual organizations need to be aware of when assessing, implementing, and monitoring the performance of technology-enabled supply chain initiatives. The model is shown in abstract form in Fig. 1. The black arrows indicate paths between model components, while the dotted lines represent feedback loops critical to the operation of the model. It is not intended to be a causal model (i.e., in the sense that a structural equation model (SEM) represents theorized causal relationships), but rather an abstract representation of major processes and their inter-dependencies. The model begins by indicating that a robust strategic logic (the rationale for achieving organizational goals, or strategic intent) is a prerequisite for effective knowledge management and resource management. This strategic logic can be informed by many processes, including challenging underlying
Figure 1. Implementation strategy model.
business assumptions through both internal and external processes (Sanchez, 1997; Sanchez and Heene, 1997). The collection and use of data capturing the assumptions underlying the strategies employed are critical to ensuring the integrity of the managerial decision-making processes that direct decisions such as technology investments. At a strategic level of operation, the relationship between cause and effect is usually ambiguous (e.g., links between strategic intent and organizational success can be difficult to establish) (Sanchez and Heene, 1997). As such, dynamic complexity is also high. At the same time, access to accurate and timely data at a strategic level can be critical for the individual firm and its trading partners, due to the dynamically complex nature of supply chain relationships. In simple terms, if managers use only internal operating data to validate strategic logic (i.e., to inform technology investment decisions in a supply chain context), they run the risk of failing to recognize the cause-and-effect links critical to getting these decisions right (Sanchez and Heene, 1997). An appropriate strategic intent is therefore critical to appropriate decision making in this context, and the importance of appropriate and timely information flows between trading partners is thus given a strategic context. The model highlights the important role this strategic logic plays in informing both the management of knowledge in the organization and the management of resources within and between trading partners. As such, it is a critical factor in the ability to modify strategies for the implementation and use of enabling technologies and methods for the management of the supply chain. This process of modification is further enhanced by the four critical feedback loops connecting back into the organization’s strategic logic. As knowledge management becomes more effective, one outcome for the firm is the improvement and refinement of the process for developing strategic logic. As resource management becomes more effective, organizational understanding of the integrity of the strategic logic is enhanced. The outcomes of the implementation process need to be fed back into the strategy
development process to inform and modify it based on actual experience. Lastly, performance measurement needs to capture outcomes that can be related to the strategy enacted, and these need to be fed back into the process by which that strategy was determined. The management of knowledge in the organization is a process informed by its strategic logic, and this process in turn informs the management of resources. An important feedback loop from resource management back to knowledge management represents a process of experiential learning that is critical for effective organizational learning. This feedback loop is also critical for the ongoing updating of strategic logic (through the connection with knowledge management). The knowledge management process in turn (through the links described above with the strategy development and resource management processes) provides the primary input into the articulated plans and objectives of the firm. The position of stated plans and objectives in the model is critical, as it embodies the proposition that the contents of plans and objectives need to serve as a blueprint for action. This blueprint should reflect the results of the processes preceding it (described above) in order for these plans and objectives to support organizational cohesion. The resource management process is informed by both strategic logic and the knowledge management process, and feeds back into both (as described above). This process also has a direct effect on the implementation of the stated plans and objectives, reflecting the identification, deployment, and use of the resources needed to support this phase. The feedback loop from implementation back to resource management is also critical, as it represents the need to modify resource management decisions based on practical experience. Implementation also feeds back into the strategic logic of the organization, showing the need for cognitive processes to be challenged from within. This represents the ability of key stakeholders involved in implementation to question and (perhaps) cause a re-thinking of strategic assumptions based on experience with the use of the technologies. The final process in the model is that of assessing and measuring performance. Research has indicated that performance is more an outcome of the strategy development processes, organizational capability, and knowledge of the technologies than of the extent of their implementation. It is therefore critical for the outcomes of this process to be fed back into the strategic logic of the organization, as well as into the knowledge management process. This is important because the ongoing performance effect of the technologies implemented will be largely determined by the ability to assimilate and understand progress against expectations. There is no direct feedback into the resource management process, because resource management will be more effectively adjusted as a result of alterations in the strategic logic and the management of knowledge in the organization, rather than preemptively. Otherwise, resource management decisions may not support strategic objectives.
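Since Fig. 1 itself is not reproduced here, the following short Python sketch records the relationships described in this section as a directed graph. It is a reconstruction inferred from the prose, not the author's own artifact; the node names are labels invented for this sketch.

    # Reconstruction of the Fig. 1 relationships, inferred from the prose
    # above.  "path" edges mirror the solid arrows; "feedback" edges mirror
    # the dotted feedback loops described in the text.
    MODEL_EDGES = {
        ("strategic logic", "knowledge management"): "path",
        ("strategic logic", "resource management"): "path",
        ("knowledge management", "resource management"): "path",
        ("knowledge management", "plans and objectives"): "path",
        ("plans and objectives", "implementation"): "path",
        ("resource management", "implementation"): "path",
        ("implementation", "performance measurement"): "path",
        # the four critical feedback loops into strategic logic
        ("knowledge management", "strategic logic"): "feedback",
        ("resource management", "strategic logic"): "feedback",
        ("implementation", "strategic logic"): "feedback",
        ("performance measurement", "strategic logic"): "feedback",
        # other feedback loops named in the text; note there is no
        # feedback edge from performance measurement to resource management
        ("resource management", "knowledge management"): "feedback",
        ("implementation", "resource management"): "feedback",
        ("performance measurement", "knowledge management"): "feedback",
    }

    if __name__ == "__main__":
        into_strategy = sorted(src for (src, dst), kind in MODEL_EDGES.items()
                               if kind == "feedback" and dst == "strategic logic")
        print("Feedback loops into strategic logic:", into_strategy)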
7. Practical Application of the Model
7.1. High-Level Abstraction of Interrelationships
The model could be used by managers to provide a simplified insight into the important relationships between critical processes relevant to the implementation of supply chain management technologies. These relationships are often understood intuitively, yet not clearly articulated. As a result, managers often find themselves taking actions that they suspect will not yield optimal results, yet they cannot clearly justify that intuition. Alternatively, actions are taken that are expected to yield significant benefits, yet the benefits are not realized; or benefits do materialize, but the reasons are not easily explained. This model could help managers faced with these dilemmas, although it is not expected to provide definitive answers or explanations in all circumstances; business environments are far too complex for any model to do this. However, an understanding of some of the key factors at work, and of how they can be positioned and managed to increase the likelihood of ongoing success, would be beneficial for practitioners.
7.2. First Stage Organizational Diagnostic
The proposed model would also be useful as a starting point for managers wishing to identify operational disconnects and opportunities for improvement. By providing a framework against which to analyze the relative strengths and weaknesses of organizational processes, the model could point to problem areas affecting the performance of technology implementations. For example, it might reveal a tendency for management to make adjustments in managing key resources without reference to critical knowledge management issues, and without assessing the impact on strategic fit. Alternatively, poor data quality or significant time delays could be identified as characteristic of the operation of critical feedback loops. Once problems are recognized at this level, drilling down into the underlying processes will provide data relevant to identifying and eradicating their causes.
7.3. High-Level Process Map
The need to reengineer processes, and to manage change generally, is recognized in the literature as an important factor in determining the effectiveness of these types of technologies. The implementation of new technologies for the better management of supply chains will inevitably create pressure for change in multiple processes operating within and across firm boundaries. A common problem faced by many companies when implementing is the realization (often well into the implementation process) that they need to undertake a major organizational retro-fit to support the technology. Other organizations attempt to anticipate this problem, but have difficulty identifying where to start. This model could be used
as a means of identifying high-level generic processes that can be decomposed into lower levels for mapping and analysis. For example, the generic knowledge management process could be broken down into many different sets of activities depending on the nature and context of the organization. For a large, complex company, this could encompass many processes, some related to the IT side of managing knowledge and some to the behavioral aspects. These could be identified, mapped, analyzed, and classified. In a smaller, less complex company, many of these processes may be less formal, but they are nonetheless highly important. By starting at a high level of abstraction, managers could use the model to identify the areas where their attention can be trained for maximum leverage.
7.4. Supply Chain Partner Benchmarking Tool
Beyond the operation of the individual enterprise, the model could be used as a means of comparing and assessing the relative strengths and weaknesses of critical processes between trading partners. Given that the research indicated that these factors and relationships are important for individual organizations, it can be expected that the management of supply chains made up of many different organizations could be improved by a common understanding of their significance. The use of the model as a framework around which to benchmark processes and performance assessment systems could provide partners with common points of reference, as well as focal points for the analysis of inter-organizational processes. For example, an organization wishing to assess options for implementation could use this model as a starting point to benchmark, with other firms and/or trading partners, the process by which knowledge is managed and its outputs fed back into the strategy formulation process. It might also use the model to assess methods used in other companies for ensuring that resource deployment does not pre-empt strategic impact assessment. Another option would be to assess the applicability of particular performance assessment methods to the requirements of individual organizations.
8. Further Research
8.1. Changing Application of Technologies
As technologies for the management of the supply chain develop, their use and application will change. The capability of organizations to interact and conduct business using electronically enabled processes is becoming less complex and more seamless. The traditional barriers to entry into the world of electronic trading have been lowered by open standards, the use of integration technologies, and the distributed nature of Internet-based solutions such as Web services and the Simple Object Access Protocol (SOAP). At the same time, however, it
is unlikely that adoption and application will be universal or homogeneous across firms. Will application and adoption be identifiably different across different types of organizations (e.g., large versus small, manufacturers versus service industries, not-for-profit versus for-profit)? Will security problems continue to moderate the rate and emphasis of adoption and use among trading partners? If so, how will they be addressed, and will they ultimately set a limit on the extent of adoption? Will some industry sectors be limited in their ability to extract value from these technologies by structural and/or cultural constraints? How will the technologies evolve through time, and what will be the implications of this change for the configuration and communication protocols tying together trading partner networks? With rapid change in this area, and high expectations of radical improvement in operations, it would be extremely useful to be able to monitor and document how, where, and why the technologies are being adopted and applied.
8.2. Barriers to Adoption and Effective Use of Emerging Technologies
If we accept that changes in system dynamics usually lead to unintended consequences, it is conceivable that the application of supply chain-enabling technologies could shift bottlenecks to other areas of the supply chain (cf. the theory of constraints; see Goldratt, 1984). Analysis of the bullwhip effect has shown that the interaction between structure (“effective organization structure and information sources”), delays (time between cause and effect/decision and implementation, etc.), and amplification (the inherent effects of policies) (Forrester, 1961, p. 348) is a primary determinant of the effectiveness of supply chain interactions. Extensive adoption of these technologies may reduce delays by closing the gap between cause and effect, but it will not automatically alter the other two variables (i.e., structure and amplification). Further, it is possible that structural and policy constraints could themselves become impediments to effective adoption and application. The ability to capture value will in many cases be subject to effects created by “organizational legacy systems” (e.g., structures and decision-making processes out of step with the imperatives created by the use of new technologies). Even the capability of new technology to significantly reduce delays in supply chains is not without limits, owing largely to the inertia created by investment in “IT legacy systems” in many firms. Although technologies such as Web services and SOAP are developing quickly, seamless transitions in use and application are still the exception. Moreover, there is evidence to suggest that the rapid rate of technological change could itself impede adoption, as organizations face higher levels of risk (i.e., of investment redundancy and return on investment (ROI) constraints) in purchasing and implementing solutions. Given the level of expectation, and the potential emerging technologies offer for improved supply chain performance, the identification of major impediments, as well as of potential solutions, will be of critical importance.
8.3. Further Development and Empirical Testing of the Framework
The model provides an initial framework around which managers can develop strategies that inform decisions and provide higher levels of confidence in positive technology investment outcomes. Further, the model can be used as a template against which theories can be tested and evaluated. In particular, it could provide the basis for testing some of the structural and policy (amplification)-related issues identified above. The constructs represented can be further refined to capture the full complexity of their structure, and operationalized for empirical testing. The model could also provide the framework for a longitudinal comparative study aimed at identifying better practices in adoption, implementation, and use. Such a study could become an important resource for participating organizations as part of an ongoing benchmarking exercise, while providing researchers with valuable longitudinal data on the use and application of leading-edge technologies.
9. Conclusions
As technologies become more open and easily applied, they will be used extensively, as they promise competitive advantage through the more efficient and effective management of supply chain processes. At the same time, however, they represent a potential trap for managers who think that they will supplant the need for effective strategy formulation, appropriate management of resources, or effective change and knowledge management systems. The management of the supply chain will be facilitated by more sophisticated tools, but the organizations likely to benefit most will still be those able to choose, implement, and manage the appropriate set of tools in the right combinations. This chapter has sought to identify and analyze the challenges and opportunities facing managers when implementing technologies enabling the management of supply chains. The results also show that organizational performance (and by extension the performance of supply chains) will still be determined by issues that are, to a large extent, independent of these technologies. Managers hoping to find in these technologies a source of competitive advantage, without addressing fundamental issues of strategy and organizational competence, will be disappointed. Put another way, the more things change in the world of supply chain technology implementation, the more they stay the same. The firms that can be expected to extract the most value from the brave new world of open system-based connectivity and Internet-enabled collaboration will be those that have been able to adapt their management processes to the requirements of their competitive domain. The significance of understanding this when moving forward cannot be overstated, as it implies a clear focus on matching organizational capability with an understanding of technological requirements, rather than on the technologies per se. This focus should provide managers with the clearer understanding, discrimination, and know-how to enable them to make
more appropriate technology choices, implement them successfully, and ultimately extract value.
References
Barua, A and B Lee (1997). An economic analysis of the introduction of an Electronic Data Interchange system. Information Systems Research, 8(4), 398–422.
Bottani, E (2008). Economical assessment of the impact of RFID technology and EPC system on the fast-moving consumer goods supply chain. International Journal of Production Economics, 112(2), 548–569.
Cha, H (2008). Managing the knowledge supply chain: An organizational learning model of information technology offshore outsourcing. MIS Quarterly, 32(2), 281–306.
Chae, B (2005). Information technology and supply chain collaboration: Moderating effects of existing relationships between partners. IEEE Transactions on Engineering Management, 52(4), 440–448.
Chen, F, Z Drezner, JK Ryan and D Simchi-Levi (2000). Quantifying the bullwhip effect in a simple supply chain: The impact of forecasting, lead times, and information. Management Science, 46(3), 436–443.
Delen, D (2007). RFID for better supply-chain management through enhanced information visibility. Production and Operations Management, 16(5), 613–624.
Forrester, J (1958). Industrial dynamics: A major breakthrough for decision makers. Harvard Business Review, 36(4), 37–66.
Forrester, JW (1961). Industrial Dynamics. Cambridge, MA: MIT Press.
Goldratt, EM (1984). The Goal. Great Barrington, MA: North River Press.
Gonzalez-Benito, J (2007). Information technology investment and operational performance in purchasing — the mediating role of supply chain management practices and strategic integration of purchasing. Industrial Management & Data Systems, 107(1–2), 201–228.
Gunasekaran, A (2007). Performance measures and metrics in logistics and supply chain management: A review of recent literature (1995–2004) for research and applications. International Journal of Production Research, 45(12), 2819–2840.
Hult, GT, DJ Ketchen, ST Cavusgil and RJ Calantone (2006). Knowledge as a strategic resource in supply chains. Journal of Operations Management, 24, 458–475.
Hult, GT, DJ Ketchen and M Arrfelt (2007). Strategic supply chain management: Improving performance through a culture of competitiveness and knowledge development. Strategic Management Journal, 28, 1035–1052.
Jin, B (2006). Performance implications of information technology implementation in an apparel supply chain. Supply Chain Management — An International Journal, 11(4), 309–316.
Keblis, M (2006). Improving customer service operations at Amazon.com. Interfaces, 36(5), 433–444.
Ketchen, D (2007). Bridging organization theory and supply chain management: The case of best value supply chains. Journal of Operations Management, 25(2), 573–580.
Ketikidis, P (2008). The use of information systems for logistics and supply chain management in South East Europe: Current status and future direction. Omega-International Journal of Management Science, 36(4), 592–599.
Lai, K (2006). Institutional isomorphism and the adoption of information technology for supply chain management. Computers in Industry, 57(1), 93–98.
Lee, HL, V Padmanabhan and SJ Whang (1997). The bullwhip effect in supply chains. Sloan Management Review, 38(3), 93–102.
Lin, C (2006). Identifying the pivotal role of participation strategies and information technology application for supply chain excellence. Industrial Management & Data Systems, 106(5–6), 739–756.
Manzini, R (2008). An integrated approach to the design and management of a supply chain system. International Journal of Advanced Manufacturing Technology, 37(5–6), 625–640.
Ngai, E (2007). Knowledge and information technology management in supply chain integration. International Journal of Production Research, 45(11), 2387–2389.
Nucciarelli, A (2008). Information technology and collaboration tools within the e-supply chain management of the aviation industry. Technology Analysis & Strategic Management, 20(2), 169–184.
Porter, ME (2001). Strategy and the Internet. Harvard Business Review, March, 63–78.
Power, DJ and PJ Singh (2007). The e-integration dilemma: The linkages between Internet technology application, trading partner relationships and structural change. Journal of Operations Management, 25(6), 1292–1310.
Sanchez, R (1997). Preparing for an uncertain future: Managing organisations for strategic flexibility. International Studies of Management and Organisation, 27(2), 71–95.
Sanchez, R and A Heene (1997). Managing for an uncertain future: A systems view of strategic organisational change. International Studies of Management and Organisation, 27(2), 21–42.
Senge, PM (1990). The Fifth Discipline: The Art and Practice of the Learning Organization. London: Random House.
Si, Y (2007). Strategies in supply chain management for the Trading Agent Competition. Electronic Commerce Research and Applications, 6(4), 369–382.
Sterman, J (1989). Modeling managerial behavior: Misperceptions of feedback in a dynamic decision making experiment. Management Science, 35(3), 321–339.
Stewart, TA and L O’Brien (2005). Execution without excuses. Harvard Business Review, 83(3), 102.
van Donk, D (2008). Challenges in relating supply chain management and information and communication technology — an introduction. International Journal of Operations & Production Management, 28(4), 308–312.
Veselko, G (2008). Coordinating supply chain management strategy with corporate strategy. Promet-Traffic & Transportation, 20(2), 119–124.
Wadhwa, S (2008). Framework for flexibility in dynamic supply chain management. International Journal of Production Research, 46(6), 1373–1404.
Wu, F (2006). The impact of information technology on supply chain capabilities and firm performance: A resource-based view. Industrial Marketing Management, 35(4), 493–504.
Biographical Note
Dr. Damien Power is an Associate Professor of Operations Management in the Department of Management and Marketing at The University of Melbourne. He holds a bachelor's degree in Manufacturing Management, a master's degree (by research) in Business, and a PhD focusing on developing strategic models for effective business-to-business e-commerce implementation. He is a certified fellow of APICS at the CFPIM level. His research interests cover project management, strategic supply and procurement, supply chain management and business-to-business e-commerce, and agile and lean manufacturing systems. He has published over 60 refereed international journal articles and conference papers, and has co-edited two books.
Chapter 28
Supply Chain Management AVNINDER GILL Department of Management, School of Business & Economics, Thompson Rivers University, 900 McGill Road, Kamloops, BC, V2C 5N3, Canada [email protected] M. ISHAQ BHATTI Department of Economics and Finance, UAE University & La Trobe University, Melbourne, Australia [email protected]
The present chapter provides an introductory account of the supply chain management area by discussing major channels, information flows, and the main drivers and building blocks of supply chains. The chapter also introduces the decision hierarchy, performance metrics, the value of information sharing, and the bullwhip effect in supply chain management. Finally, it presents a set-covering approach to supply chain network design and illustrates it with the help of a numerical example. Keywords: Supply chain management; bullwhip effect; logistics network design; set-covering approach.
1. Introduction
Competition in the business world is becoming fiercer due to global sourcing and the heightened expectations of customers. As a result, companies want to improve their customer service levels and reduce costs. In the past, firms tried different approaches such as lean production, JIT, cellular manufacturing, and flexible manufacturing systems, but the efforts were localized in the sense that the improvements mostly occurred at the plant or shop-floor level. The next logical step was to look outside the corporate walls and identify new areas for cost improvement. The answer came in the form of forging strategic alliances with other companies. Many companies began to pursue closer partnerships with their suppliers and client companies, and as a result we started to see the proliferation of joint ventures, multinationals, and global alliances. Corporate reengineering strategies and the emphasis on core competencies required these companies to reduce their ownership of raw
materials and allied activities. Recently, these functions have been subcontracted to companies who can perform them better and for less. This led to the phenomenon of outsourcing on a global scale and expanded partnership networks beyond imagination. Various international trade agreements were formed. Among them are the North American Free Trade Agreement (NAFTA), the European Union (EU), ASEAN, SAARC, and the Gulf Cooperation Council (GCC), which further boosted networking efforts. The advancement in information technology enabled these companies to share data and information with their partners, and the substantial reduction in information sharing costs in recent years acted as a catalyst to grow these networks. However, to make these partnerships profitable for everyone involved, these organizations needed an operational framework, and they started to view themselves as part of the logistics network called the supply chain. This new management paradigm extends business relationships beyond the organizational boundaries.

Another development that really brought supply chain management (SCM) to the forefront is electronic (e-) commerce. Business-to-business e-commerce is one of the fastest growing segments of the Internet-based economy, and it really benefits from the supply chain approach. Besides linking different companies in an inter-firm network, the supply chain approach also made it possible to integrate and coordinate different functions of an organization in an intra-firm environment. In the absence of such integration, these organizational functions often worked toward conflicting objectives. As an example, the marketing, distribution, manufacturing, and purchasing functions in a typical company work independently. The marketing goal of providing maximum product availability disregards the manufacturing, distribution, and storage capabilities. Manufacturing and distribution operations mainly focus on reducing their costs through production of larger quantities of stock without paying much attention to inventory levels. Purchasing departments mainly concentrate on consumer buying behaviors without looking at the needs and capabilities of the manufacturing system. In such a scenario, the gain of one area becomes a loss for another, and the costs that are controlled in one area generally pop up elsewhere without providing any overall net benefit to the organization. This emphasizes the need to have a holistic approach toward running a business; one integrated plan is needed to accomplish this, rather than individual plans for each department. SCM provides an approach to integrate all organizational functions and allows management to develop an overall plan requiring the various departments to adhere to it.

2. Major Channels and Information Flows in Supply Chains

Most of the operations closer to the supplier side in a supply chain are referred to as the upstream operations. The operations toward the customer side are referred to as the downstream operations. In between these two ends, the supply chain can be broadly divided into three major channels: the supply channel, the manufacturing channel,
Figure 1. Channels and information flows in supply chains.
and the physical distribution channel. The supply channel mainly deals with the acquisition of raw materials. The manufacturing channel mainly consists of activities related to the conversion of these raw materials into finished goods. Physical distribution deals with distributing the finished goods to various intermediate stocking points called warehouses and to the retail locations. It may be noted that in a typical supply chain, the materials and goods move from upstream toward the downstream, whereas all the information regarding demand, design requirements, and customer preferences moves from the downstream toward the upstream. These upstream and downstream flows and their categorization into three major channels are represented in the supply chain model in Fig. 1.

The above approach is functional in the sense that it divides the supply chain based on what function is performed by which channel segment. The advantage of dividing the supply chain into these channels comes from the structure it provides to group and organize various supply chain topics under three major channels. It helps to establish a one-to-one correspondence among these topics. Certain topics have more universal applications and might span more than one channel. One such categorization of supply chain topics is provided in Table 1.

Table 1. Basic Supply Chain Topics Under Three Channels.

   Supply channel                    Manufacturing channel          Distribution channel
1. Source choice                     Product design                 Market choice
2. Network design                    Routing analysis               Network design
3. Warehouse location                Plant location                 Warehouse location
4. Warehouse design and operations   Plant layout and operations    Warehouse design and operations
5. Procurement planning              Production planning            Dispatch planning
6. Inventory management              Inventory management           Inventory management
7. Transportation                    Material handling              Transportation
8. Packaging                         Packaging                      Packaging
3. Supply Chain Basics

There are quite a few definitions of the term supply chain in the literature. As they all appear to convey the same meaning, there seems to be agreement among academics and practitioners as to what the term supply chain really means. According to the Lee and Billington (1995) definition, a supply chain is a network of facilities that procures raw materials, transforms them into intermediate goods and then into final products, and delivers the products to customers through a distribution system. Ganeshan and Harrison (2008) provided another definition, according to which a supply chain is a network of facilities and distribution options that performs the functions of procurement of materials, transformation of these materials into intermediate and finished products, and the distribution of these finished products to customers. Perhaps what we need is an all-encompassing definition of SCM, which should lead toward the different areas and players involved in the supply chain and capture the essence of the supply chain in a more holistic sense. Let us analyze one such definition from this perspective. The definition is provided by Simchi-Levi et al. (2000), according to which, "SCM is a set of approaches utilized to efficiently integrate suppliers, manufacturers, warehouses, and stores so that merchandise is produced and distributed in the right quantities, to the right locations, in the right condition and at the right time to minimize the system wide costs while satisfying the service level requirements."
We elaborate on the above definition by discussing its various features. First, such a definition presents supply chain management as a set of approaches or concepts rather than an individual methodology. Second, these approaches are used to achieve the integration goal of the supply chain, so that the different players or supply chain members are not working toward their individual objectives. The definition also mentions the main players in the supply chain: suppliers, manufacturers, warehousing companies, and retail stores. It projects both production and distribution of materials and finished goods as the two essential functions of a supply chain. The merchandise is produced in the right quantities, which puts emphasis on inventory control, demand management, and forecasting. The merchandise is sent to the right location, which points toward facility location, physical network design, and transportation dispatch and planning. Furthermore, it must arrive in the right condition, indicating quality management as an important aspect of SCM; this calls for adequate allowance for perishable and breakable items, fulfilling packaging requirements, and selecting safer transportation modes. The material must also arrive at the right time, requiring accurate production, distribution, and transportation planning and state-of-the-art technology to execute it. The important part of this definition is that it seeks to minimize system-wide costs for the whole supply chain and emphasizes global optima, rather than achieving local optima for individual supply chain
partners. All of these factors are necessary to ensure an adequate customer service level, and this definition emphasizes the importance of fulfilling the customer's needs.

4. Decision Hierarchy in Supply Chain Management

Management needs to make several decisions to facilitate the movement of materials and information across the supply chain. These decisions can be largely divided into three hierarchical levels: strategic decisions, tactical decisions, and operational decisions.

Strategic decisions are typically long-range decisions, which require high-level, coarse data and information and most often result directly from corporate strategy. Examples of strategic decisions include deciding what to produce and sell; where to produce it; the number, location, and size of production and distribution facilities; new product offerings and their design; technology acquisition and its strategic use to gain a competitive edge; forming strategic partnerships with suppliers and customers; establishing communication channels, etc.

Tactical decisions are medium-range decisions, sometimes overlapping across both strategic and operational decisions. They basically constitute a blueprint or road map to accomplish the strategic decisions. Examples of tactical decisions include designing sourcing contracts; negotiating payment schedules; purchasing decisions; medium-range forecasts; medium- to long-range material, production, and distribution planning; warehouse stocking policies and turnover ratios; designing transport routes and frequencies; selecting the best transport modes, etc.

Operational decisions are short-range decisions made on a day-to-day or possibly week-to-week basis. They are typically needed to execute the tactical decisions and require a very fine level of data and information. Examples of operational decisions in a supply chain include day-to-day and minute-by-minute production and distribution scheduling; establishing quality control procedures; demand planning; shipping and receiving operations; worker–machine and driver–vehicle assignments; work flow balancing; maintenance of machines and facilities; short-range forecasts; day-to-day coordination across the supply chain; managing constraints at various facilities; order processing, order tracking, and order fulfillment, etc.

As the road map for tactical decisions is derived from strategic decisions, and operational decisions are an execution of tactical decisions, most of these decisions are linked to each other in a top-down fashion. As an example, the strategic decisions of what to produce and where to produce dictate the tactical decision of what kind of transportation mode is more suitable for those geographic locations, which further leads to the operational decision of vehicle fleet assignment to local drivers.

5. Supply Chain Performance Metrics

The purpose of SCM is essentially to provide a quality product or service to the customer, keeping in view how, where, and when the customer wishes to receive it.
Table 2. Supply Chain Performance Metrics.

Objective                       Performance metric
Quality and customer service    On-time delivery
                                Percentage returns and percentage non-defectives
                                Order fulfillment lead time
                                Fill rate (fraction of demand met from stock)
Flexibility                     Supply chain response time
                                Production flexibility
Expenses and costs              SCM costs
                                Warranty cost as a percent of revenue
                                Value added per employee
Assets utilization              Total inventory days of supply
                                Turnover ratio
Furthermore, the product must be supplied at the lowest possible cost to the customer, which is achieved by running the supply chain in a cost-effective manner: by reducing the operational expenses of the supply chain and maintaining a high asset utilization ratio. Promoting a spirit of trust and collaboration among supply chain partners, reducing unnecessary inventory stocks through JIT and risk pooling strategies, increasing demand visibility to control uncertainty, and improving inventory velocity through efficient material handling and transportation operations are some of the ways to achieve these objectives. Based on the argument that what cannot be measured cannot be improved, supply chain objectives can be divided into four different categories and measurements, with a metric assigned for each category. These objectives and their associated metrics are given in Table 2.

6. Value of Information Sharing (Bullwhip Effect) in Supply Chains

Many suppliers and retailers have observed that while the customer demand for a product or service may not vary substantially, order sizes, inventory levels, and back-order levels fluctuate considerably as we move upstream through a supply chain. Even for fairly uniform retail sales, the orders placed by retailers with wholesalers and the orders placed by wholesalers with distributors fluctuate more and more. This increase in variability as we move up the supply chain is known as the bullwhip effect. Consider the simple supply chain consisting of a customer, a retailer, a wholesaler, a distributor, and a factory, as shown in Fig. 2. The orders placed by different members of the supply chain show an increasing magnitude of fluctuation as we move upstream. To understand this increased variability in customer orders, consider the wholesaler. The wholesaler receives orders from retailers and places orders with a distributor. The wholesaler does not have any direct access to customer demand information.
Figure 2. Bullwhip effect.
To determine the order quantities that he wants from the distributor, the wholesaler would forecast demand from the orders that the retailer has been placing with him in the recent past. As the variability in the orders placed by the retailer is greater than the actual variability in customer demand, the safety stock carried by the wholesaler is more than the safety stock he would need to carry if he had direct access to the customer demand information. Thus, to provide the same customer service level, the wholesaler has to carry more safety stock than the retailer carries. The same argument also explains why the distributor would carry more inventory and order more than the wholesaler, and this effect continues throughout the supply chain. This effect can be well demonstrated through the beer distribution game developed at MIT's Sloan School of Management in the 1960s. As the oscillating demand and its amplification remind one of a cracking whip, the effect was called the bullwhip effect.

7. The Factors Contributing to the Bullwhip Effect

Though the list of factors contributing to the bullwhip effect is a long one, some of the factors detailed below are very important, and we consider these briefly.

1. Demand Forecasting: Most supply chain planning is based on forecasts, and forecasts are based on past statistical distributions; therefore, they are rarely perfectly accurate. To see the connection between forecasting and the bullwhip effect, we need to look at inventory control strategies at different nodes of a supply chain. One such policy is the min–max (or (s, S)) policy. This means that whenever the on-hand quantity falls below a minimum level, or reorder point, the facility raises its inventory to a target, or maximum, level. Managers use simple forecast smoothing techniques, like moving averages, to update the demand estimate and the standard deviation of demand. As these are updated, the minimum (reorder) level changes, forcing fluctuations in the order quantities at the downstream end, which are amplified as they move upstream, as discussed earlier.

2. Lead Time: It has been found that larger lead times magnify the variability. This is a result of the safety stock formula in inventory management. To calculate safety stocks and reorder points, we multiply the average demand and its standard
deviation by the lead times. For a very large lead time, even a small change in demand variability implies a large change in safety stocks and reorder points, which further results in a large fluctuation in the order quantity.

3. Batch Ordering: If a retailer uses batch ordering, even for a uniform customer demand, the wholesaler observes a large order from the retailer followed by several periods of no ordering, then another cycle of large orders and no ordering. Demand consolidation further adds to the uncertainty. This gives the wholesaler a greatly distorted picture of the demand's uniformity: the wholesaler sees a very irregular ordering pattern from the retailer, which forces him to carry high safety stocks.

4. Price Fluctuations: If prices fluctuate, retailers indulge in forward buying and stock up when prices are lower. Promotions and quantity discounts further add to the bullwhip effect. Over-reacting to an anticipated material shortage can also contribute to the bullwhip effect.

8. Strategies to Manage the Bullwhip Effect

It is quite clear that the bullwhip effect is an undesirable phenomenon in supply chains. It results in larger safety stocks than needed at all stages of a supply chain; it results in inefficient production quantities; it decouples the supply chain members and promotes a lack of trust in the information shared; furthermore, it leads to suboptimal utilization of the distribution resources. Just as a slight increase in demand can amplify demand fluctuations upstream, an artificial reduction in demand at the downstream end can also trigger suppliers to cut down on supply and create a shortage, resulting in poor customer service. Similarly, a rumor of a supply shortage upstream can travel and magnify downstream, forcing customers to practise forward buying, thereby creating a shortage. Poor customer service and the erosion of a company's credibility and image can lead to contractual penalties and severely affect the revenue stream. Furthermore, the hiring, training, and laying-off of employees to manage demand variability incurs costs. All these factors add up to substantial financial losses. The next logical question, then, is what types of strategies companies have at their disposal to deal with this effect. Manufacturers and retailers use various strategies to cope with the bullwhip effect, and some of them are discussed below.

1. Reduce Uncertainty: The bullwhip effect mostly occurs in a demand-driven supply chain because uncertainty about demand information is its main cause. Reducing this uncertainty is one solution. Many companies seek to reduce this uncertainty by centralizing demand information throughout the supply chain and enhancing the visibility of customer demand as far as possible along the chain. This concept of providing each stage of the supply chain with complete information about actual customer demand is called global demand visibility. However, errors in forecasting methods or different buying practices may still leave some bullwhip effect. One
way to deal with this problem is to establish demand-driven or actual-order-driven supply chains. Several manufacturers use the Kanban pull system to accomplish this. An example of an established demand-driven supply chain in the retail sector is Wal-Mart. Wal-Mart stores transmit point-of-sale data from cash registers directly to headquarters, and this same data is used to consolidate shipments from distribution centers to stores and from wholesale suppliers to distribution centers. Such a high-visibility JIT system allows all the supply chain members to have first-hand knowledge of the point-of-sale information without involving any layers that could potentially amplify fluctuations. Readers might appreciate the role played by information technology in enabling such a supply chain.

2. Reduce Variability: The bullwhip effect can be reduced by focusing on the origin of order fluctuations and eliminating variability in the demand process. Such a strategy is based on the reasoning that if a fluctuation is not allowed to be generated, it cannot be propagated across the supply chain. For example, some retailers such as Zellers or Wal-Mart use slogans like "everyday low prices" to stop generating variability in demand at the point of sale, so that the variability is not propagated and amplified. Restricting returns to fewer days and discouraging order cancellations are other tools to reduce variability.

3. Lead Time Reduction: As larger lead times magnify variability, as discussed above, reducing lead times is another strategy to reduce the amplification of demand variability. When the variation is multiplied by the reduced lead time in the safety stock formula, it results in a relatively small fluctuation in the safety stock and helps to contain the bullwhip effect.

4. Strategic Partnerships: Engaging in strategic partnerships also helps to reduce the bullwhip effect. For example, under vendor managed inventory (VMI) systems, your supplier manages and controls your inventory. In such a scenario, your VMI supplier has first-hand information from your customer, bypassing you in the chain. In many partnership arrangements, the partner companies pool their planning resources and work closely in a team environment to develop a uniform plan for the supply chain. These types of arrangements help to reduce the bullwhip effect.

Attempts have been made to measure the bullwhip effect based on statistical variability. For details and derivations of the results, the reader is referred to Chen et al. (2000). Here, we provide the basic formula for quantifying the bullwhip effect. If the variance of the customer demand seen by the retailer is Var(D), and the variance of the orders placed by that retailer with the wholesaler is Var(Q), then the amount of amplification of this variation is given by the following ratio:

Var(Q)/Var(D) = 1 + 2L/p + 2L²/p²

where p is the number of observations and L is the lead time.
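To make the roles of p and L concrete, the following minimal Python sketch (ours, not part of the original chapter; the function name is illustrative) evaluates this amplification ratio for a few parameter choices:

```python
# Amplification ratio as given in the chapter (after Chen et al., 2000):
# Var(Q)/Var(D) = 1 + 2L/p + 2L^2/p^2, where p is the number of observations
# in the moving-average forecast and L is the replenishment lead time.
def bullwhip_ratio(p: int, L: float) -> float:
    return 1 + 2 * L / p + 2 * L ** 2 / p ** 2

for p, L in [(5, 2), (10, 2), (20, 2), (5, 6)]:
    print(f"p={p:2d}, L={L}: Var(Q)/Var(D) = {bullwhip_ratio(p, L):.2f}")
# p= 5, L=2: 2.12;  p=10, L=2: 1.48;  p=20, L=2: 1.22;  p= 5, L=6: 6.28
```

Widening the forecast window at a fixed lead time visibly damps the amplification, which is the quantitative argument behind the information-sharing and lead-time-reduction strategies above.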
When p is large and L is small, the bullwhip effect is negligible, providing justification for the strategies above: when more accurate information is available (a larger sample size) and lead times are reduced, the bullwhip effect can be reduced.

9. Supply Chain Performance Drivers

Supply chain structure is determined by the design, planning, and operation of four major drivers:

• Transportation management;
• Information technology management;
• Inventory management; and
• Supply chain network design.
Together, these four drivers determine the responsiveness, speed, and cost efficiency of the entire supply chain. The sound management of these four drivers allows the company to properly align customer requirements, business objectives, and supply chain strategy. Each of the four drivers has a unique role to play in the supply chain. The network design provides a cost-effective framework to realize the supply chain; inventory management helps to populate that framework and provides appropriate buffers; transportation management provides the physical links in the network and helps to run it; and information management provides the soft links that trigger the network and enable the supply chain.

9.1. Transportation Management

As choosing the right combination of transportation modes and routes determines the effectiveness in delivering the goods to the right place at the right time, transportation management has a major impact on the supply chain's responsiveness and effectiveness. Furthermore, the strategic decision of selecting the right transportation mode impacts the inventory decisions through the explicit trade-off between the operational costs of running a particular transport mode and the associated cost of carrying inventory for that mode. As an example, airlifting products to the warehouses or airports may be faster and have lower in-transit inventory and safety stock requirements, but this mode of transportation is expensive. On the other hand, rail and sea transportation are cheaper choices, but they carry in-transit inventory proportional to their larger lead times, and the associated uncertainty in these transportation modes requires larger safety stocks. Transportation costs typically represent 30% of total logistics costs; therefore, sound decisions made in this area can result in substantial cost savings. These savings accrue from economical shipment sizes, shipment consolidation and break-bulk, mode choices, routing and scheduling efficiency, etc. Some of the important inputs affecting transportation decisions are the geographic locations to be served, the customer service level
required, available transportation choices and lead time requirements, as well as handling and safety requirements.

9.2. Information Technology Management

Information technology is an enabling mechanism for the supply chain and acts as a nervous system by sending signals across the supply chain. It also facilitates sharing information across the supply chain network in real time. There are various ways in which IT can contribute to SCM. First, e-commerce has made the modern business environment truly dynamic, where orders are placed, modified, and canceled with the click of a mouse. In this environment, companies must improve their agility, flexibility, and responsiveness to changing customer requirements. Information technology enables companies to share and convey these changing customer requirements to the planning authorities in real time so that the supply chain can respond to these changes in time. Second, to improve agility and stay lean, there has been a trend among companies to outsource some of their value-adding activities. This has resulted in virtual organizations and call centers which are run through state-of-the-art information technology. Third, supply chains are not confined to a particular region, but are spread across the globe. Although it is important to integrate the functions of different departments in an intra-firm environment, integration is even more important for coordinating and synchronizing the activities of all the customers, distributors, suppliers, and wholesalers on a global scale. It is impossible to achieve such integration without the information sharing capabilities of IT systems. IT has made a major contribution to the success of supply chains in areas like business-to-business e-commerce, IT integration, online order placement, demand consolidation, order processing and fulfillment, global visibility, data and information flows, reporting and feedback control systems, and the tracing and tracking of shipments.

9.3. Inventory Management

Inventory is a stock of goods. The objective in inventory management is to strike the right balance between inventory costs and customer service levels. The physical location, timing, and quantity of these stocks directly impact the responsiveness as well as the cost of the supply chain, and an effective inventory management policy is needed to stock the right amount of product at the right locations and at the right time. Several practitioners view inventory as a necessary evil. With too much inventory, a firm has tied up its money in unnecessary stocks of raw materials, unfinished, or finished but unsold products. This working capital is not available for improvement or expansion activities. Besides, the cost of carrying inventory varies from 20% to 40% of the value of the stock itself. With too little inventory, customer service will suffer because of production stoppages due to unavailability of materials, and customers will be dissatisfied because of out-of-stock situations. If demand for a
product could be known precisely, and products made and shipped instantaneously with zero manufacturing and transportation lead times, there would probably be no need for keeping inventory stocks. Unfortunately, manufacturing or distribution systems with zero lead time either do not exist or are not practically viable. Lead time, or time lag, in the supply chain necessitates maintaining stock while the materials and products are moving from the suppliers toward the customers. Although carrying too much inventory substantially adds to the operational cost of a supply chain, a certain amount of inventory is always necessary. Inventory performs various functions and has several uses in the supply chain. These functions are discussed below.

9.3.1. To meet anticipated demand

Anticipation stocks of finished products are needed to satisfy planned or expected customer demand. This stock is kept due to positive transportation lead times, as discussed above.

9.3.2. To coordinate demand–supply seasonality

Firms with highly seasonal production and a constant demand, or with highly seasonal demand but a constant production rate, have to coordinate demand and supply. Canned vegetable or fruit jam producers are forced to stockpile during the growing season to meet a rather uniform demand throughout the year. Conversely, air conditioners, winter jackets, and Christmas greeting cards are all seasonal items whose manufacturers have to stockpile their uniform production to meet the seasonal demand for these products.

9.3.3. Buffer stocks to decouple supply chain entities

Firms maintain a certain amount of stock as a buffer between successive supply chain members to maintain continuity of supply that would otherwise be disrupted by a shutdown of a portion of the supply chain due to accidents, breakdowns, or setup change-overs. Buffers allow the rest of the supply chain to continue temporarily while the problem is being solved. Typically, there are three types of supply problems faced in supply chains: quality, quantity, and lead time. Quality problems result in a shipment that has a high percentage of defective parts. Quantity problems refer to shipments which fall short of the requested amount, and lead time problems refer to shipments that arrive late. Safety stocks help to provide a cushion against quality, quantity, and delivery lead time problems. However, too much safety stock adds to the cost. Therefore, finding and eliminating the sources of disruptions and having a reliable supply greatly reduces the need for decoupling buffer stocks and lowers the cost.
9.3.4. For order cycles and quantity discounts

To minimize purchasing and delivery costs, firms often buy in large, economical lot sizes. Similarly, it is economical to produce in larger batches because of the reduced set-up costs. The excess amount is stored as inventory for later use. Therefore, inventory storage allows a firm to produce or buy in economic lot sizes, allowing some cost reduction. Sometimes, suppliers may offer quantity discounts if materials are purchased in larger quantities. However, these cost savings must be compared with the additional cost of holding the stock to see if economical sizes actually provide any net benefit. In some cases, the cost of holding the stock, comprising interest charges and the cost of running the storage facility, could be substantial.

9.3.5. To hedge against price increases

A firm suspecting a substantial increase in the price of its materials is likely to buy in bulk and stock it for later use. Such forward buying results in an inventory stock.

9.3.6. Work-in-process inventory

Work-in-process (WIP) inventory is the stock of raw materials or semi-processed goods that are sitting in the manufacturing plants, either being processed or waiting to be processed. The reason for having WIP inventory is that manufacturing operations are not instantaneous. They take some time to be completed, which means there will always be some product either being processed or waiting to be processed. Reducing the manufacturing cycle time by ensuring a better flow on the shop floor helps to reduce WIP inventory.

9.3.7. In-transit inventory

Just as WIP inventory arises from positive production lead times, a similar inventory of finished goods, called in-transit inventory, arises from positive transportation times. This is the stock of inventory that is moving in trucks, trains, or ships, because it takes time to move it from point A to point B. Clearly, if transportation lead times were zero, this inventory would not exist.

It is quite clear that inventory increases costs, but at the same time certain stock levels need to be maintained. The next question is, what kind of metrics do companies apply to assess their inventory situation? Two important metrics used for this purpose are the inventory turnover ratio and the average number of days to sell or dispose of that inventory. The inventory turnover ratio is the cost of goods sold divided by the average dollar amount of inventory held, where the average dollar amount of inventory is the mean of the beginning inventory and the ending inventory. This ratio estimates how
many times the inventory turns over per year and gives a measure of inventory management effectiveness. A factory that has an annual turnover of two is keeping six months of stock on hand, and if it changes its turnover ratio from two to four, it can increase its inventory effectiveness by as much as 100 per cent. The average days to sell inventory is the number of days in a year divided by the inventory turnover ratio. This operational measure tells managers how many days they would be able to continue operations if there were a disruption on the supply side.
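As a small illustration of these two metrics, the following Python sketch (the function names and dollar figures are ours, chosen to mirror the turnover-of-two example above) computes the turnover ratio and the average days to sell:

```python
# Inventory turnover: cost of goods sold divided by the average dollar
# amount of inventory (mean of beginning and ending inventory).
def inventory_turnover(cogs: float, beginning_inv: float, ending_inv: float) -> float:
    return cogs / ((beginning_inv + ending_inv) / 2)

# Average days to sell: days in a year divided by the turnover ratio.
def days_to_sell(turnover: float, days_per_year: float = 365.0) -> float:
    return days_per_year / turnover

t = inventory_turnover(cogs=1_200_000, beginning_inv=500_000, ending_inv=700_000)
print(round(t, 2), round(days_to_sell(t), 1))  # 2.0 182.5, i.e., about six months of stock
```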
10. Supply Chain Network Design

The overall problem of supply chain network design deals with methods that help to determine the location of production plants, warehouses, suppliers, and retailers, and to decide on the paths and routes that move the product through the supply chain. In this section, we focus on a subset of this overall problem called warehouse location and retailer allocation. The model presented here follows the work of Gill and Bhatti (2007). Problems related to warehouse location and retailer allocation are commonly encountered by logistics managers at the supply chain design phase. These problems influence the form, structure, and efficiency of the entire supply chain network. General warehouse location decisions involve determining the number, location, and capacity of the warehouse facilities to be used. Decisions related to the number and location of warehouses assume more importance because capacity can be increased at a later stage, but the location of warehouses involves substantial capital investment, and it is harder to change the network configuration. Therefore, the static nature of this decision dictates the performance of the supply chain network in the future. On the other hand, the allocation decision involves assigning retail outlets to the locations determined and is a more dynamic decision that can be altered in the future. Traditionally, the location and allocation decisions are made simultaneously. But given the static nature of location decisions and the dynamic nature of allocation decisions, it is imperative to treat them as two separate stages of an overall approach. This breakdown into two stages further helps to manage and solve the problem, especially if the size of the problem is large and the allocation decisions have to be reviewed frequently. This is especially true for natural gas, petroleum, lubricant, and oil product distribution in developing countries, where the retail outlets and gasoline fuel stations for such products are opening up at an increasing rate.

As evident from the above discussion, warehouse location acts as a prelude to the overall supply chain design process and can have far-reaching effects on the performance of the logistics and supply chain system. There are two relevant costs in the problem that trade off with each other: the product distribution cost and the warehouse capital cost. When the quantities are shipped containerized in truckloads, or when the transportation cost
structure charges a truckload minimum for partial quantities, the product distribution cost mainly becomes a function of the distance traveled. However, under a partial-load price structure (especially in consolidation warehouses, where several retailer orders can be consolidated into the same shipment), it is common to express the pricing in terms of load distances such as ton-miles, where a ton-mile is the amount of transportation activity needed to move a ton of material over a distance of one mile. To keep the model general, we express the transportation activity in load distances, as the pure distance-based approach is a special case of this general model when the minimum charged quantities are in truckloads. The second cost, i.e., the warehouse capital cost, may have several components, such as loading and unloading costs, storage and retrieval costs, rental or real estate costs, energy costs, and the cost of the warehouse manager, secretary, telephone line, etc. The loading and unloading activities, and hence the loading and unloading costs, are fairly constant, as the total demand coming from all the retailers is not a function of the number of warehouses. Therefore, these costs are not relevant to our model. However, the remaining costs depend on the number of warehouses opened, and hence they constitute the relevant warehouse capital costs. It may further be noticed that warehouse capital costs may not have an exactly linear relationship with the number of warehouses, but these costs certainly depend on the number of warehouses; therefore, minimizing the number of warehouses effectively minimizes the warehouse capital costs. Based on the assumption that management would identify only those candidate locations for which these capital costs are reasonably uniform, comparable, and acceptable, for the purpose of this high-level strategic analysis we attempt to minimize the number of warehouses in order to minimize the warehouse capital cost. Furthermore, the intent of the model is not to select the warehouse site but to select the general warehouse location, assuming that infrastructural considerations have been taken into account while identifying the candidate locations for warehouses.

The ensuing discussion brings out the inherent trade-offs present in the location and allocation decisions. If we minimize the load distance traveled by the fleet vehicles while disregarding the number of warehouse locations to be opened, the solution will give a highly decentralized distribution system similar to the one shown in Fig. 3. Such a system will have a number of warehouse locations equal to the number of retail outlets, with one warehouse at each retail location, resulting in substantial warehouse capital investment. On the other hand, if we try to minimize the number of warehouse locations to be opened without giving any consideration to the load distance traveled by the fleet vehicles from the warehouse locations to the retail outlets, the solution will assign all the retailers to a single warehouse location, as shown in Fig. 4. Such a centralized system is also undesirable, as it results in a substantial cost to distribute the products to the retailers. Therefore, this problem can be viewed as one of determining the optimal trade-off between a lower distribution cost and a lower warehouse capital cost resulting from a smaller number of warehouse locations. Traditionally, this trade-off has
Figure 3. Highly decentralized product distribution system.

Figure 4. Highly centralized product distribution system.
been controlled by assigning the maximum number of warehouse locations as an exogenous parameter based on some budgetary restrictions, and minimizing the distance traveled. The set covering approach presented here controls this trade-off through a threshold load distance concept, which helps to determine whether or not a retailer qualifies for assignment to a warehouse location. Such a threshold load distance can be based on a reasonable driving distance for a truckload quantity per shipment. It can also be based on transportation employee union agreements, company policies, and cost considerations of overnight stays for the vehicle fleet and drivers, especially when the fleet is rented. The model controls the distribution cost through this threshold load distance and minimizes the number of warehouse locations as an objective.

10.1. Warehouse Location and Retailer Allocation Model

The mathematical model seeks to minimize the number of warehouse locations selected while ensuring that each retailer is assigned to exactly one selected warehouse location and that the allocation satisfies the maximum permissible load distance from a retailer to a warehouse. The model can be represented mathematically as follows: find matrix x and vector y to
Minimize   Σ_{j=1}^{n} y_j

subject to

ld_ij · x_ij ≤ ld_max,   ∀ i = 1, 2, …, m;  j = 1, 2, …, n
Σ_{j=1}^{n} x_ij = 1,    ∀ i = 1, 2, …, m
x_ij − y_j ≤ 0,          ∀ i = 1, 2, …, m;  j = 1, 2, …, n
x_ij, y_j ∈ {0, 1},      ∀ i, j
where

m = number of retailers
n = number of candidate warehouse locations
x_ij = 1 if retailer i is assigned to warehouse j, and 0 otherwise
y_j = 1 if warehouse j is selected, and 0 otherwise
ld_ij = load-distance of retailer i from warehouse location j, expressed in units such as ton-miles or truckload-km
ld_max = maximum threshold load-distance beyond which a retailer cannot be assigned to a warehouse, due to reasons such as union policies, costs, etc.

Such a mathematical model results in the following problem size:

Number of constraints = m(2n + 1)
Number of variables = n(m + 1)

To gain some appreciation for the problem size, a modest problem with 50 retailers and 10 candidate locations for warehouses gives rise to 1050 constraints and 510 variables. Therefore, it can be impractical to solve real problems, which are even larger in size, using a direct integer linear programming approach. The set covering heuristic approach breaks this problem down into a location stage and an allocation stage, both to manage the size and to deal separately with the static nature of the location problem and the dynamic nature of the allocation problem. The approach is better suited for large-sized practical problems. The following section presents this set covering-based heuristic solution approach.

10.2. The Solution Approach

As discussed earlier, the approach is two-fold. First, the warehouse locations are chosen from the available set of candidate warehouse locations so that they can cover the retail locations within a pre-assigned maximum threshold load distance. Then the retailers are allocated to these chosen locations in a manner such that the load distance of a retailer to its assigned warehouse is minimal. The steps of the procedure are as follows:
Stage 1. Location of warehouses

The location of warehouses involves two steps: first, constructing a binary coefficient matrix to identify the potential locations; then, selecting the actual warehouse locations using a mathematical programming model.

Step 1. Construction of a binary coefficient matrix
Based on the maximum permissible load distance, ld_max, a binary coefficient matrix [α_ij] is prepared, which is used as an input to the mathematical programming model of Step 2. The idea of constructing a binary matrix is to identify potential warehouse locations that are capable of covering the retailers falling within the maximum permissible load distance. The following relation can be used to construct this binary matrix:

α_ij = 1
if ld_ij ≤ ld_max, and α_ij = 0 otherwise;  ∀ i = 1, 2, …, m; j = 1, 2, …, n.

Step 2. Set covering mathematical model
Using the binary coefficient matrix from Step 1 as an input, the best warehouse locations to cover all the retailers are selected based on the following 0–1 set covering mathematical programming model: find vector y to
Minimize   Σ_{j=1}^{n} y_j

subject to

Σ_{j=1}^{n} α_ij · y_j ≥ 1,   ∀ i = 1, 2, …, m
y_j ∈ {0, 1},                 ∀ j = 1, 2, …, n
The objective function in the above formulation expresses the minimization of the number of warehouse locations selected. The constraint set ensures that each retailer is covered by at least one warehouse; each retailer will then be assigned to exactly one warehouse in the allocation procedure of Stage 2. Such a formulation has m constraints and n variables, but the actual number of constraints could be even smaller after the elimination of redundant constraints.

Stage 2. Allocation of retailers to warehouses

Retailer allocation is done according to the following procedure.

Step 1. Identify the set θ = {j : y_j = 1}.
Step 2. Consider the submatrix of the load distance matrix [ld_ij] with columns j ∈ θ.
Step 3. Set i = 1.
Table 3. Distance Matrix (Distance in km).

                      Warehouses
Retailers      1      2      3      4      5
    1          0    325     50     75    400
    2        325      0    450    375     60
    3         50    450      0     95    350
    4         75    375     95      0    330
    5        400     60    350    330      0
    6        200    250    340    310    210
    7        180    230    310    290    190
    8         95    290    150    130    310
    9        280     85    340    310     70
   10        260     65    320    290     55
   11         85    210    145    125    185
   12        250     75    330    305     65
   13         80    200    135    115    170
   14         65    190    110    145    230
   15        300    105    210    130    195
Step 4. Find ld_ij* = min_{j ∈ θ} ld_ij, i.e., find the minimum entry in the ith row of the submatrix. If the column index of this minimum entry is j*, assign the ith retailer to warehouse j*.
Step 5. While i < m, set i = i + 1 and repeat Step 4; i.e., continue to repeat Step 4 until all the retailers have been assigned.

Numerical example

To illustrate the approach, consider the following numerical example, in which we have to choose among 5 candidate warehouse locations and assign 15 retailers to them. The distance matrix between the possible warehouse locations and the retailers is presented in Table 3.

Stage 1. Location of warehouses

Step 1. Construction of a binary coefficient matrix
Based on a maximum permissible distance of 250 km, a reasonable one-way driving distance to be covered by a vehicle in a day, we construct the binary coefficient matrix shown in Table 4.

Step 2. Set covering mathematical model
Using the binary coefficient matrix from Step 1 as an input, and after eliminating the redundant constraints, we obtain the following 0–1 set covering formulation:

Minimize
y1 + y2 + y3 + y4 + y5
Table 4. Binary Coefficient Matrix.

                   Warehouses
Retailers     1     2     3     4     5
    1         1     0     1     1     0
    2         0     1     0     0     1
    3         1     0     1     1     0
    4         1     0     1     1     0
    5         0     1     0     0     1
    6         1     1     0     0     1
    7         1     1     0     0     1
    8         1     0     1     1     0
    9         0     1     0     0     1
   10         0     1     0     0     1
   11         1     1     1     1     1
   12         1     0     0     0     1
   13         1     1     1     1     1
   14         1     1     1     1     1
   15         0     1     1     1     1
subject to

y1 + y3 + y4 ≥ 1
y2 + y5 ≥ 1
y1 + y2 + y5 ≥ 1
y1 + y2 + y3 + y4 + y5 ≥ 1
y1 + y5 ≥ 1
y2 + y3 + y4 + y5 ≥ 1
y_j ∈ {0, 1},  ∀ j = 1, 2, 3, 4, 5

Note that the above formulation has all coefficients equal to 0 or 1 and all right-hand sides equal to 1. In such cases, the solution to the LP relaxation frequently produces 0–1 solutions without actually restricting the variables to be 0–1. The above formulation was solved to obtain the following solution:

y1 = 1,  y2 = 0,  y3 = 0,  y4 = 0,  y5 = 1
The solution indicates that locations 1 and 5 should be chosen for warehouses.
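For a problem of this size, the location stage can even be checked by brute force. The following Python sketch (ours; a real instance would use an integer-programming solver rather than enumeration) builds the binary coefficient matrix from Table 3 and searches subsets of candidate locations in order of increasing size:

```python
from itertools import combinations

# Distance matrix from Table 3 (rows: retailers 1-15; columns: warehouses 1-5).
dist = [
    [0, 325, 50, 75, 400], [325, 0, 450, 375, 60], [50, 450, 0, 95, 350],
    [75, 375, 95, 0, 330], [400, 60, 350, 330, 0], [200, 250, 340, 310, 210],
    [180, 230, 310, 290, 190], [95, 290, 150, 130, 310], [280, 85, 340, 310, 70],
    [260, 65, 320, 290, 55], [85, 210, 145, 125, 185], [250, 75, 330, 305, 65],
    [80, 200, 135, 115, 170], [65, 190, 110, 145, 230], [300, 105, 210, 130, 195],
]
LD_MAX = 250  # maximum permissible (threshold) distance in km

# Step 1: binary coefficient matrix, alpha[i][j] = 1 iff dist[i][j] <= LD_MAX.
alpha = [[1 if d <= LD_MAX else 0 for d in row] for row in dist]

def covers(subset):
    """True if every retailer is within LD_MAX of some warehouse in the subset."""
    return all(any(row[j] for j in subset) for row in alpha)

# Step 2: enumerate subsets by increasing size; the first feasible size is optimal.
n = len(dist[0])
for size in range(1, n + 1):
    optima = [tuple(j + 1 for j in s) for s in combinations(range(n), size) if covers(s)]
    if optima:
        print(size, optima)
        break
# Prints: 2 [(1, 2), (1, 5), (3, 5), (4, 5)]
```

No single location covers all 15 retailers, and four alternative two-warehouse covers exist; the LP solution above happens to pick locations {1, 5}.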
Stage 2. Allocation of retailers to warehouses

Step 1. Identify θ = {j : y_j = 1}. From the above solution, θ = {1, 5}.
Step 2. Consider the submatrix of the distance matrix [d_ij] with columns j ∈ θ, i.e., columns 1 and 5 only.
Step 3. Set i = 1.
Step 4. Find d_ij* = min_{j ∈ {1, 5}} d_ij. If the column index of this minimum entry is j*, assign the ith retailer to warehouse j*. For example, for i = 1, d_1j* = min(d_11, d_15) = d_11; therefore, we assign retailer 1 to warehouse 1.
Step 5. While i < m, set i = i + 1 and repeat Step 4; i.e., continue until all 15 retailers have been assigned.

Steps 4 and 5 result in the following allocation of retailers to the warehouses:

Warehouse 1: Retailers {1, 3, 4, 6, 7, 8, 11, 13, 14}
Warehouse 5: Retailers {2, 5, 9, 10, 12, 15}
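The allocation stage is equally direct to compute. A short Python sketch (ours) assigns each retailer to the nearer of the two selected warehouses, using columns 1 and 5 of Table 3, and reproduces the allocation above:

```python
# Distances (km) from each retailer to the selected warehouses 1 and 5
# (columns 1 and 5 of Table 3).
d1 = [0, 325, 50, 75, 400, 200, 180, 95, 280, 260, 85, 250, 80, 65, 300]
d5 = [400, 60, 350, 330, 0, 210, 190, 310, 70, 55, 185, 65, 170, 230, 195]

allocation = {1: [], 5: []}
for i, (a, b) in enumerate(zip(d1, d5), start=1):
    # Assign retailer i to the warehouse with the smaller distance.
    allocation[1 if a <= b else 5].append(i)

print(allocation)
# {1: [1, 3, 4, 6, 7, 8, 11, 13, 14], 5: [2, 5, 9, 10, 12, 15]}
```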
10.3. Distance versus Number of Warehouses

The threshold distance can be varied in a range, for example from 20 per cent to 80 per cent of the maximum distance value, to observe its effect on the number of warehouse locations. The results of this analysis are shown in Fig. 5.

Figure 5. Threshold distance versus number of warehouses.
The above results show that with an increase in the maximum driving distance, the number of warehouses reduces. When we increase the maximum driving distance, we make it easier for a retailer to qualify under this threshold distance; as a result, more retailers filter through the threshold and each warehouse can cover a greater number of the gas stations. Consequently, only a few warehouses are needed to cover the entire set of retailers. It may be noted that sometimes political, geographical, or other qualitative factors may play a role in location decisions, and these factors cannot be easily quantified. Unless such factors are quantified and a comprehensive performance criterion is developed to evaluate the candidate locations, human judgment will remain an integral part of location decisions.

References

Ballou, RH (1992). Business Logistics Management. Englewood Cliffs, NJ: Prentice Hall.
Chen, F, Z Drezner, JK Ryan and D Simchi-Levi (2000). Quantifying the bullwhip effect in a simple supply chain: The impact of forecasting, lead times, and information. Management Science, 46(3), 436–443.
Ganeshan, R and TP Harrison (2008). An Introduction to Supply Chain Management. Penn State University. URL: http://silmaril.smeal.psu.edu/misc/supply chain intro.html
Gill, A and MI Bhatti (2007). Optimal model for warehouse location and retailer allocation. Applied Stochastic Models in Business & Industry, 23, 213–221.
Lee, HL and C Billington (1995). The evolution of supply-chain-management models and practice at Hewlett-Packard. Interfaces, 25, 42–63.
Simchi-Levi, D, P Kaminsky and E Simchi-Levi (2000). Designing and Managing the Supply Chain: Concepts, Strategies and Cases. New York: McGraw-Hill.
Swaminathan, JM, SF Smith and NM Sadeh (1996). A multi-agent framework for modeling supply chain dynamics. Technical Report, The Robotics Institute, Carnegie Mellon University.
Biographical Notes

Dr. Avninder Gill obtained his Master's and PhD degrees from the University of Manitoba, Canada, and is currently engaged in teaching and research at Thompson Rivers University, Canada, in the supply chain management, operations management, and optimization areas. In the past, Dr. Gill has taught at various universities in Canada, China, and Oman. He has also worked in industry as a manufacturing engineer and supply chain consultant. He has published extensively in leading research journals and international conference proceedings. His work has appeared in the International Journal of Production Research, International Journal of Operations and Production Management, Computers & Industrial Engineering, International Journal of Systems Science, etc.

M. Ishaq Bhatti is the Director of the Islamic Banking and Finance programme at La Trobe University (LTU), Australia. He is an author of more than 70 articles,
three books, and is a member of the editorial boards of various journals. His major areas of research, scholarship, and teaching are Islamic finance, quantitative finance, econometrics, and financial forecasting. He is a winner of the His Majesty Sultan Qaboos (HMSQ) 2004 best researcher award, an HMSQ large grant, and a Government of Pakistan grant in finance training. Recently, he has been co-awarded a Victorian Department of Education ESL financial modelling consultancy project and an Australian Research Council Discovery Grant.
Chapter 29
Measuring Supply Chain Performance in SMEs

MARIA ARGYROPOULOU∗,¶, MILIND KUMAR SHARMA†, RAJAT BHAGWAT‡, THEMISTOKLES LAZARIDES§ and DIMITRIOS N. KOUFOPOULOS∗

∗ Brunel Business School, Brunel University, Uxbridge, Middlesex UB8 3PH, UK
† Department of Production and Industrial Engineering, M.B.M. Engineering College, Faculty of Engineering and Architecture, J.N.V. University, Jodhpur, Rajasthan, India
‡ Department of Mechanical Engineering, M.B.M. Engineering College, Faculty of Engineering and Architecture, J.N.V. University, Jodhpur, Rajasthan, India
§ Department of Information Technology Applications in Administration and Economy, Technological Institute of West Macedonia, Grevena, Greece
¶ [email protected]
† [email protected]
§ [email protected]
[email protected]

GEORGE IOANNOU
Management Science Laboratory, Athens University of Economics and Business, Evelpidon 47a, Athens 113–62, Greece
[email protected]
This chapter presents the experience of implementing a supply chain management (SCM) scorecard at a Greek small- and medium-sized enterprise (SME) using four perspectives: financial, customer, internal business process, and learning and growth, which are translated into corresponding metrics and measures that evaluate supply chain performance. The approach fosters simple performance measurement (PM) and enhances decision making by linking financial and non-financial measures. On the other hand, it
reveals that when managers need to improve performance across so many different business operations, trade-offs cannot be easily decided.

Keywords: Supply chain management (SCM); small- and medium-sized enterprise (SME); performance measurement (PM); balanced scorecard (BSC).
1. Introduction

Traditionally, performance measurement (PM) is defined as the process of quantifying the effectiveness and efficiency of action (Neely et al., 1995). In other words, measuring performance means transferring the complex reality of performance into a sequence of limited symbols that can be communicated and reported under similar circumstances (Lebas, 1995). In modern business management, PM assumes a far more significant role than quantification and accounting. PM can provide important feedback information to enable managers to monitor performance, reveal progress, enhance motivation and communication, and diagnose problems (Waggoner et al., 1999). For any business activity, such as supply chain management (SCM), identifying the required performance measures should be an integral part of any business strategy (Bhagwat and Sharma, 2007a). This issue becomes even more important for small- and medium-sized enterprises (SMEs), which do not emphasize PM and continuous improvement as much as large companies do (Sharma and Bhagwat, 2006). In response to this important gap in the existing knowledge of PM systems in SMEs, this work aims to shed light on the management of supply chains in SMEs, providing managers with a fast and rigorous methodology that can help them improve their day-to-day operations. Our applied framework is based on a methodology previously developed by Bhagwat and Sharma (2007a, b). Based on the traditional Kaplan and Norton balanced scorecard, the authors developed an SCM scorecard that measures and evaluates supply chain operations from four perspectives: finance, customer, internal business process, and learning and growth (Kaplan and Norton, 1992, 1996a, b). This framework was applied to three Indian SMEs, and the authors found that the metrics in a certain category contradicted others in different business areas. Nonetheless, they concluded that the approach is beneficial for managers seeking general SCM improvement. This work articulates the experience of the implementation of a balanced SCM scorecard at a specific SME in Greece.

The remainder of this chapter is organized as follows. The next section reviews the literature on performance measures and on the performance measurement systems (PMS) applied by SMEs. This is followed by a brief description of the balanced scorecard (BSC) and the reasons for its selection out of many other suitable frameworks. The chapter continues with our methodological approach and implementation section, where we describe the adaptation of the traditional BSC to supply chain PM and how the new framework has been applied in the case organization. We then present our analysis of the results. The study finishes with a critical review of the BSC approach.
2. Literature Review

2.1. Performance Measurement

Researchers have defined PM from different perspectives, and the definitions of performance measures vary from author to author. Examples include the process of quantifying the efficiency and effectiveness of action (Neely et al., 1995), or the process of evaluating performance relative to a defined goal (Rose, 1995), which means that PM is not just a means of observing past data but also a tool for leading the organization into a better future (Chan et al., 2006). In this regard, PM is executed through different performance measures and represents the enabler for organizations to plan, track/monitor the implementation of their plans (through reporting, benchmarking, etc.), and determine whether any corrective actions are needed (Atkinson et al., 1997). Thus, with the use of PM, companies can identify problems in their processes (e.g., bottlenecks, non-value-adding activities), in their action plans (e.g., penetration of a new market segment), and in their strategy, and they can take corrective actions (Parker, 2000). Moreover, PM can aid in understanding how the business works and, consequently, enhances decision making both at the top management and at the operating level (Argyris, 1977). In addition, PM influences the behavior of employees, and thus it has been used for many years as a means to communicate decision-relevant information to people inside the organization (Atkinson et al., 1997). Finally, PM can be used to motivate employees, increase accountability, and reward certain behaviors and results (Neely et al., 1996).

2.2. Financial and Non-Financial Performance Measures

Performance measures are usually divided into two main groups: financial measures and non-financial (or operational) measures (Ittner and Larcker, 2002, 2003). Financial measures are grounded in the economic state of a company; they incorporate traditional measures such as profits, revenues, costs, financial margins, and cash flow, and more recent measures such as economic value added (EVA) and cash flow return on investment (CFROI). These measures first became popular in the industrial-age competition (the 19th and most of the 20th century), when the economy was mostly dependent on tangible assets (Chandler, 1990). However, during the second half of the 20th century, intangible assets came into play and started dominating the sources of competitive advantage (Nanni et al., 1988; Rappaport, 1999). Companies had to turn to non-financial measures aiming to quantify organizational performance related to customers (e.g., customer satisfaction, retention, and acquisition), employees (e.g., employee satisfaction), innovation, quality, culture, etc. (Jensen, 2001; Kanji, 2002). These measures have been further broken down into hard measures, which are easily quantifiable (e.g., customer acquisition, number of complaints), and soft measures that are difficult to quantify, such as satisfaction (Amir and Lev, 1996).
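To make the distinction concrete, the following minimal Python sketch computes one financial measure (EVA, using its conventional definition) next to a hard non-financial measure (a customer-retention rate). The sketch and all figures in it are our own illustration and are not drawn from the chapter's case data.

# Illustrative sketch: one financial and one non-financial measure
# computed side by side. All figures are hypothetical.

def economic_value_added(nopat: float, wacc: float, invested_capital: float) -> float:
    """EVA = net operating profit after taxes minus a charge
    for the capital employed to generate it (conventional definition)."""
    return nopat - wacc * invested_capital

def retention_rate(customers_start: int, customers_end: int, new_customers: int) -> float:
    """Share of period-start customers still active at period end,
    a 'hard' non-financial measure in the chapter's terms."""
    return (customers_end - new_customers) / customers_start

print(economic_value_added(nopat=1_200_000, wacc=0.09, invested_capital=10_000_000))  # 300000.0
print(retention_rate(customers_start=400, customers_end=430, new_customers=50))       # 0.95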
Nonetheless, bearing in mind that the two most desired characteristics of performance measures are completeness (i.e., the measure captures the "whole truth" about performance) and controllability (i.e., the measure is influenced only by elements under the unit's control (Heneman et al., 2000)), it is clear that non-financial performance measures present many difficulties. The difficulty and subjectivity inherent in measuring non-financial performance, intertwined with the organizational importance of financial performance and the need to focus all efforts on the ultimate goal of shareholder satisfaction, has led some researchers to suggest that performance measures should be purely financial (Kurtzman, 1997; Newman, 1998).

The traditional financial performance measures have nevertheless been widely criticized (Brown and Laverick, 1994; Bourne et al., 2000; Banker et al., 2004) for: focusing on the past (Kaplan and Norton, 1992), being centered on short-term improvement (Banks and Wheelwright, 1979; Hayes and Garvin, 1982), not being aligned with strategy and goals (Gregory, 1993; Kaplan and Norton, 1992), focusing on local optimization (Eccles, 1991; Skinner, 1971), not being externally focused (Kaplan and Norton, 1992), arriving too late for any action (Johnson and Kaplan, 1987; Kanji and Sa, 2002; Kaplan, 1983; Kaplan and Norton, 1992), not aligning staff decisions and actions (Banker et al., 2004; Parker, 1979), and being too aggregated (Johnson and Kaplan, 1987). Moreover, according to Dearden (1969), financial measures can be manipulated in order to achieve better short-term performance at the expense of the organization's long-term goals.

Many authors recognized the value of both financial and non-financial performance and advocated the view of complementarity, whereby financial performance needs to be complemented by non-financial performance in order to derive valuable conclusions for the company and the employees (Ahn, 2001). Furthermore, it has been suggested, and empirically supported, that the importance placed on the use of multiple performance measures is higher in organizations facing increased competition.

2.3. Supply Chain Management Performance Measures

The need for PMS at different levels of decision making, in either industry or service contexts, is not new (Bititci et al., 2005). Many methods and techniques have been suggested over the years for SCM evaluation. Traditional methods focus on well-known financial measures, such as return on investment (ROI), net present value (NPV), the internal rate of return (IRR), and the payback period. Unfortunately, evaluation methods that rely on financial measures are not well suited to the newer generation of SCM applications. For this reason, in 1996 the Supply Chain Council (SCC) developed the Supply Chain Operations Reference (SCOR) model, which provides a common process-oriented language for communicating among supply chain partners in the following decision areas: plan, source, make, and deliver. It contains 12 metrics which fall into four categories: (a) delivery reliability metrics: delivery performance, fill rate, order fulfilment lead time, perfect order fulfilment; (b) flexibility and responsiveness metrics: supply chain responsiveness, production flexibility; (c) cost metrics: total logistics management cost, value-added employee productivity, warranty costs; and (d) asset metrics: cash-to-cash cycle time, inventory days of supply, asset turns.
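Of the SCOR asset metrics just listed, cash-to-cash cycle time is perhaps the most widely computed. The sketch below uses the conventional formula (inventory days plus receivables days minus payables days); it is a minimal illustration with invented inputs, not part of the SCOR specification itself.

def cash_to_cash_cycle_time(inventory_days_of_supply: float,
                            days_sales_outstanding: float,
                            days_payables_outstanding: float) -> float:
    """Days between paying suppliers and collecting cash from customers.
    Conventional definition: C2C = inventory days + DSO - DPO."""
    return inventory_days_of_supply + days_sales_outstanding - days_payables_outstanding

# Hypothetical figures: 60 days of inventory, customers pay in 45 days,
# suppliers are paid in 30 days -> 75 days of working capital tied up.
print(cash_to_cash_cycle_time(60, 45, 30))  # 75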
The Oliver Wight ABCD 20-point checklist is another guide used by manufacturing professionals to improve their company's performance (Wight, 1993). It addresses the following business areas: strategic planning, people and team systems, product development, continuous improvement, planning, and control. However, as Chan and Qi (2002) argue, the existing PM methods fail to provide significant assistance in supply chain development, and an effective method is lacking. Most of these methods lack a balanced approach. As suggested by Maskell (1991), companies should bear in mind that while financial PMs are important for strategic decisions and external reporting, day-to-day control of manufacturing and distribution operations is better handled with non-financial measures. Against this background, Gunasekaran et al. (2001) developed a framework for measuring performance at the strategic, tactical, and operational levels in supply chains; this framework mainly deals with supplier, delivery, customer service, and inventory/logistics costs. Many other PMS have been reported in the literature (Bititci and Nudurupati, 2002; Chan and Qi, 2003a, b; Chand et al., 2005). In a recent study, Bhagwat and Sharma (2007a, b) propose a balanced approach for SCM performance evaluation. The authors used the four perspectives of the balanced scorecard (Kaplan and Norton, 1992) and developed a new framework, structurally similar to the BSC, with corresponding metrics that reflect SCM strategy and goals.

2.4. The SME Sector and PM by SMEs

SMEs cover a wide spectrum of industries and play an important role in both developed and developing economies. The definition of SMEs varies from country to country. According to the definition proposed by the European Council, enterprises with fewer than 250 employees and less than 20 million euros in turnover per year can be referred to as SMEs (European Council, 1996). The existing literature suggests that SMEs may be differentiated from larger enterprises by a number of key characteristics, generally described as follows: personalized management with little devolution of authority (Addy et al., 1994), severe resource limitations in terms of management and manpower, reliance on a small number of customers, flat and flexible structures, high potential for innovation (Berry, 1998; Bhagwat and Sharma, 2007a; Burns and Dewhurst, 1996; Ghobadian and Gallear, 1997), informal strategies, and CEO involvement in operational decisions (Huin, 2004).

Before concluding this section, it should be mentioned that the literature review revealed poor use of PM in SMEs (Garengo and Bititci, 2007). On the other hand, research studies indicate that PMS can play a key role in supporting managerial growth in SMEs (Biazzo and Bernardi, 2003; Garengo et al., 2005). Some of the reasons mentioned in the literature include shortage of human and capital resources, lack of strategic planning, misconceptions about the benefits of PM, and an overall technical orientation (Barnes et al., 1998; Hudson et al., 2001a, b).
This chapter recommends a simple approach to SCM PM that managers can implement to evaluate their daily operations. The next section briefly describes the original BSC framework to put the study in context, and explains the reasons for selecting the BSC approach over other SCM evaluation tools.
3. Why the BSC Approach?

The BSC can be described as a management tool that claims to incorporate all quantitative and abstract measures of true importance to the enterprise (Kaplan and Norton, 1996b). According to Kaplan and Norton (1996a), the BSC provides managers with the instrumentation they need to navigate to future competitive success. They also claim that ". . . it addresses a serious deficiency in traditional management systems: their inability to link a company's long-term strategy with short-term actions." A brief analysis of the four perspectives is presented below, whereas the adaptation of the BSC to SCM evaluation is discussed in the subsequent paragraphs.

The financial perspective represents the long-term financial objectives for growth and productivity and incorporates the tangible outcomes of the strategy in traditional financial terms (EVA, profit margins, etc.); this is the perspective that appeals mostly to shareholders (Kaplan and Norton, 2004a). The customer perspective defines the value proposition that the organization will apply in order to satisfy its customers and represents the way in which intangible assets create value (Kaplan and Norton, 2004a). Thus, the measures selected should capture both the value derived for the customer (time, quality, and cost) and the outcomes that result (customer satisfaction, retention, and market share (Kaplan and Norton, 1992)). The internal business process perspective is concerned with the processes required to provide the value expected by the customers; the relevant measures are time-to-market, defects, new products, etc. (Kaplan and Norton, 2004b). The learning and growth perspective focuses on the intangible assets, mainly the internal skills and capabilities required to support the internal processes (Kaplan and Norton, 2004a). It refers to the company's employees, that is, their training, skills, and cultural attitudes; the relevant measures are employee retention, training efficiency, etc. Obviously, each of these four perspectives has to be in accordance with the business strategy of the organization. By monitoring metrics and maintaining equilibrium between all perspectives, management can control the strategy implementation process, not just to realize short-term financial outcomes, but also to develop long-term competitive capabilities (Papalexandris et al., 2004).
3.1. Criticism of the BSC

Many academics and researchers have identified serious disadvantages in the implementation of the BSC: it is a simplistic approach, and its limited number of performance measures cannot provide a holistic representation of the organization (Hoque and James, 2000). It claims to represent the performance of an organization, but some measures, such as those concerning suppliers, partners, and competitors, are overlooked (Kanji and Sa, 2002). Since the selected measures are chosen so as to be aligned with the strategy of a company at a given time, there is a need for frequent validation of the measures used (Papalexandris et al., 2004). Nonetheless, many other researchers favor the BSC model on account of its salient benefits: it can foster better PM and enhance decision making (Lipe and Salterio, 2002) by linking financial and non-financial measures in a single document (Kanji and Sa, 2002). It allows for better performance management (Epstein and Manzoni, 1998) by helping set targets in alignment with the company's strategy (Braam and Nijssen, 2004). Finally, it is a concept that is easily understood and used, which means that it can be communicated to all departments (Ahn, 2001).

4. Research Methodology and Framework

The "action research" strategy was followed, as the whole project focused on action, promoting change in the company (Cunningham, 1995; Marsick and Watkins, 1997), and involved academics with a genuine concern for the expected findings (Eden and Huxham, 1996).

4.1. Adaptation of the Standard BSC to SCM

The SCM framework applied in Company A was based on the metrics that Bhagwat and Sharma (2007a, b) proposed for SCM performance evaluation. The authors used the four perspectives of the balanced scorecard (Kaplan and Norton, 1992, 1996a, b) and developed a new framework, structurally similar to the BSC, with corresponding metrics that reflect SCM strategy and goals. Each of the four perspectives was translated into corresponding metrics and measures that reflect SCM goals and objectives (see Table 1). In addition, the authors recommended the following steps for linking the BSC to SCM objectives (a small illustrative sketch follows the complete list):

(1) Create awareness of the concept of the BSC in the organization.
(2) Collect data on the following items:
— Goals and objectives related to corporate, business, and SCM strategy
— Traditional metrics already in use for SCM evaluation
— Potential metrics related to the four perspectives of the BSC
(3) Determine the company-specific objectives and goals of the SCM function for each of the four perspectives.
Table 1. Performance metrics for the four scorecard perspectives: year performance and targets (based on metrics suggested by Bhagwat and Sharma (2007a, b)).

Performance metrics                                        Year performance (in %)    Target (in %)
Financial perspective metrics
  Gross margin                                                   +1                       +4
  Productivity                                                   +5                       +10
  Import cost                                                    +10                      −10
  Inventory holding cost                                         +15                      −5
  Cost of obsolete stock                                         +30                      −10
Customer perspective metrics
  Range of products and services                                 +50                      +50
  Delivery lead time                                             −10                      −50
  Defect-free deliveries                                         +10                      +15
  Revenue per customer                                           N/A
Internal process metrics
  Category A
    No. of import invoices                                       +20                      +10
    No. of sales invoices                                        +15                      +5
    Time for suppliers to respond to quality problems            −5                       −10
    Supplier's defect-free deliveries                            N/A
  Category B
    No. of information-reporting errors                          +5                       −10
    Sales forecasting accuracy                                   N/A
Learning and growth metrics
  Technical support staff training hours                         +5                       +5
  Administrative staff training hours                            N/A
  Employee satisfaction                                          N/A
(4) Receive comments and feedback on the balanced SCM scorecard from management, and revise it accordingly.
(5) Achieve a consensus on the balanced SCM scorecard that will be used by the organization.
(6) Communicate both the balanced SCM scorecard and its underlying rationale to all stakeholders.
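As an illustration of steps (2) through (5), the following minimal Python sketch encodes a few of the Table 1 metrics as a simple data structure and flags which ones met their targets. The direction-aware rule (positive targets are floors, negative targets are ceilings) is our own convention for this example, not part of the Bhagwat and Sharma framework.

# Illustrative sketch of a balanced SCM scorecard as a data structure.
# Metric names and values are taken from Table 1; the met_target rule
# is an assumption made for this example only.

from dataclasses import dataclass

@dataclass
class Metric:
    perspective: str      # financial / customer / internal process / learning & growth
    name: str
    actual_pct: float     # year performance, in %
    target_pct: float     # target, in %

    def met_target(self) -> bool:
        if self.target_pct >= 0:
            return self.actual_pct >= self.target_pct   # improvement target
        return self.actual_pct <= self.target_pct       # reduction target

scorecard = [
    Metric("financial", "Gross margin",       +1,  +4),
    Metric("financial", "Import cost",        +10, -10),
    Metric("customer",  "Delivery lead time", -10, -50),
    Metric("customer",  "Range of products",  +50, +50),
]

for m in scorecard:
    status = "met" if m.met_target() else "missed"
    print(f"{m.perspective:10s} {m.name:20s} {status}")

Run on these four metrics, the sketch reports only "Range of products" as met, which mirrors the discussion of missed targets in Sec. 5.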
4.1.1. Implementation

This section describes a practical example of the use of the Bhagwat and Sharma (2007a, b) SCM scorecard in a Greek SME.

Company Background

Company A is one of the leading suppliers of In Vitro Diagnostics (IVD) (laboratory diagnostics) in Greece. These products are used in diagnostic laboratories, by specialized staff, for the diagnosis of diseases and/or the assessment of the health status of tested individuals. The company operates as a retailer in industrial, B2B markets on both sides of its supply chain. It purchases and imports finished products from its suppliers, which are manufacturing organizations located in the United States and various European Union countries. Its immediate customers are the diagnostic laboratories of public and private hospitals.

Following the introduction of new product lines, the company decided to upgrade its inventory/warehousing management processes so that it could manage its stock-keeping units. In this context, top managers announced their decision to re-engineer and automate some of their complex processes and implement an ERP system, which would help the company improve performance and strengthen its position in the health industry. To achieve these goals, the company retained the services of two academics. Following several meetings with top managers, it was deemed crucial to measure SCM performance prior to software adoption in order to formulate a consistent system for assessing overall performance after the ERP implementation.

The BSC approach was finally adopted for the following reasons. First, it is the most widely accepted of all PM tools (Marr and Schiuma, 2003). Second, it has been recommended as a suitable technique for SCM PM in similar SMEs in an emerging economy (Bhagwat and Sharma, 2007a; Sharma and Bhagwat, 2007). Third, the BSC approach could be used later for the evaluation of the ERP system's performance, serving two purposes: (a) consistency before and after ERP implementation and (b) a fast and rigorous methodology that has been successfully adopted by other researchers (Chand et al., 2005). Finally, it was found to be the most appropriate due to its simplicity, feasibility, and ease of implementation. The next paragraphs discuss the actual project deployment phases.

Phase I was the project preparation, comprising (a) project team formation and (b) project planning and communication. Given that Company A is a rather small/medium-sized company, it was fairly straightforward for top management to form a team which included three executives (from the finance, marketing, and warehousing departments) as well as the two academics. The team was responsible for the entire project, including monitoring and communication. To gain the necessary commitment from everyone in the company, it was important to communicate the project effectively in a kick-off presentation. The goals and objectives, as well as the benefits that the company would reap from the SCM evaluation, were extensively analyzed.

Phase II involved the selection of the measures for monitoring SCM objectives and of the people responsible for them. This was deemed significant for identifying the specific metrics from the proposed set that would be familiar, understood, and thus useful for the company.
Phase III included setting targets and determining measurement frequency. It was decided that measures would be taken every six months in order to incorporate meaningful financial changes. A compensation scheme for target accomplishment was also developed and accepted by the CEO.

Phase IV addressed the actual SCM evaluation and analysis of results. Table 1 illustrates the findings from the evaluation of SCM prior to ERP adoption, describing the measures at the end of 2007. A discussion is provided in the next section. All phases of the SCM scorecard implementation were completed successfully. The university members acted mainly as facilitators and consultants, guiding the managers and employees in identifying and establishing the necessary measures. The following section reviews the results and the overall undertaking.

5. Discussion on Performance Indicators — Managerial Implications

The financial metrics selected for SCM performance summarized the consequences of decisions already taken and showed that Company A did not perform as projected. The ultimate financial objective of the company was to reduce the inventory holding cost, which had increased after the introduction of new product lines left many units held in stock. The latter was expected to be handled in a more efficient manner after the introduction of the ERP system.

The customer metrics reflected the company's strategy, which focused on supreme quality and customer satisfaction. By the end of 2007, Company A was in a position to deliver orders within 24 hours throughout the country, under strict observance of the shipment conditions required for each product (e.g., temperatures of −80°C or −20°C). Inventory control and warehousing policy were set in strict conformity with the strategic marketing directions. However, the company realized that it had to establish a lower customer service level in order to reduce the vast number of stock-keeping units.

The internal business process metrics were split into two categories: the first category concerned the company's ability to meet demand and customer needs; the second was linked to the company's ability to manage the first category in an efficient manner. It became apparent that a consistent marketing strategy was missing. The managers did their best to satisfy the customers, but at very high sales and importing costs.

The learning and growth metrics reflected the company's ability to create long-term growth. In line with the overall customer-centric policy, the technical department comprised competent scientists who were continually trained in new technologies. On the other hand, the remaining staff had far fewer man-hours of training for their job requirements. Adding the learning perspective helped managers realize that some of their people were neglected, and they decided to increase the training hours for the administrative staff. In addition, they took steps for job enrichment and employee satisfaction.
Finally, the managers arranged quarterly meetings to assess the attainment of targets and to review SCM performance.

The project forced the adoption of new metrics that had not been used earlier, such as the accuracy of forecasting techniques. This issue brought to light a major drawback in the marketing strategy: in their attempt to satisfy demand, marketing people would inflate the numbers without considering the implications for tied-up capital and warehousing operations. The company needed major re-engineering of its supply chain operations. It also had to adopt a new system to monitor the thousands of stock-keeping units and a new order-fulfilment system to reduce importing and ordering costs. ERP adoption could be a solution, automating and integrating the necessary processes.

6. Conclusion — Limitations — Future Research Directions

This work presented an actual application of the Bhagwat and Sharma (2007a, b) SCM scorecard at a Greek SME, aided by a university team. Our research, while agreeing with the usefulness of the BSC approach, also highlighted one of its identified shortcomings: as Jensen (2001) points out, unless all measures improve simultaneously, which will seldom be the case, it is difficult to assess overall performance based on multiple criteria (Ittner and Larcker, 2003). When managers need to improve the performance of so many different business operations, trade-offs cannot be easily decided. This complexity might encourage partial application of the BSC approach in the future, in which case the benefits reaped will be limited.

On the other hand, the SCM scorecard implementation brought many benefits to Company A. First, marketing people realized that although customer satisfaction is positively related to financial results, it is possible to have satisfied customers and go bankrupt (Foster and Gupta, 1999). The project facilitated inter-departmental understanding and revealed the weaknesses of the marketing strategy. Another intangible benefit stemmed from the active involvement of some employees, who spoke their minds and expressed suggestions for the future ERP implementation. The real contribution, however, lay in the learning process and the profound exploration and understanding of specific SCM processes and goals, which were translated into metrics that, in turn, became performance indicators. This could reduce the expected resistance to the future re-engineering of SCM processes. The project also contributed towards better relationships between marketing and warehousing employees, as they understood that meeting individual department targets does not necessarily mean overall success. Our general comment is that the framework may act as a guidance tool for the day-to-day and tactical operations inherent in SCM.

Finally, the BSC approach could be further improved to steer the company throughout the ERP implementation project; this is very important for SMEs, which do not have the resources to address every critical success factor as they should, and are forced to make implementation compromises according to resource constraints (Sharma and Bhagwat, 2006; Sun et al., 2005).
References

Addy, C, J Pearce and J Bennet (1994). Performance measures in small manufacturing enterprises: Are firms measuring what matters? 10th National Conference on Manufacturing Research (Proceedings), pp. 110–114. Loughborough: Taylor & Francis Publishing.
Ahn, H (2001). Applying the balanced scorecard concept: An experience report. Long Range Planning, 34, 441–461.
Amir, E and B Lev (1996). Value-relevance of non-financial information: The wireless communication industry. Journal of Accounting and Economics, 22, 3–30.
Argyris, C (1977). Organisational learning and management information systems. Accounting, Organisations and Society, 2, 113–123.
Atkinson, AA, JH Waterhouse and RB Wells (1997). A stakeholder approach to strategic performance measurement. Sloan Management Review, 38, 25–37.
Banker, RD, H Chang, SN Janakiraman and C Kostans (2004). Analysing the underlying dimensions of firm performance metrics. European Journal of Operational Research, 154, 423–436.
Banks, RL and SC Wheelwright (1979). Operations versus strategy: Trading tomorrow for today. Harvard Business Review, 57, May–June, 112–120.
Barnes, M, T Dickinson, L Coulton, S Dransfield, J Field, N Fisher, I Saunders and D Shaw (1998). A new approach to performance measurement for small to medium enterprises. Proc. of International Conference on Performance Measurement Theory and Practice, Cambridge, MA, 14–17 July 1998.
Berry, M (1998). Strategic planning in small and high tech companies. Long Range Planning, 31, 455–466.
Bhagwat, R and MK Sharma (2007a). Performance measurement of supply chain management: A balanced scorecard approach. Computers and Industrial Engineering, 53, 43–62.
Bhagwat, R and MK Sharma (2007b). Performance measurement of supply chain management using analytical hierarchy process. Production Planning & Control, 18, 666–680.
Biazzo, S and G Bernardi (2003). Organisational self-assessment options: A classification and a conceptual map for SMEs. International Journal of Quality & Reliability Management, 20, 881–900.
Bititci, US and SS Nudurupati (2002). Using performance measurement to derive continuous improvement. Manufacturing Engineer, 81, 230–235.
Bititci, US, S Cavalieri and G Cieminski (2005). Implementation of performance measurement systems: Private and public sectors. Editorial, Production Planning and Control, 16, 99–100.
Bourne, M, J Mills, M Wilcox, A Neely and K Platts (2000). Designing, implementing and updating performance measurement systems. International Journal of Operations & Production Management, 20, 754–771.
Braam, GJM and EJ Nijssen (2004). Performance effects of using the balanced scorecard: A note on the Dutch experience. Long Range Planning, 37, 335–349.
Brown, DM and S Laverick (1994). Measuring corporate performance. Long Range Planning, 27, 89–98.
Burns, P and J Dewhurst (1996). Small Business and Entrepreneurship, 2nd Ed. London: Macmillan Press.
Chan, FTS and HJ Qi (2002). A fuzzy basis channel-spanning performance measurement method for supply chain management. Proceedings of The Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 216, 115–1167.
Chan, FTS and HJ Qi (2003a). An innovative performance measurement method for supply chain management. Supply Chain Management — An International Journal, 8, 209–223.
Chan, FTS and HJ Qi (2003b). Feasibility of performance measurement systems for supply chains: A process-based approach and measures. Integrated Manufacturing Systems, 14, 179–190.
Chan, FTS, HK Chan and HJ Qi (2006). A review of performance measurement systems for supply chain management. International Journal of Business Performance Management, 8, 110–131.
Chand, D, G Hachey, J Hunton, V Owhoso and S Vasudevan (2005). A balanced scorecard based framework for assessing the strategic impacts of ERP systems. Computers in Industry, 56, 558–572.
Chandler, AD (1990). Scale and Scope: The Dynamics of Industrial Capitalism. Cambridge, MA: Harvard University Press.
Cunningham, JB (1995). Strategic considerations in using action research for improving personnel practices. Public Personnel Management, 24, 515–529.
Dearden, J (1969). The case against ROI control. Harvard Business Review, 47, May–June, 124–135.
Eccles, RG (1991). The performance measurement manifesto. Harvard Business Review, January–February, 131–137.
Eden, C and C Huxham (1996). Action research for management research. British Journal of Management, 7, 75–86.
Epstein, M and JF Manzoni (1998). Implementing corporate strategy: From tableaux de bord to balanced scorecard. European Management Journal, 16, 190–203.
Foster, G and M Gupta (1999). The customer profitability implications of customer satisfaction. Working paper, Stanford University and Washington University, St. Louis, MO.
Garengo, P and US Bititci (2007). Towards a contingency approach to performance measurement: An empirical study in Scottish SMEs. International Journal of Operations and Production Management, 27, 802–825.
Garengo, P, S Biazzo, A Simonetti and G Bernardi (2005). Performance measurement systems in SMEs: A review for a research agenda. International Journal of Management Reviews, 7, 25–47.
Ghobadian, A and D Gallear (1997). TQM and organisational size. International Journal of Operations and Production Management, 17, 121–163.
Gregory, M (1993). Integrated performance measurement: A review of current practice and emerging trends. International Journal of Production Economics, 30 and 31, 281–296.
Gunasekaran, A, C Patel and E Tirtiroglu (2001). Performance measures and metrics in a supply chain environment. International Journal of Operations & Production Management, 21, 71–87.
Hayes, RH and DA Garvin (1982). Managing as if tomorrow mattered. Harvard Business Review, May–June, 70–79.
Heneman, RL, GE Ledford and MT Gresham (2000). The Effects of Changes in the Nature of Work on Compensation. San Francisco, CA: Jossey-Bass.
Hoque, Z and W James (2000). Linking balanced scorecard measures to size and market factors: Impact on organisational performance. Journal of Management Accounting Research, 12, 1–17.
Hudson, M, J Lean and PA Smart (2001a). Improving control through effective performance measurement in SMEs. Production Planning and Control, 12, 804–813.
Hudson, M, PA Smart and M Bourne (2001b). Theory and practice in SME performance measurement systems. International Journal of Operations & Production Management, 21, 1096–1115.
Huin, SF (2004). Managing deployment of ERP systems in SMEs using multi-agents. International Journal of Project Management, 22, 511–517.
Ittner, CD and DF Larcker (2002). Determinants of performance measure choices in worker incentive plans. Journal of Labour Economics, 20, 58–90.
Ittner, CD and DF Larcker (2003). Coming up short on financial performance measurement. Harvard Business Review, 81, November, 88–95.
Jensen, MC (2001). Value maximisation, shareholder theory and the corporate objective function. European Financial Management, 7, 297–317.
Johnson, HT and RS Kaplan (1987). Relevance Lost: The Rise and Fall of Management Accounting. Boston: Harvard Business School Press.
Kanji, GK and PM Sa (2002). Kanji's business scorecard. Total Quality Management, 13, 13–27.
Kaplan, RS (1983). Measuring manufacturing performance: A new challenge for managerial accounting research. The Accounting Review, 58, 686–705.
Kaplan, RS and DP Norton (1992). The balanced scorecard: Measures that drive performance. Harvard Business Review, 70, 71–80.
Kaplan, RS and DP Norton (1996a). The Balanced Scorecard: Translating Strategy into Action. Boston: Harvard Business School Press.
Kaplan, RS and DP Norton (1996b). Using the balanced scorecard as a strategic management system. Harvard Business Review, 74, 75–85.
Kaplan, RS and DP Norton (2004a). Measuring the strategic readiness of intangible assets. Harvard Business Review, 82, 52–63.
Kaplan, RS and DP Norton (2004b). Strategy Maps: Converting Intangible Assets into Tangible Outcomes. Boston: Harvard Business School Press.
Kurtzman, J (1997). Is your company off course? Now you can find out why. Fortune, 135, 128–130.
Lebas, MJ (1995). Performance measurement and performance management. International Journal of Production Economics, 41, 23–35.
Lipe, MG and S Salterio (2002). A note on the judgmental effects of the balanced scorecard's information organisation. Accounting, Organisations and Society, 27, 531–540.
Marr, B and G Schiuma (2003). Business performance measurement — Past, present and future. Management Decision, 41, 680–687.
Maskell, BH (1991). Performance Measurement for World Class Manufacturing. Portland, OR: Productivity Press.
Marsick, VJ and KE Watkins (1997). Case study research methods. In Human Resource Development Research Handbook, Swanson, RA and EF Holton (eds.), pp. 138–157. San Francisco, CA: Berrett-Koehler.
Nanni, AJ, JG Miller and TE Vollmann (1988). What shall we account for? Management Accounting, 69, 42–48.
Neely, A, M Gregory and K Platts (1995). Performance measurement system design: A literature review and research agenda. International Journal of Operations and Production Management, 15, 80–116.
Neely, A, J Mills, K Platts, M Gregory and H Richards (1996). Performance measurement system design: Should process-based approaches be adopted? International Journal of Production Economics, 46 and 47, 423–431.
Newman, G (1998). The absolute measure of corporate excellence. Across the Board, 28, 10–12.
Papalexandris, A, G Ioannou and GP Prastacos (2004). Implementing the balanced scorecard in Greece: A software firm's experience. Long Range Planning, 37, 351–366.
Parker, C (2000). Performance measurement. Work Study, 49, 63–66.
Parker, LD (1979). Divisional performance measurement: Beyond an exclusive profit test. Accounting and Business Research, 36, 309–319.
Rappaport, A (1999). New thinking on how to link executive pay to performance. Harvard Business Review, 91–101.
Rose, KH (1995). A performance measurement method. Quality Progress, 28, 63–66.
Sharma, MK and R Bhagwat (2006). Performance measurements in the implementation of information systems in small and medium-sized enterprises: A framework and empirical analysis. Measuring Business Excellence, 10, 8–21.
Sharma, MK and R Bhagwat (2007). An integrated BSC-AHP approach for supply chain management evaluation. Measuring Business Excellence, 11, 57–68.
Skinner, W (1971). The anachronistic factory. Harvard Business Review, 49, 61–70.
Sun, AYT, A Yazdani and JD Overend (2005). Achievement assessment for enterprise resource planning (ERP) system implementations based on critical success factors (CSFs). International Journal of Production Economics, 98, 189–203.
Waggoner, DW, AD Neely and P Kennerley (1999). The forces that shape organisational performance measurement systems: An interdisciplinary review. International Journal of Production Economics, 60 and 61, 53–60.
Wight, O (1993). The Oliver Wight ABCD Checklist, 4th Ed. New York, NY: John Wiley & Sons.
Biographical Notes

Maria Argyropoulou is an Associate Lecturer at Strathclyde Business School, Scotland, and a Research Associate at the Athens University of Economics and Business, Greece. She holds a BSc degree in physics and mathematics from the Kapodistriako University of Athens, Greece, an MSc in decision science from the Athens University of Economics and Business, and an MBA from Strathclyde University, Scotland. She is also a PhD candidate at Brunel Business School, UK, and her research interests focus on management science and on IT systems implementation and evaluation.
She has worked for Greek and international companies for more than 10 years, specializing in property valuation, business process re-engineering, and enterprise systems implementation. Her work has been published in various journals and conference proceedings.

Milind Kumar Sharma is an Associate Professor and has taught many subjects related to production and industrial engineering and operations management. Prior to joining the Department of Production & Industrial Engineering, M.B.M. Engineering College, J.N.V. University, Jodhpur, in 1998, he served in industry for four years. He has been awarded research projects under the Career Award for Young Teacher Scheme by the All India Council for Technical Education (AICTE), the Department of Science and Technology, and the University Grants Commission (UGC), New Delhi, India. His areas of research interest include management information systems, performance measurement, supply chain management, and small business development. He has published research papers in Production Planning and Control, Computers & Industrial Engineering, International Journal of Productivity and Quality Management, Journal of Manufacturing Technology Management, International Journal of Globalization and Small Business, International Journal of Enterprise Network Management, and Measuring Business Excellence. He has also reviewed a number of research articles for many reputed international journals. Currently, he is editing one international journal on performance measurement.

Rajat Bhagwat is Professor in the Department of Mechanical Engineering, M.B.M. Engineering College, J.N.V. University, Jodhpur. He has also worked as a research assistant at the University of Hong Kong, Hong Kong. His areas of research interest include information systems, simulation, and the modeling and control of flexible manufacturing systems. He has working experience in industrial projects in the areas of production planning and control, capacity expansion, and layout planning. He has been awarded a postdoctoral fellowship at the University of Bordeaux, France. He has a number of publications in international journals and conferences.

Themistokles Lazarides is a doctoral candidate at Democritus University of Thrace, Greece. He is an economist, and currently works as a lecturer in the Department of Applied Informatics in Administration and Economy at the Technological Institute of West Macedonia, Grevena, Greece. His research interests focus on corporate governance and ES systems design and implementation. His work has appeared in Corporate Governance: An International Journal of Business in Society, Information Management & Computer Security, and various conference proceedings.

Dimitrios N. Koufopoulos (BSc, MBA, PhD, MCMI, FIMC) is a lecturer at Brunel Business School.
His work has appeared in the European Marketing Academy Conference, British Academy of Management, and Strategic Management Society proceedings, and in various journals, including Long Range Planning, Journal of Strategic Change, Journal of Financial Services Marketing, Management Decision, European Business Review, and Corporate Board. His research interests are in strategic planning systems, top management teams, corporate governance, and corporate strategies.

George Ioannou is an Associate Professor at the Department of Management Science and Technology of the Athens University of Economics and Business, the Acting Director of the International MBA Program, and Head of the Operations & ERP Systems Center. He was previously an Assistant Professor at the Department of Industrial and Systems Engineering of Virginia Tech, directing the Manufacturing Systems Integration Laboratory. He received his diploma in mechanical engineering from the National Technical University of Athens, his MSc/DIC in industrial robotics and manufacturing automation from Imperial College, London (on an SERC scholarship), and his PhD from the Institute for Systems Research of the University of Maryland at College Park, USA (on a National Science Foundation scholarship). His research focuses on the analysis, design, and optimization of complex systems. He is the recipient of the Microsoft Excellence in Education Award, has been honored with many Teaching Excellence Awards, and is a member of the Editorial Board of Production Planning & Control.
Chapter 30
Information Sharing in Service Supply Chain

SARI UUSIPAAVALNIEMI
Department of Industrial Engineering and Management, University of Oulu,
P.O. Box 4610, 90014 University of Oulu, Oulu, Finland
[email protected]

JARI JUGA∗ and MAQSOOD SANDHU†
Department of Management and Entrepreneurship, University of Oulu,
P.O. Box 4600, 90014 University of Oulu, Oulu, Finland
∗[email protected]
†[email protected]
The aim of the present study is to achieve an understanding of the dynamic process of information sharing in the service supply chain context, which is still a largely unexplored theme in supply chain management (SCM). The conceptual framework is followed by a case study from the steel industry, which describes and analyzes the information sharing related to a service supply chain in maintenance operations. The service providers offer business services to a large international steel manufacturer. Empirical findings were gathered through a qualitative study using personal interviews and workshops with the key actors involved in the maintenance operations. Two different types of service providers are included in the study (maintenance-related planning and mechanical maintenance). A description of the kinds of information, the phase, and the information sharing medium in the case supply chain dyads is provided. We consider six aspects that define the capabilities for information sharing in a supply chain: processes, information characteristics, information technology, supply chain collaboration, practices/procedures for information sharing, and time aspects. These elements, together with the needs for information sharing, define the level of information integration in supply chain dyads. The level of information integration is then assessed in the two types of dyads. To develop information sharing in the case service supply chain, a more structured approach with commonly defined and documented procedures for information sharing and a shared centralized maintenance information system is suggested. The study justifies the need for information integration in business services and contributes to the limited research on information sharing in the context of service supply chains.

Keywords: Information sharing; service supply chain; maintenance operations.
1. Introduction

Information is a key element in guiding both products and services. It can be defined as follows: "if input data are received and they have some value for someone in some context, and if they can be magnified and processed, then they are called information." Information becomes a component of knowledge when it is analyzed critically and when its underlying structure is understood in relation to other pieces of information and conceptions (Sandhu and Gunasekaran, 2004). Information sharing is crucial to the successful resolution of operational problems. Information sharing also plays a critical role in the supply chain management (SCM) context, and the need for more research on the function of information sharing has been raised, for instance, by Moberg et al. (2002). Most of the research concerning information sharing in supply chains relates to inventory, forecasting, orders, and production plans (e.g., Lee and Whang, 2000). Yet there is also other critical information to be exchanged between supply chain members that has gained less attention in supply chain research thus far.

The majority of supply chain research focuses on the manufacturing sector (e.g., Sengupta et al., 2006; van der Vaart and van Donk, 2008). Moreover, supply chain integration research mainly deals with the links between manufacturers and suppliers, customers, or distributors; service providers are often left out of scope. According to Ellram et al. (2004), services have largely been ignored in supply chain research. The need for more research concerning service supply chain practices has also been highlighted by Sengupta et al. (2006) and Closs and Savitskie (2003). Service outsourcing is growing very rapidly (e.g., Allen and Chandrashekar, 2000), and thus the need to understand and manage the service supply chain will gain more importance in the future (Ellram et al., 2004). Companies still often manage services as if they were materials, using the same information systems (e.g., enterprise resource planning [ERP]) for services and materials, although services have special characteristics and service delivery differs substantially from material delivery. The need to recognize the differences between manufacturing and services, and to develop models and empirical knowledge that focus especially on services, has been highlighted, e.g., by Nie and Kellogg (1999) and Baltacioglu et al. (2007).

Services are often described as being intangible, inseparable, heterogeneous, and perishable (Nie and Kellogg, 1999). Services are also usually more difficult to visualize and to measure than physical products. Business services are defined as services delivered to organizations (e.g., Homburg and Garbe, 1999), and they often have to be customized to meet the purchasing organization's needs (Fitzsimmons et al., 1998). In a service supply chain (Ellram et al., 2004; Sengupta et al., 2006), human labor forms a crucial component of the value delivery process. It is difficult to standardize and centralize procedures, as variation and uncertainty in the output are high due to the high degree of human involvement. Services cannot be inventoried, and thus the focus in the service supply chain is on managing capacity, flexibility of resources, information flows, service performance, and cash flow.
The performance of a service provider is difficult to measure, the specification of a desired service is less precise than for a product, and it is difficult to judge whether the services being provided are meeting expectations (e.g., Fitzsimmons et al., 1998; Ellram et al., 2004, 2007). Åhlström and Nordin (2006) found in their study that most of the knowledge required for delivering a service was embedded in employees' minds and that experience was difficult to document. They conclude that this kind of tacit knowledge is presumably more evident and significant in many service businesses than in manufacturing businesses.

The proportion of services in the operations of even traditional manufacturing companies is increasing (a phenomenon called servitization): manufacturers become service providers or solution providers. For example, for several large Finnish manufacturers, such as Metso, Kone, and Wärtsilä, the maintenance business related to their products, i.e., the industrial services that they sell to their customers, already constitutes a third or even half of the companies' annual turnover (information from the annual reports, 2007). Business services bought by traditional heavy industries (such as paper, steel, etc.) form only a small proportion of the annual turnover of the companies, as material costs are the major item of expenditure (e.g., Rautaruukki Annual Report, 2007). But when the bought business services are considered as a percentage of the total operation and maintenance costs of the production facilities, they constitute a substantial share. In Finland, the average maintenance costs are about 5.5% of company turnover, but they can account for up to 25% of company turnover (Komonen, 2002). Finnish industry invests in total about 3.5 billion euros yearly in the maintenance of machines (Salin, 2007). In the Finnish maintenance service provider market, there are several large actors (ABB Services, YIT, and Maintpartner being the three biggest) and a number of small- or medium-sized service providers who often also import or manufacture the production equipment themselves (Salin, 2007).

To address this kind of integration of services and products, it is essential to gain a deeper understanding and knowledge of managing business services, particularly in the supply chain context. Therefore, the driving force behind this study is the discrepancy between the realities of firms engaged in services and the available studies covering SCM and information sharing systems. The effort is to elaborate some thoughts and views on understanding the information sharing that integrates business services in SCM.

In the next section of the present study, we discuss the theoretical background of the study and highlight previous research and gaps in this new area. Section 3 describes methodological considerations and emphasizes the importance of information sharing for the maintenance process. We hypothesize that a well-established, systematic communication infrastructure will have a positive impact on information sharing. The importance of knowledge management and information sharing in performing operational tasks is highlighted.
Section 4 describes the case study, where an empirical analysis of a steel manufacturing industry maintenance service supply chain is presented. Finally, conclusions are drawn in Sec. 5.

2. Theoretical Background

Information integration involves the sharing of pertinent knowledge and information among members of a supply chain (Lee and Whang, 2000). In a service supply chain, the information flow is critical for identifying demand, sharing information, establishing expectations (through a service level agreement or a statement of work clearly defining the scope of the work and the skills required of service providers), and providing performance feedback. It is also essential in monitoring ongoing performance to determine the timing and amount of payment (Ellram et al., 2004). Information sharing has been shown to be a central enabler of effective supply chain management (e.g., Cooper et al., 1997; Tan et al., 2002; Moberg et al., 2002; Min and Mentzer, 2004). For example, companies such as Cisco and Dell have successfully utilized information sharing to link manufacturing operations with upstream and downstream members of the supply chain (e.g., Zhou and Benton, 2007). However, it should be noted that information sharing does not automatically result in better supply chain performance; instead, proponents of the information processing view (Galbraith, 1974) emphasize that a fit should exist between information processing needs and information processing capabilities, not only within an organization but also in inter-organizational interactions (e.g., Premkumar et al., 2005).

2.1. Aspects of Supply Chain Information Sharing

The inter-relation between information sharing and process improvement/integration has been indicated, for instance, by Bhatt (2000). Alter (1999) likewise views information sharing as part of process integration. The information characteristics in information sharing have been analyzed, for instance, from the aspects of information quality (e.g., Monczka et al., 1998; Moberg et al., 2002; Li and Lin, 2006; Zhou and Benton, 2007) and availability (Gustin et al., 1995). The quality of information sharing encompasses the accuracy, timeliness, adequacy, and credibility of the information shared (Moberg et al., 2002; Monczka et al., 1998). Information sharing practices, or the extent of information sharing (the quantity aspect of information: the extent to which critical and proprietary information is communicated to a supply chain partner), have been covered by Monczka et al. (1998), Li and Lin (2006), Li et al. (2006), and Zhou and Benton (2007). In addition, another aspect that is commonly mentioned is the technological aspect, which includes both the hardware and the software needed to support information sharing (e.g., Zhou and Benton, 2007).
According to Nonaka and Takeuchi (1995), knowledge sharing is done in two ways: by articulation and by socialization. The term "articulation" refers to an individual formulating the fundamentals of his or her own tacit knowledge in an explicit manner that can be stored or shared. In contrast, the term "socialization" refers to the sharing of tacit knowledge among people. The first involves tacit knowledge becoming explicit, whereas in the second tacit knowledge remains tacit. If knowledge does not become explicit, the organization cannot easily use it. Naaranoja and Hartman (2006) argue that conditions conducive to knowledge sharing in a company enhance the opportunities for communication and interaction, and that this motivates members of an organization to engage in the transfer of knowledge. The authors found that construction firms were aware of the need to improve knowledge sharing, but there were flaws in the knowledge sharing processes, such that knowledge was not transferred from one person to another in an optimal manner, especially at the project level. However, these authors did not study knowledge sharing in industries other than construction, and they did not discuss the role of information technology in such industries.

The positive effects of information technology (IT) on business process improvement have been shown in several other studies (e.g., Mukhopadhyay et al., 1995; Mirani and Lederer, 1998; Bhatt, 2000). The benefits of IT include improvements in customer service, efficiency, information quality, and agility (Auramo et al., 2005). Although IT has an essential role in achieving better supply chain integration and performance, some IT-related issues, such as the lack of appropriate IT systems, poor information visibility, and multiple platforms, also function as hindrances to integration (Bagchi and Skjoett-Larsen, 2003). A study by Sohal et al. (2001) indicates that industries gain only moderate benefits from their IT investments, and the benefits are mostly related to improving productivity and reducing costs. Service industries have also managed to improve their responsiveness to market needs through IT and seem to have better understood the strategic possibilities of IT. However, the improvements have still been mainly limited to operational areas (Sohal et al., 2001).

The inclusion of collaborative aspects when examining information sharing has been suggested by Moberg et al. (2002) and Bailey and Francis (2008). Moreover, Bagchi and Skjoett-Larsen (2003) have considered the collaborative aspect by examining organizational linkages together with information integration. Finally, the time element has also been considered important in information sharing. For instance, the timing of information sharing (Widén-Wulff and Davenport, 2005) and information lead time (Stalk and Hout, 1990) have been brought out in the literature.

2.2. Levels of Information Integration

The level of integration describes the extent to which integrative activities within one dimension are developed (e.g., Frohlich and Westbrook, 2001; Van Donk and Van der Vaart, 2005).
Some attempts to identify and analyze levels of integration exist in the literature. Spens (2001) has used five levels of integration to explore process integration. The levels defined by Alter (1999) are common culture, common standards, information sharing, coordination, and collaboration. Spekman et al. (1998) have used three levels (cooperation, coordination, and collaboration) to describe "levels of intensity" among trading partners. In both approaches, the first levels are threshold levels of interaction, and at the uppermost level the interdependency of processes/partners is very high. Bagchi and Skjoett-Larsen (2003) have also described the development of integration in three stages: low, medium, and high integration. Van der Vaart and Van Donk (2004) have defined three stages of integration: the transparency stage, the commitment/coordination stage, and the integrative planning stage. From the perspective of information sharing, they can be characterized as follows:

• The Transparency Stage: Supply chain members share some relevant information. The incompatibility of information systems and a lack of mutual trust (i.e., fear of misuse of information) constitute the major barriers to integration.
• The Commitment and Coordination Stage: Supply chain members share all relevant information. A lack of trust, together with the incompatibility of information systems, remains an important barrier to integration.
• The Integrative Planning Stage: The planning and control of a supply chain (or a part of it) is more or less centralized. The existence of shared resources (e.g., the same supplier supplying several buyers) forms a major barrier in this stage.

According to Van der Vaart and Van Donk (2004), these stages are overlapping and especially useful for describing a dyadic relationship in a supply chain. The level of integration does not have to be the same for a whole chain or for all supplier–buyer dyads. Nor does a dyad necessarily have to start at stage 1 and then move up to stage 3. Instead, these stages merely describe the level of integration (Van der Vaart and Van Donk, 2004).

2.3. Framework for Analyzing the Case Supply Chain

Based on the theoretical study, we have identified six aspects that determine the capabilities for information sharing/integration in a supply chain. All the aspects are interrelated, and developing one of the integration elements may lead to improvements in other elements. The capabilities and needs for information sharing together define the level of integration in a supply chain dyad. The framework is presented in Fig. 1.

[Figure 1. The level of information integration is defined by the needs and capabilities for information sharing. The figure links the capabilities for information sharing (processes; information characteristics; information technology; supply chain collaboration; practices/procedures for sharing; time aspects) and the need for information sharing to the level of information integration (transparency; commitment and coordination; integrative planning).]

To understand the needs and drivers for information sharing in maintenance operations and the related supply chain, we need to describe maintenance more closely. Maintenance can be defined as an activity that aims to optimize the availability and reliability of production equipment and maintain its operability at an acceptable cost level (Coetzee, 1997). Maintenance is viewed as a support function for manufacturing companies. However, its strategic importance for manufacturing companies with significant investments in physical assets is considerable, and it can truly affect the competitiveness of companies. The way maintenance is performed affects the availability of production facilities, the volume, quality, and cost of production, and the safety of the operation (Visser, 1998; Tsang, 2002). There have been attempts to take a more holistic approach to maintenance (e.g., Coetzee, 1999; Söderholm et al., 2007) and to emphasize its strategic importance (e.g., Tsang, 2002). However, the maintenance supply chain, that is, the relationships with external service providers, has not been particularly covered in research.

Maintenance and repair services can be classified as equipment support services focusing on property, and they are very important to the customer company's core business activity. Thus, it is important that the service provider has experience in the purchaser's industry and a good reputation. The service provider should ideally be located nearby to provide emergency service (Fitzsimmons et al., 1998).

The approach to maintenance has evolved from reactive maintenance to a more integrated and proactive approach (e.g., Garg and Deshmukh, 2006). This highlights the importance of maintenance planning and thus the role of information sharing. Information is one of the maintenance inputs, and the information flow should be carefully considered in maintenance management (Visser, 1998). Indeed, sound maintenance management requires a great deal of interaction with other business functions (e.g., Garg and Deshmukh, 2006) and with supply chain partners such as maintenance service providers and suppliers of materials, spare parts, and tools. Nowadays, maintenance functions, or parts of them, are often outsourced to an external service provider as companies pursue cost reductions and concentrate on their core competences. Thus, external services should be considered an input to the maintenance system or process as well (e.g., Visser, 1998). Outsourcing often results in a more complicated maintenance supply chain to coordinate. This further highlights the importance of a swift information flow in maintenance supply chains and emphasizes the need for integrated information systems between the various partners of the maintenance supply chain.
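Before turning to the case study, the logic of the Fig. 1 framework can be illustrated with a short sketch. The chapter defines the framework qualitatively; the numeric scales, the averaging of aspect scores, and the stage thresholds below are illustrative assumptions only.

```python
# Illustrative sketch of the Fig. 1 framework: six capability aspects and the
# need for information sharing jointly bound the integration level of a dyad.
# The 1-5 scales, the averaging, and the thresholds are assumptions made for
# illustration; the chapter itself defines the framework qualitatively.

CAPABILITY_ASPECTS = (
    "processes",
    "information_characteristics",
    "information_technology",
    "supply_chain_collaboration",
    "sharing_practices",
    "time_aspects",
)

def integration_level(capabilities: dict, need: float) -> str:
    """Map aspect scores (1-5) and the sharing need (1-5) to a stage label."""
    missing = [a for a in CAPABILITY_ASPECTS if a not in capabilities]
    if missing:
        raise ValueError(f"unscored aspects: {missing}")
    capability = sum(capabilities[a] for a in CAPABILITY_ASPECTS) / len(CAPABILITY_ASPECTS)
    score = min(capability, need)  # integration is bounded by both sides
    if score >= 4.0:
        return "integrative planning"
    if score >= 2.5:
        return "commitment and coordination"
    return "transparency"

# Example: a dyad with fair capabilities but weak, undocumented sharing practices.
dyad = {aspect: 3 for aspect in CAPABILITY_ASPECTS}
dyad["sharing_practices"] = 2
print(integration_level(dyad, need=4))  # -> commitment and coordination
```

The min() captures the framework's central idea: the realized level of integration is bounded by both the capabilities for and the need for information sharing.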
The impact of IT on maintenance management is still a relatively new research area, but the opportunities of IT have been recognized by Pintelon et al. (1999). Research concerning information integration in a business service context is still scarce and focused on developments achieved through certain information technologies. For instance, Minkus and Nobs (2006) have discussed improving the use and exchange of information in industrial services with an IT solution. Persona et al. (2007) have analyzed the impact of e-business technologies on maintenance management and supply chain integration. Salmela and Lukka (2005) have examined the development possibilities of e-business in a network between small local maintenance service providers and large forest industry customers. More research dealing with information sharing as a whole, and not just information technologies, is needed in the service context. Business-to-business services have generally gained less attention in service research thus far (e.g., Åhlström and Nordin, 2006).

3. Research Methodology

The case study method has been an important form of research in the social sciences and management (Chetty, 1996). By combining previously developed theories with new empirically derived insights (Yin, 2003), the case study approach is especially appropriate in the study of new topics or topics on which little data are available. It can transcend the local boundaries of the investigated cases, capture new layers of reality, and develop novel, testable, and empirically valid theoretical and practical insights (Eisenhardt, 1989; Tsoukas, 1989; Voss et al., 2002). Although it has been argued that case studies are more appropriate for the development of theories than for the testing of theories, case study research can actually be used for both (Woodside and Wilson, 2003). It enables a researcher to go beyond a cross-sectional "snapshot" of a process (Miles and Huberman, 1994) and helps create a better understanding of the variations of the phenomena under scrutiny (Skaates and Tikkanen, 2003).

The service supply chain study introduced in this chapter was carried out during 2006–2008 as a qualitative case study. The research is part of a larger research project dealing with information interoperability between separate data systems and data storages in a steel industry maintenance service supply chain. Along with the focal company (a large Finnish steel manufacturer), two engineering offices providing maintenance-related planning and four service providers offering mechanical maintenance services were included in the study. Information sharing relating to the annual repairs was chosen as the focus of the interviews, as it involves both planned and unplanned maintenance and a large group of external service providers and thus represents the complete maintenance field very well.

The research material was mainly collected through structured interviews in seven companies of the steel industry maintenance service supply chain. The structured interviews lasted approximately two hours. The interviews were taped, and a selective transcription of them was later made. A summary report of the interviews was written and sent to the interviewees for verification.
The interviewees in the focal company (three) were the persons responsible for planning and/or implementing the maintenance of certain production lines. The interviewees in the mechanical maintenance service provider companies (four) were supervisors or other persons responsible for certain maintenance contracts performed at the focal company's facilities. The persons interviewed in the engineering offices providing maintenance-related planning (two) were the superiors of the mechanical designers.

Additional data were also gathered through five workshops, company documents, and company visits. In addition to the focal company, two information system service providers cooperating closely with the focal company were visited and involved in the workshops. Several researchers from the project research group participated in the workshops and company visits. The company visits lasted two to three hours, and two rounds of visits were made. Each of the five workshops arranged for company representatives lasted three to five hours. The workshops and company visits were also taped and later documented as short reports that were sent to the participants for verification. The contents of the interviews and workshops are described in more detail in Sec. 4. Various company documents were received during the company visits and interviews; these included, for instance, presentations about the maintenance operations at the focal company, minutes of meetings among the maintenance personnel, timetables for the annual repairs and the planning related to them, and organization charts.

Examining only one supply chain may reduce the external validity of this study (see Voss et al., 2002; Yin, 2003). However, multiple supply chain dyads were reviewed to address this problem. Construct validity was ensured by using multiple data collection methods and multiple data sources (see Yin, 2003). Observer bias was reduced by tape-recording the interviews, company visits, and workshops and by using multiple observers when possible (see Voss et al., 2002). The reliability of the study was strengthened by using a consistent set of interview questions and carefully documenting each research phase in a project database (e.g., Yin, 2003).

4. Case Study Findings

This section first describes how the research data were collected in the case supply chain and how they were processed and analyzed. Then the main findings of the research are discussed.

4.1. Data Collection and Analysis

Workshops 1 and 2 dealt with defining the empirical research case and the preliminary determination of the key actors, processes, and information to be included in the empirical study. After the two workshops, structured interviews were performed in the case supply chain companies. The interview themes/questions dealt with:
• The activities and roles of supply chain actors in the maintenance processes
• What information is shared in the supply chain, and how and when
• Procedures for information sharing and their consistency
• How and which information technology is utilized in supply chain information sharing
• Hindrances or problem areas related to information sharing (also related to the quality, availability, and form of information)
• Whether some critical information is missing
• Factors facilitating information sharing
• How performance is measured in the maintenance process/supply chain
• How the shared/collected information is utilized
• Preliminary performance effects that could be gained by developing supply chain information sharing.
Based on the interviews, a process chart of the activities, information flows, and exchanged documents in the service supply chain was established. Factors facilitating and hindering information integration were identified, and development areas in the case supply chain's information sharing were listed.

In Workshop 3, the preliminary results of the structured interviews were discussed. The group discussion involved classifying the maintenance-related information shared in the supply chain into different forms of information. Additionally, development areas in information integration in the maintenance supply chain were grouped under the six elements of information integration: processes, information characteristics, information technology, information sharing practices, collaboration, and time aspects. Workshop 4 included a group discussion in which the most important performance development targets for the supply chain were identified and the importance of the six information integration elements for this performance development was assessed.

Workshop 5 was organized to prioritize the problem areas of information sharing identified in the interviews. The main causes of the problems were identified with the help of an Ishikawa diagram. The problem areas were prioritized by giving them "problem priority numbers," i.e., by assessing the probability of occurrence of the problem, the severity (meaning) of the problem, and the probability of detecting the problem. Ishikawa diagrams were also employed when conceiving practical development guidelines for eliminating the problem areas in the supply chain's information sharing. Finally, the development ideas were prioritized, and a consensus on three main development lines was formed.
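The "problem priority number" follows the same logic as the risk priority number used in failure mode and effects analysis (FMEA). A minimal sketch is given below; the 1–10 scales, the multiplicative form, and the example scores are our illustrative assumptions, as the chapter does not report the exact scale used in the workshop.

```python
# Sketch of the "problem priority number" from Workshop 5, assuming the
# FMEA-style product of occurrence, severity ("meaning"), and detection.
# The 1-10 scales and the example scores are illustrative assumptions.

def problem_priority(occurrence: int, severity: int, detection: int) -> int:
    """All scores on 1-10; for detection, 10 = hardest to detect."""
    for score in (occurrence, severity, detection):
        if not 1 <= score <= 10:
            raise ValueError("scores must lie on a 1-10 scale")
    return occurrence * severity * detection

problems = {
    "information dispersed across systems and sources": (8, 7, 4),
    "information does not reach all intended recipients": (6, 8, 6),
    "information not sufficiently documented/updated": (7, 6, 5),
}
for name, scores in sorted(problems.items(),
                           key=lambda kv: problem_priority(*kv[1]),
                           reverse=True):
    print(f"{problem_priority(*scores):4d}  {name}")
```

Sorting by the product surfaces the problems where frequent occurrence, high impact, and poor detectability coincide.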
4.2. Information Sharing in Case Supply Chain Dyads

As there are two different types of service providers involved in the case supply chain, two types of dyads should be examined. We first look at the engineering office-focal company dyads and then continue with the mechanical maintenance service provider-focal company dyads.

In discussions with the engineering offices providing maintenance-related planning services to the focal company, it was clearly stated that the focal company is their key customer: they have long-term contracts with it, and a certain number of resources is reserved for its use. Information sharing between the focal company and the engineering offices occurs in three phases: before, during, and after the maintenance work. This classification is analogous to the one presented by McFarlane et al. (2008), in which information related to services is divided into three categories: offering information, delivery information, and evaluation information. The emphasis of information sharing is clearly on the phase before the actual maintenance work, as the planned parts or entities have to be ready before the maintenance work can be performed. The most important information to be shared comprises the initial data and specifications for the planning work, drawings, information about the components used, and feedback. The engineering offices have limited access to the maintenance information system and drawing archive of the focal company. However, informal information sharing and the sharing of tacit knowledge play a crucial role before the maintenance work, when the extent of the planning work is specified.

The integration of activities between the focal company and the mechanical maintenance service providers is restricted to short-term contracts. The number and scope of contracts with each service provider during the annual repairs vary every year. Information sharing between the focal company and the mechanical maintenance service providers also takes place in three stages: before, during, and after performing the maintenance work (offering, delivery, and evaluation of the service). Information sharing during the maintenance work is the most challenging and critical phase, as a large number of actors, dispersed across the focal company's production facilities and working in several shifts, should receive timely information as fast as possible. The most critical information to be shared comprises the schedules, the content of the maintenance work, safety issues, information on material locations, and feedback. There are no structured information sharing systems, and information is mainly shared informally. The mechanical maintenance service providers do not have any access to the focal company's information systems and vice versa. Electronic information sharing is thus restricted to the use of e-mail.

4.3. Identifying Development Areas in Information Sharing

Based on the interviews and the classification done in the fifth workshop, the most essential problem areas in information sharing in the supply chain include the following:

• Information is dispersed in different information systems and sources
• Information does not flow fast enough, and there are problems in the timing of information sharing
• Information does not always reach all the intended recipients
• Information is not sufficiently documented/updated
• Information shared is often incomplete, not updated, or not presented in a form that is clear enough for the users of the information
• Information gathered/received is not utilized comprehensively

These problems clearly reflect the six aspects of information sharing used in the framework of this study. Most of the problems were attributed to deficiencies in the procedures for information sharing and to the people who use, mediate, or provide the information. Information sharing is very person-dependent, and a large proportion of the information needed exists only as tacit knowledge. To reduce the number of disruptions caused by human involvement, the degree of electronically shared information should be increased. Information sharing practices between the supply chain actors vary, and the same information is often shared in several different forms. The lack of common procedures for sharing and utilizing information has led to a situation where the procedures have simply evolved during the cooperation and are neither documented nor developed.

4.4. Level of Information Integration in Case Supply Chain Dyads

To roughly estimate the level of information integration, i.e., information sharing, in the case supply chain dyads, the classification by Van der Vaart and Van Donk (2004) was used. Information integration in the dyads between the engineering offices and the focal company settles between integration levels 1 and 2. Much of the relevant information can be shared electronically, which improves the quality of the shared information. However, the coverage of the information systems should be extended to cover some additional information (such as information on failures and breakdowns and data on previously performed repairs for machines and devices) and to enable electronic signatures, for example. Defining the planning specifications still requires personal contacts and necessitates developing efficient procedures for informal information sharing to speed up the information sharing and thus the planning process itself.

The level of information integration in the dyads between the mechanical maintenance service providers and the focal company is at the first level of integration. There are no shared information systems; therefore, the quality and availability of information may suffer, and the variety of information that can be shared is limited. For instance, viewing drawings three-dimensionally may be impossible for the mechanical maintenance service providers. In further developing the integration, specific attention should be paid to increasing the degree of electronically shared information to improve the quality, availability, and presentation of the information. This would also improve the timing of information sharing and shorten the lead-time. Moreover, as with the engineering offices, efficient procedures for informal information sharing should be established and the shared information better utilized.
There are discussions about moving to long-term contracts with the mechanical maintenance service providers, which would facilitate collaborative planning of the maintenance work. This will necessitate more rigorous documentation and storage of the maintenance information to improve its usefulness in planning and decision making.

4.5. Outline for Developing the Information Sharing in Case Supply Chain

An outline for developing information sharing was discussed with the companies. A centralized maintenance information system, to which the service providers would have at least limited access, would address several of the problem areas found in the case supply chain. In particular, entering data and information into the systems should be made as easy as possible, so that all personnel are able to do it even during hectic maintenance work. However, information systems cannot fully replace human-to-human communication in maintenance operations, as much tacit knowledge and many unique problem-solving situations are involved. Thus, careful consideration should be given to how to effectively organize informal, human-to-human information sharing in the maintenance service supply chain. This is especially true of tacit knowledge, which necessitates devising new systematic procedures for sharing it. Above all, common procedures for sharing and analyzing maintenance-related information in the case supply chain should be defined, and all supply chain members should be committed to following them. Achieving this commitment requires that the service providers be involved in establishing the procedures and that long-term contracts with the service providers be made.

4.6. Discussion on the Research Findings

This study contributes to the thus far limited research on business services and information sharing related to service supply chains. The maintenance supply chain from the steel industry presented in this chapter justifies the need for information integration in business services. By paying special attention to information flows in service operations, improvements in supply chain performance can be achieved. This study shows the information integration needs and possibilities of a certain service segment serving a certain type of industry. This is needed, as research related to services has thus far remained at a very general level. Providing a description of information sharing and identifying the development areas related to it was perceived as useful in the case supply chain companies. The problem areas related to information sharing identified in this study are also relevant to other service supply chains. This study explored pure business-to-business services, where no products are involved. The pure service focus is important, and the knowledge gained in this research may be used for comparison between industrial services bundled into a product and pure business services in further research related to service SCM.
One limitation of the study is the generalizability of the results, as the research environment was restricted to only one service industry segment. Continuing research is needed to create models or frameworks that explain the role of information sharing in different contexts of service SCM. The study could be expanded to cover service providers with differing degrees of responsibility for the customer's operations. For instance, the service provider's responsibility can range from project-type contracts to being responsible for the availability of the customer's production equipment or even for the customer's performance. It is also worth noting that some of the problems related to information sharing in maintenance operations derive from deficits in the internal integration between different processes (mainly between production, purchasing, and maintenance) in the focal company. This problem should also be addressed before information sharing in the service supply chain can be taken to the next level.

The majority of research on supply chain integration and information sharing concentrates on dyads between the focal company and suppliers, customers, or retailers. Only a minority explores both downstream and upstream integration (see Van der Vaart and Van Donk, 2008). This is also a limitation to be considered in this study. The scope of the study could be extended to involve the customers of the steel manufacturer. Information shared between the focal company and its customers also relates to the maintenance operations and the maintenance supply chain. Some generic information categories that affect both the customers' and the maintenance service providers' operations include:

• Schedules of Repairs: Repairs cause a shutdown of the production line, and they are usually performed during periods when there are fewer orders; e.g., the annual repairs typically take place during the summer holiday season.
• Production Quantities: The order quantities (and thus, production quantities) affect the amount of wear on some parts of the production line and thus the maintenance interval.
• Product Quality and Other Attributes Related to the Product.

Thus, the inclusion of the customer link could bring a new perspective to the examination of service supply chain information sharing. The nature of the maintenance process also offers interesting aspects to be explored. A question remains: what is the ideal level of information integration in a support process like maintenance? Moreover, the problem-solving aspect of maintenance necessitates utilizing tacit knowledge and informal information sharing, and there are a number of organizational and management-related aspects in the background affecting the smoothness of this kind of information and knowledge exchange.

5. Conclusion
More attention should be paid to developing information and knowledge sharing in service supply chains, as information is critical in offering, delivering, and evaluating services. Our chapter suggests six aspects to be considered when examining information integration in the service supply chain: processes, information characteristics, information technology, information sharing practices, collaboration, and time aspects. We argue that analyzing the level of information integration in supply chain dyads is a useful tool for identifying development areas and for adopting a more holistic and structured approach to information sharing. A more structured approach to information sharing, involving agreed-on procedures and systems and a commitment to utilizing them, will make it easier for the supply chain to control, manage, and further develop the dynamic information-sharing process related to service operations.

Additional empirical research and theories are needed to develop the service SCM genre further. Research concerning the contextual factors affecting service supply chain information sharing should be carried out to create a model that matches the information sharing needs, processes, systems, and practices in different types of service supply chain dyads or service contracts. Moreover, research concerning information technologies that specifically addresses the challenges in information sharing caused by the special features of services is required. Finally, the links between service supply chain performance and the different elements of information sharing should be further explored.

Acknowledgments

We would like to express our gratitude to the Finnish Funding Agency for Technology and Innovation for project funding and to thank all the companies in the research project for their participation and successful cooperation.

References
Åhlström, P and F Nordin (2006). Problems of establishing service supply chain relationships: Evidence from a high-tech manufacturing company. Journal of Purchasing & Supply Management, 12(2), 75–89.
Allen, S and A Chandrashekar (2000). Outsourcing services: The contract is just the beginning. Business Horizons, 43(2), 25–34.
Alter, S (1999). Information Systems: A Management Perspective, 3rd Ed. Reading: Addison Wesley.
Auramo, J, J Kauremaa and K Tanskanen (2005). Benefits of IT in supply chain management: An explorative study of progressive companies. International Journal of Physical Distribution & Logistics Management, 35(2), 82–100.
Bagchi, P and T Skjoett-Larsen (2003). Integration of information technology and organizations in a supply chain. The International Journal of Logistics Management, 14(1), 89–108.
Bailey, K and M Francis (2008). Managing information flows for improved value chain performance. International Journal of Production Economics, 111(1), 2–12.
Baltacioglu, T, E Ada, M Kaplan, O Yurt and Y Kaplan (2007). A new framework for service supply chains. The Service Industries Journal, 27(2), 105–124.
Bhatt, GD (2000). An empirical examination of the effects of information systems integration on business process improvement. International Journal of Operations & Production Management, 20(11), 1331–1359.
Chetty, S (1996). The case study method for research in small- and medium-sized firms. International Small Business Journal, 15(1), 73–85.
Closs, DJ and K Savitskie (2003). Internal and external logistics information technology integration. The International Journal of Logistics Management, 14(1), 63–76.
Coetzee, J (1997). Maintenance. Pretoria: Maintenance Publishers.
Coetzee, J (1999). A holistic approach to the maintenance problem. Journal of Quality in Maintenance Engineering, 5(3), 276–280.
Cooper, M, D Lambert and J Pagh (1997). Supply chain management: More than a new name for logistics. The International Journal of Logistics Management, 8(1), 1–14.
Eisenhardt, K (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550.
Ellram, L, W Tate and C Billington (2004). Understanding and managing the services supply chain. The Journal of Supply Chain Management, 40(4), 17–32.
Ellram, L, W Tate and C Billington (2007). Services supply management: The next frontier for improved organizational performance. California Management Review, 49(4), 44–66.
Fitzsimmons, J, J Noh and E Thies (1998). Purchasing business services. Journal of Business & Industrial Marketing, 13(4/5), 370–380.
Frohlich, M and R Westbrook (2001). Arcs of integration: An international study of supply chain strategies. Journal of Operations Management, 19(2), 185–200.
Galbraith, J (1974). Organization design: An information processing view. Interfaces, 4(3), 28–36.
Garg, A and S Deshmukh (2006). Maintenance management: Literature review and direction. Journal of Quality in Maintenance Engineering, 12(3), 205–238.
Gustin, C, P Daugherty and T Stank (1995). The effects of information availability on logistics integration. Journal of Business Logistics, 16(1), 1–21.
Homburg, C and B Garbe (1999). Towards an improved understanding of industrial services: Quality dimensions and their impact on buyer–seller relationships. Journal of Business-to-Business Marketing, 6(2), 39–71.
Komonen, K (2002). A cost model of industrial maintenance for profitability analysis and benchmarking. International Journal of Production Economics, 79(1), 15–31.
Lee, H and S Whang (2000). Information sharing in a supply chain. International Journal of Technology Management, 20(3/4), 373–387.
Li, S and B Lin (2006). Accessing information sharing and information quality in supply chain management. Decision Support Systems, 42(3), 1641–1656.
Li, S, B Ragu-Nathan, TS Ragu-Nathan and S Subba Rao (2006). The impact of supply chain management practices on competitive advantages and organizational performance. Omega (The International Journal of Management Science), 34(2), 107–124.
McFarlane, D, R Cuthbert, P Pennesi and P Johnson (2008). Information requirements in service delivery. In Proceedings of the 15th International Annual EurOMA Conference, University of Groningen, the Netherlands, June 15–18, 2008.
Miles, M and M Huberman (1994). Qualitative Data Analysis. Thousand Oaks, CA: Sage Publications.
Min, S and JT Mentzer (2004). Developing and measuring supply chain concepts. Journal of Business Logistics, 25(1), 63–99.
Minkus, A and A Nobs (2006). Improving the use and exchange of information in industrial service organisations. In Exploiting the Knowledge Economy: Issues, Applications and Case Studies, Cunningham, P and M Cunningham (eds.), 1173–1180. Amsterdam: IOS Press.
Mirani, R and A Lederer (1998). An instrument for assessing the organizational benefits of IS projects. Decision Sciences, 29(4), 803–838.
Moberg, C, B Cutler, A Gross and T Speh (2002). Identifying antecedents of information exchange within supply chains. International Journal of Physical Distribution & Logistics Management, 32(9), 755–770.
Monczka, RM, KJ Petersen, RB Handfield and GL Ragatz (1998). Success factors in strategic supplier alliances: The buying company perspectives. Decision Sciences, 29(3), 553–577.
Mukhopadhyay, T, S Kekre and S Kalathur (1995). Business value of information technology: A study of electronic data interchange. MIS Quarterly, 19(2), 137–156.
Naaranoja, M and A Hartman (2006). Improving the conditions for knowledge sharing within construction firms. In Joint International Conference on Computing and Decision Making in Civil and Building Engineering, June 14–16, Montreal, Canada, 2006.
Nie, W and DL Kellogg (1999). How professors of operations management view service operations? Production and Operations Management, 8(3), 339–355.
Nonaka, I and H Takeuchi (1995). The Knowledge Creating Company. Oxford: Oxford University Press.
Persona, A, A Regattieri, H Pham and D Battini (2007). Remote control and maintenance outsourcing networks and its applications in supply chain management. Journal of Operations Management, 25(6), 1275–1291.
Pintelon, L, N Du Preez and F Van Puyvelde (1999). Information technology: Opportunities for maintenance management. Journal of Quality in Maintenance Engineering, 5(1), 9–24.
Premkumar, G, K Ramamurthy and CS Saunders (2005). Information processing view of organizations: An exploratory examination of fit in the context of interorganizational relationships. Journal of Management Information Systems, 22(1), 257–294.
Rautaruukki Annual Report 2007 (2008). Helsinki: Rautaruukki Corporation.
Salin, K (2007). Kunnossapidolla tehot irti investoinneista [Getting the most out of investments through maintenance]. Tekniikka & Talous, 25.10.2007, 22–23.
Salmela, E and A Lukka (2005). Value added logistics in supply and demand chain SMILE, Part 2: E-business in a service business case: A maintenance and operations network in forest industry. Research Report 163, Department of Industrial Engineering and Management, Lappeenranta University of Technology, Lappeenranta.
Sandhu, M and A Gunasekaran (2004). Business process development in project-based industry — A case study. Business Process Management Journal, 10(6), 673–690.
Sengupta, K, D Heiser and L Cook (2006). Manufacturing and service supply chain performance: A comparative analysis. The Journal of Supply Chain Management, 42(4), 4–15.
Skaates, M and H Tikkanen (2003). International project marketing: An introduction to the INPM approach. International Journal of Project Management, 21(7), 503–510.
Sohal, A, S Moss and L Ng (2001). Comparing IT success in manufacturing and service industries. International Journal of Operations and Production Management, 21(1/2), 30–45.
Spekman, R, J Kamauff and N Myhr (1998). An empirical investigation into supply chain management: A perspective on partnerships. Supply Chain Management, 3(2), 53–67.
Spens, K (2001). Managing Critical Resources Through Supply Network Management: A Study of the Finnish Blood Supply Network. Helsinki: Swedish School of Economics and Business Administration.
Stalk, G and T Hout (1990). Competing Against Time: How Time-Based Competition is Reshaping Global Markets. London: Free Press.
Söderholm, P, M Holmgren and B Klefsjö (2007). A process view of maintenance and its stakeholders. Journal of Quality in Maintenance Engineering, 13(1), 19–32.
Tan, KC, SB Lyman and JD Wisner (2002). Supply chain management: A strategic perspective. International Journal of Operations and Production Management, 22(6), 614–631.
Tsang, A (2002). Strategic dimensions of maintenance management. Journal of Quality in Maintenance Engineering, 8(1), 7–39.
Tsoukas, H (1989). The validity of idiographic research explanations. Academy of Management Review, 14(4), 551–561.
Van der Vaart, T and D Van Donk (2004). Buyer focus: Evaluation of a new concept for supply chain integration. International Journal of Production Economics, 92(1), 21–30.
Van der Vaart, T and D Van Donk (2008). A critical review on survey-based research in supply chain integration. International Journal of Production Economics, 111(1), 42–55.
Van Donk, D and T Van der Vaart (2005). A critical discussion on the theoretical and methodological advancements in supply chain integration research. In Research Methodologies in Supply Chain Management, Kotzab, H, S Seuring, M Müller and G Reiner (eds.), pp. 32–46. Heidelberg: Physica-Verlag.
Visser, JK (1998). Modelling maintenance performance: A practical approach. In IMA Conference, 1–13, Edinburgh.
Voss, C, N Tsikriktsis and M Frohlich (2002). Case research in operations management. International Journal of Operations and Production Management, 22(2), 195–219.
Widén-Wulff, G and E Davenport (2005). Information sharing and timing: Findings from two Finnish organizations. In Context: Nature, Impact and Role: 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005, Glasgow, UK, June 4–8, 2005, Proceedings, LNCS 3507, Crestani, F and I Ruthven (eds.), 32–46. Berlin/Heidelberg: Springer.
Woodside, A and E Wilson (2003). Case study research methods for theory building. Journal of Business and Industrial Marketing, 18(6/7), 493–508.
Yin, R (2003). Case Study Research: Design and Methods, 3rd Ed. Thousand Oaks: Sage Publications.
Zhou, H and WC Benton Jr. (2007). Supply chain practice and information sharing. Journal of Operations Management, 25(6), 1348–1365.
Biographical Notes

Sari Uusipaavalniemi is a researcher in the Department of Industrial Engineering and Management at the University of Oulu. She has a Master's degree in Engineering (University of Oulu, 2001) and is now finalizing her doctoral studies. Her research interests focus on supply chain management, in particular supply chain information sharing, supply chain performance, and service supply chains.

Jari Juga earned his doctoral degree at the Turku School of Economics, Finland, in 1996. He then worked as Associate Professor at the Norwegian School of Management/BI and as Senior Researcher at the Technical Research Centre of Finland. He was appointed Professor of Logistics at the University of Oulu in 2000.

Dr. Maqsood Sandhu worked as Associate Professor, Assistant Professor, Lecturer, and Researcher at the University of Oulu, UAE University, the University of Vaasa, and Hanken from 1999 to 2008. Prior to joining academia, Sandhu worked for about five years at Wärtsilä Finland Oy, from 1996 to 2001. He is the author or co-author of about 15 international journal articles and book chapters. Currently, his research areas include project management, supply chain management, and knowledge management.
Chapter 31
RFID Applications in the Supply Chain: An Evaluation Framework
VALERIO ELIA∗, MARIA GRAZIA GNONI† and ALESSANDRA ROLLO‡
Department of Engineering for Innovation, Università del Salento, Campus University Ecotekne, Via per Monteroni, 73100 Lecce, Italy
∗[email protected]
†[email protected]
‡[email protected]
Mass customization combines the contrasting elements of mass production and customization. This provides industrial organizations with a high level of flexibility and effectiveness, enabling them to gain a competitive advantage. In this field, control technologies could play a key role in supplying automatic access to information. One of the emerging technologies is radio frequency identification (RFID). RFID now represents one of the technologies that could transform processes across supply chains (SC). The major benefits of RFID applications in SC management are oriented toward sharing information with partners and tracing objects. While an increasing interest in RFID technology can be found in the scientific literature, few studies focus on the performance of RFID technology applications in SC management. First, this chapter proposes a classification of the papers found in scientific databases from 2000 to 2008. Second, it proposes a general framework based on the Supply Chain Operations Reference (SCOR) model, which aims to define a standardized tool for evaluating the performance of RFID applications in SCs.
Keywords: Supply chain management; RFID applications; SCOR model.
1. Introduction

In the global market, supply chain (SC) management is becoming ever more complex; warehousing and distribution activities require tools which support dynamic and fast changes in order to maintain a competitive advantage. One of the main problems in SC management is achieving accurate monitoring and measurement of resources in the SC (Chow et al., 2006). Radio frequency identification (RFID) represents an effective tool for identifying and tracing objects through an SC. An RFID system is based on wireless sensor technology, which makes it possible to scan objects quickly and to manage large volumes of multiple data sets.
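To make the tracing capability concrete, the following sketch models the kind of read-event data an RFID installation produces and how a tagged object's path can be reconstructed from it. The field names, the EPC string, and the reader/location labels are simplified illustrative assumptions, not the data model of any particular standard.

```python
# Minimal sketch of the read events an RFID installation produces and how they
# support tracing an object through an SC. Field names, the EPC string, and
# the reader/location labels are simplified illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ReadEvent:
    epc: str           # tag identity, e.g., an Electronic Product Code
    reader_id: str     # fixed reader at a dock door, gate, or shelf
    location: str
    timestamp: datetime

def trace(events, epc):
    """Reconstruct the time-ordered path of one tagged object."""
    hits = sorted((e for e in events if e.epc == epc), key=lambda e: e.timestamp)
    return [f"{e.timestamp:%Y-%m-%d %H:%M}  {e.location}" for e in hits]

events = [
    ReadEvent("epc:0614141.107346.2018", "R-07", "DC inbound", datetime(2008, 3, 3, 14, 5)),
    ReadEvent("epc:0614141.107346.2018", "R-01", "supplier outbound", datetime(2008, 3, 2, 8, 10)),
    ReadEvent("epc:0614141.107346.2018", "R-12", "store backroom", datetime(2008, 3, 4, 9, 40)),
]
print("\n".join(trace(events, "epc:0614141.107346.2018")))
```

Because every read is automatic, the same event stream can feed inventory, planning, and replenishment processes without manual scanning.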
RFID technology could potentially improve on the performance of traditional tools — such as barcodes — and its use is rapidly spreading (Singh et al., 2008). The major benefits of RFID applications in SC management lie in being able to share information with partners; in allowing collaboration on inventory management, planning, forecasting, and replenishment (Vijayaraman, 2006); and in helping to reduce inventory, transportation, and warehouse costs, as well as stock-out problems. Due to the rapid growth in RFID applications, the literature about this topic is on the increase.

The aim of this chapter is to propose an analysis of the current state of application of RFID in SC management. The chapter also attempts to put forward a general framework based on key performance indicators to assess the impact of RFID technology in the SC of physical goods. The framework proposed is based on the well-known supply chain operations reference (SCOR) model proposed by the Supply Chain Council. The chapter is organized as follows: in Section 2, a bibliometric analysis of RFID applications in SC management is proposed, with a classification set out on three levels aimed at identifying the main topics of each paper. Following the literature review and classification, a hierarchical framework is put forward as a standardized method of evaluating how RFID technology could affect global SC performance. In Section 3, an application of the proposed framework is described: a test has been conducted based on results provided by several papers found in the scientific literature.

2. RFID Applications in SC Management: A Critical Review

An analysis of the scientific literature from 2000 to 2008 has been carried out, aimed at assessing the growth of interest in the field of RFID applications. More specifically, the main aim is to evaluate the state of the art regarding RFID applications in SC management (papers regarding RFID applications in different fields — such as libraries, government, etc. — have not been included in this study). The analysis looked at articles from four publishers: Elsevier, Emerald, SpringerLink, and Wiley Interscience. The keyword "RFID" was used to search the databases analyzed. More than 100 papers were found in these databases; 108 papers were finally analyzed, as several papers did not concern RFID applications in SC management (Fig. 1).

The papers have been classified on three different levels. The first level regards the paper type, identified by examining the main purpose defined by its authors; five paper typologies have been introduced, which are described in detail below (Fig. 2):

• Technological: Papers which are mainly focused on the technological aspects of RFID applications in SCs. Papers focused on aspects affecting the development of RFID technology itself were not included.
• Conceptual: Papers which mainly describe analytical models for integrating RFID technologies in SC management.
Figure 1. Total papers analyzed divided by publishers.
Figure 2. Distribution of papers analyzed according to the first level of classification.
• Architectural: Papers which propose an integrated framework for RFID applications.
• Practical: Papers which propose case and field studies, simulations, etc.
• General Issue: Papers which propose analyses of the main issues regarding RFID technology and its applications in SCs.

The second and third levels of classification concern the main topic and the secondary topic with which a paper deals. More specifically, for the second level, the following topics were proposed (Fig. 3):

• Asset Tracking: This topic concerns models and tools based on RFID applications for identifying and tracing different items in an SC.
• Operations Management: This topic concerns papers which propose applications of RFID in the field of operations aiming to improve efficiency.
• SC Design: This topic concerns papers which propose an integration of RFID in the whole SC.
Figure 3. Framework of the second and the third level of classification.
• Strategic Context: This topic concerns papers that analyze RFID applications in specific contexts — e.g., the Chinese or the US market — and/or propose strategic analyses of RFID applications in SCs.

Definitions introduced for the third level are:

• Privacy: Papers which analyze the privacy issues raised in SC management by RFID applications.
• Inventory, Packaging, and Manufacturing: These three topics concern the impact of RFID on inventory management, packaging, and manufacturing activity.
• Logistics, Food, Retail, and Construction: These topics concern papers which propose RFID applications in these specific contexts.
• Business Value and Innovation: These two topics concern the analysis of the performance increase versus the cost potentially due to RFID applications in several contexts such as organization, production, etc.
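Taken together, the three levels give each paper a compact record. The sketch below illustrates one way to encode and tally such records; the example entries are invented stand-ins, not the chapter's actual 108-paper data set.

```python
# Sketch of the three-level classification as one record per paper. The entries
# below are invented stand-ins, not the chapter's actual data set.

from collections import Counter

PAPER_TYPES = {"technological", "conceptual", "architectural", "practical",
               "general issue"}                                   # first level
SECOND_LEVEL = {"asset tracking", "operations management",
                "SC design", "strategic context"}                 # second level

papers = [
    # (first level: paper type, second level: main topic, third level: secondary topic)
    ("practical", "SC design", "food"),
    ("conceptual", "operations management", "inventory"),
    ("general issue", "strategic context", "privacy"),
    ("architectural", "SC design", "manufacturing"),
]

for ptype, main, _ in papers:
    assert ptype in PAPER_TYPES and main in SECOND_LEVEL

# Tallies of this kind underlie the distributions shown in Figs. 2 and 4.
print(Counter(main for _, main, _ in papers).most_common())
```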
Firstly, the papers analyzed were classified according to the second-level topic structure proposed previously; subsequently, the papers were analyzed according to the third-level topic structure. The distribution of the analyzed papers according to the second- and third-level classification is reported in Fig. 4.

Figure 4. Distribution of papers analyzed according to the second and the third level of classification.

The largest group obtained is the SC design group, in which 46 papers have been included: this is due to the increasing interest in evaluating RFID applications in the whole SC rather than just at a single level. Detailed results obtained from the second/first-level classification are reported in Table 1, where the first level represents the paper type. This interest is also confirmed by the result obtained for the strategic context/general issue group, which is the largest group in the second-level classification. Table 2 summarizes the classification by third-level topic against second-level topic. In this last classification, the largest group is the food/SC design group; the food sector is an industrial sector where RFID technology could be applied effectively, especially for food traceability. The high number of RFID applications in the construction sector is also noteworthy; there, RFID could represent an effective tool for object and resource traceability.
Table 1. Paper classification according to the second level (rows: first-level paper type; columns: second-level topic).
• Technological. Operations management: [Mourtzis], [Higuera], [Vrba]. Asset tracking: [Patil], [Jedermann], [Abad], [Penttilä], [Zampolli], [He], [Jeffery], [Shih], [Kärkkäinen]. SC design: –. Strategic context: [Coronado Mondragon].
• Conceptual. Operations management: [Rekik], [Yoo], [Uçkun], [Szmerekovsky], [de Kok], [Kim (a)], [Saygin], [Doerr], [Chande]. Asset tracking: [Yun], [Song]. SC design: [Rekik], [Li (a)]. Strategic context: –.
• Architectural. Operations management: [Huang (a)]. Asset tracking: [Chen], [Keskilammi], [Römer], [Brignone]. SC design: [Ngai (a)], [Lee], [Zhou], [Wang (a)], [Qiu], [Chow (a)], [Huang (b)], [Wang], [Regattieri], [Chow (b)], [Kelepouris], [Spieß], [Folinas]. Strategic context: [Kahhat], [Tajima].
• Practical. Operations management: [Kim (b)], [Goodrum]. Asset tracking: [Bohn]. SC design: [Thiesse], [Wang (b)], [Bottani], [Song], [Lee], [Clarke], [Peyret], [Singh], [Ngai], [Karkkainen], [Connolly], [Hou], [Sommerville], [Mousavi]. Strategic context: –.
• General Issue. Operations management: –. Asset tracking: [Konomi], [Roussos]. SC design: [Ngai (b)], [Ergen], [Chao], [McMeekin], [Kumar], [Bendavid], [Xiao], [Barut], [Smith], [Ranky], [Sellitto], [Vijayaraman], [Spekman], [Palsson], [Jones], [Prater], [Twist]. Strategic context: [Fosso Wamba], [Kim (c)], [O'Leary], [Parlikad], [Domdouzis], [Curtin], [Wu], [Lockton], [Luckett], [Knospe], [Scott Erickson], [Glasser], [Soppera], [Peslak], [Wright], [Warren], [Bendoly], [Bean], [Thompson], [Yam], [Li (b)], [Sheffi], [Angeles], [Lai], [Wyld], [Kelly].
Table 2. Paper classification according to the third level (rows: third-level topic; columns: second-level topic).
• Business value. Asset tracking: [Yun], [Patil], [Penttilä], [Konomi], [He], [Jeffery], [Shih]. Operations management: –. SC design: [Chao], [Xiao], [Barut], [Smith], [Ranky], [Spekman], [Palsson], [Twist]. Strategic context: [Fosso Wamba], [O'Leary], [Parlikad], [Curtin], [Wu], [Luckett], [Peslak], [Bendoly], [Li (b)], [Lai].
• Construction. Asset tracking: [Song]. Operations management: [Goodrum]. SC design: [Wang (a), (b)], [Song], [Lee], [Ergen], [Sommerville]. Strategic context: –.
• Food. Asset tracking: [Jedermann], [Abad]. Operations management: [Chande]. SC design: [Li (a)], [Ngai (a), (b)], [Regattieri], [Bottani], [McMeekin], [Kumar], [Kelepouris], [Folinas], [Prater], [Mousavi]. Strategic context: [Thompson].
• Innovation. Asset tracking: –. Operations management: –. SC design: [Karkkainen]. Strategic context: [Tajima], [Domdouzis], [Scott Erickson], [Wright], [Warren], [Bean], [Sheffi], [Wyld].
• Inventory. Asset tracking: [Brignone]. Operations management: [Yoo], [Uçkun], [Szmerekovsky], [de Kok], [Kim (a)], [Saygin], [Doerr]. SC design: –. Strategic context: –.
• Logistics. Asset tracking: [Chen], [Keskilammi], [Römer], [Penttilä], [Zampolli]. Operations management: [Kim (b)]. SC design: [Chow (a)], [Chow (b)], [Clarke], [Ngai], [Vijayaraman]. Strategic context: [Kahhat].
• Manufacturing. Asset tracking: –. Operations management: [Mourtzis], [Higuera], [Vrba], [Huang (a)]. SC design: [Zhou], [Qiu], [Huang (b)], [Thiesse], [Peyret], [Spieß], [Hou]. Strategic context: [Coronado Mondragon].
• Packaging. Asset tracking: –. Operations management: –. SC design: [Singh], [Connolly]. Strategic context: [Yam].
• Privacy. Asset tracking: –. Operations management: –. SC design: –. Strategic context: [Lockton], [Knospe], [Glasser], [Soppera], [Kelly].
• Retail. Asset tracking: [Roussos]. Operations management: [Rekik]. SC design: [Rekik], [Lee], [Bendavid], [Karkkainen], [Sellitto], [Jones]. Strategic context: [Kim (c)], [Angeles].

3. The Proposed General Framework for Analyzing RFID Applications in an SC

The general framework proposed in this chapter aims to evaluate how RFID applications could affect the overall performance of a supply chain. The framework is based on the well-known SCOR model, first proposed by the Supply Chain Council in 1996. This approach is characterized by a hierarchical structure, which permits a standardized method of evaluating strategic and operational issues in an SC. The model is described next.

3.1. The Supply-Chain Operations Reference Model: General Issue

The SCOR model was defined by the Supply Chain Council (SCC) as a standardized process reference model of an SC, and it has been continuously enhanced.
The SCOR model integrates the well-known concepts of business process reengineering, benchmarking, and process measurement into a cross-functional framework. It captures the "as-is" state of a process and then derives the desired "to-be" future state (Huang et al., 2005). SCOR has been successfully used as a basis for SC improvement in global projects as well as in site-specific projects (Poluha, 2007; Supply Chain Council, 2007). Therefore, the SCOR model has been chosen here for evaluating the potential strategic performance of RFID applications in SC management.

The SCOR is a hierarchical model with several levels. Figure 5 shows the general structure of the SCOR and all process levels.
Figure 5. The SCOR process levels (Supply Chain Council, 2007).
The model is based on five fundamental base processes: the so-called plan, source, make, deliver, and return processes. These represent Level 1 processes, because complex SCs are made up of multiple combinations of these basic processes. Processes are further decomposed into process categories (Level 2), depending on the type of environment in which the SCOR model is applied. Process categories contain several process elements, which represent the third level in the SCOR model. At Level 3, elements contain performance attributes, metrics, and best practices for the selected element. Further levels of decomposition are generally neglected in the SCOR because they are closely linked to the type of implementation characterizing each business; they therefore mainly concern day-to-day management. The SCOR model's Level 1 and 2 metrics are useful for evaluating the performance of the whole SC, while Level 3 metrics support ongoing diagnosis (Huang et al., 2005; Loebbecke, 2005; Poluha, 2007; Supply Chain Council, 2007). SCOR also defines five generic performance attributes — reliability, responsiveness, flexibility, cost, and assets — and three levels of measurement, one for each of the three levels of the model; these can supply a focused operational analysis of the SC. These attributes will be applied in evaluating RFID applications in an SC.

3.1.1. The SCOR levels analysis

As previously mentioned, Level 1 is the top level, which deals with the process types plan, source, make, deliver, and return. The Plan process consists of processes that balance aggregated demand and supply to develop a course of action which best meets business goals. Plan processes deal with demand/supply planning, which includes assessing supply resources, demand and inventory management, and rough-cut capacity planning for all products and all channels in the SC. The Source process covers the activities involved in procuring goods against planned or actual demand. Sourcing/material acquisition includes the jobs of obtaining, receiving, inspecting, holding, and issuing material. The Make process includes all functions that transform goods to a finished state, aiming to meet planned or actual demand. It represents the core process, where the actual production execution takes place. The Deliver process involves activities that provide finished goods and services to meet planned or actual demand. It typically includes order, transportation, and distribution management. The Return process regards managing the reverse flow of material and information related to defective, surplus, and MRO (maintenance, repair, and operating) products (Huang et al., 2005).

Level 1 measures are generally applied to evaluate SC performance as a whole and to attribute it to the overall effectiveness of the SC. The SCOR model's Level 1 metrics characterize performance from customer- and internal-facing perspectives. At Level 1, therefore, the basis of competition is defined and broad strategic guidelines are provided to meet business targets; the SC is modeled taking into consideration asset, product volume and mix, and technology requirements and constraints (Huang et al., 2005; Loebbecke, 2005; Poluha, 2007; Supply Chain Council, 2007). Table 3 shows the five performance attributes and the metrics proposed at Level 1 by the SCOR.
Table 3. SCOR performance attributes and Level 1 metrics.

SC delivery reliability: Effectiveness in product supply, to the correct customer in terms of place, time, condition, quantity, and documentation. Level 1 metrics: Delivery performance; Fill rates; Perfect order fulfillment.
SC responsiveness: The velocity at which an SC provides products to the customer. Level 1 metric: Order fulfillment lead times.
SC flexibility: The agility of an SC in responding to marketplace changes. Level 1 metrics: SC response time; Production flexibility.
SC costs: Operating costs in the SC. Level 1 metrics: Cost of goods sold; Total SC management costs; Value-added productivity; Warranty/returns processing costs.
SC asset management efficiency: The effectiveness of an organization in managing all assets. Level 1 metrics: Cash-to-cash cycle time; Inventory days of supply; Asset turns.

Source: Supply Chain Council (2007).
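To make these Level 1 metrics concrete, here is a minimal sketch, in Python, computing fill rate, perfect order fulfillment, and average order fulfillment lead time over a hypothetical list of order records; the field names and the exact rule for a "perfect" order are illustrative assumptions, not part of the SCOR text.

```python
# Minimal sketch (hypothetical data): three SCOR Level 1 metrics
# from Table 3 computed over a list of customer order records.
from datetime import datetime

orders = [  # hypothetical data: one dict per customer order
    {"ordered": datetime(2008, 1, 2), "shipped": datetime(2008, 1, 5),
     "on_time": True, "complete": True, "damage_free": True},
    {"ordered": datetime(2008, 1, 3), "shipped": datetime(2008, 1, 9),
     "on_time": False, "complete": True, "damage_free": True},
]

fill_rate = sum(o["complete"] for o in orders) / len(orders)
# Assumed rule: a "perfect" order is on time, complete, and damage free.
perfect_order_fulfillment = sum(
    o["on_time"] and o["complete"] and o["damage_free"] for o in orders
) / len(orders)
avg_lead_time_days = sum(
    (o["shipped"] - o["ordered"]).days for o in orders
) / len(orders)

print(fill_rate, perfect_order_fulfillment, avg_lead_time_days)
```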
Level 2 details the different categories within the Level 1 processes. At this level, processes are configured in line with SC strategy. The SCOR model proposes three process types at this level:
• Planning: This category refers to balancing aggregated demand and supply over a certain planning horizon. It maps onto the plan process defined at the previous level.
• Execution: This category includes processes that are triggered by planned or actual demand and that consequently may change product state. They include dispatching and sequencing, transformation of materials and services, and product movement. This type therefore incorporates the main source, make, deliver, and return processes.
• Enable (or Infrastructure): This category refers to defining, maintaining, and monitoring information or relationships; the two previous categories rely closely on this type.
The SCOR model thus provides, at Level 2, a tool kit of process categories with which a generic SC configuration can be represented.
Figure 6. The SCOR infrastructure (Supply Chain Council, 2007).
The main purpose is to redesign the SC configured at Level 1 in order to analyze its expected performances. Furthermore, at Level 2, market constraints, product constraints, and company constraints are considered in configuring the inter- and intra-company process categories (Huang et al., 2005; Loebbecke, 2005; Poluha, 2007; Supply Chain Council, 2007). Figure 6 shows the infrastructure of the SCOR model, taking only the first two levels into consideration. Level 3 makes it possible to define in detail the processes identified, as well as the performance metrics and best practices evaluated for each SC activity. Inter- and intra-company process elements are defined, and performance levels and practices are defined for these process elements. Specific tasks performed at this level include: developing process models that support the strategic objectives and work within the new SC configuration developed at Level 2, setting process metrics and performance targets, establishing business practices at the operating level, building system requirements that support the SC configuration, processes, and practices, and finally selecting appropriate systems. At Level 3, the inputs, outputs, and basic logic flow of process elements are captured (Huang et al., 2005; Loebbecke, 2005; Poluha, 2007; Supply Chain Council, 2007). In this chapter, the analysis of the performance of RFID technology applications is conducted according to Level 3 metrics, with the purpose of evaluating the effects of RFID on SC management at an operational level.
3.2. The Application of the Framework Proposed
Following the literature analysis of Section 2, a more detailed analysis is proposed in this chapter. A sample of the academic studies reviewed — i.e., 14
papers — is then analyzed according to the SCOR model previously described. This sample has been selected from the total number of papers analyzed in Section 2 on the basis of the results proposed in each paper. The aim is to verify how the SCOR model can help highlight the benefits and performance gains due to the introduction of RFID technology in an SC. Key performance indicators (KPIs) concerning process costs and performances have been mapped either to the SCOR main processes (plan, source, make, deliver, and return) or to the five performance attributes (reliability, responsiveness, flexibility, costs, and assets) introduced previously. The proposed framework is based on 28 indicators, selected from a set of 55 KPIs proposed by Poluha (2007). The selection was driven by the results of the literature review developed in Section 2: an initial screening of KPIs was carried out considering the main applications of RFID in the SC that emerged from that analysis. The final selection of indicators, classified by main process, is reported in Table 4.
3.3. Results and Discussion
The framework proposed — based on 28 indicators — has been tested on the sample of scientific papers evaluated previously. Figure 7 reports the number of occurrences of each KPI grouped by main process. The results highlight that a large share of the academic studies (about 86%) focus on RFID applications for the Deliver activities: these studies mainly address RFID for increasing the effectiveness of inventory management, in both the warehouse and the retail sector. Analysis of the sample — i.e., 14 papers — has shown that the most commonly recurring KPIs are "Average order-to-shipment lead time" and "Perfect order rate" of the Deliver: Sell process, which are applied to evaluate the impact of RFID in the SC by Jeffery et al. (2008) and Shih et al. (2005), respectively, and "Cycle count accuracy percentage" and "Warehousing and/or inventory management cost per FTE" of the Deliver: Store process, which are applied in six papers (as shown in Fig. 8). The KPI evaluation highlights that RFID can be used to improve performance by redesigning work processes to eliminate steps, causes of rework and errors, and wasted time. With RFID, it is possible to speed activities up by automating manual tasks, implementing appropriate procedures for the receipt, inspection, put-away, and picking of received goods, and by addressing excess labor costs. Table 5 reports the details of the KPIs identified for each paper analyzed. The papers characterized by the highest number of indicators evaluated for assessing the impact of RFID technology are Kim et al. (2008c) and Chow et al. (2006b). The other papers in the sample are characterized by an average of five KPIs, while fewer KPIs have been evaluated for the two papers Shih et al. (2005) and Vrba et al. (2008).
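As a minimal sketch of the tallying behind Fig. 7, the snippet below counts KPI occurrences grouped by SCOR main process, using the notation of Table 4; the paper-to-KPI mapping shown is a hypothetical illustration, not the chapter's actual data.

```python
# Minimal sketch of the occurrence count behind Fig. 7, using a
# hypothetical paper -> KPI mapping (illustrative, not the real data).
from collections import Counter

paper_kpis = {  # hypothetical assignments
    "Jeffery et al. (2008)": ["Dr7", "Ds7"],
    "Shih et al. (2005)": ["Dr4"],
    "Chow et al. (2006b)": ["Ds7", "Ds10", "Ds11", "Dt3"],
}

# Map a KPI notation (e.g., "Ds7") to its SCOR main process.
PROCESS = {"S": "Source", "M": "Make", "Ds": "Deliver: store",
           "Dt": "Deliver: transport", "Dr": "Deliver: sell"}

def main_process(kpi: str) -> str:
    prefix = kpi[:2] if kpi[:2] in PROCESS else kpi[:1]
    return PROCESS[prefix]

counts = Counter(main_process(k)
                 for kpis in paper_kpis.values() for k in kpis)
print(counts)  # occurrences of KPIs grouped by main process
```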
Table 4. SCOR performance measures.

Source
S1. Average purchase requisition to delivery cycle time: average processing time from the creation of a purchase order to goods delivery to the customer.

Make
M1. Average machine availability rate: does not consider breakdowns, material shortages, and turning time.
M2. Average manufacturing cycle time: average hours from beginning to end of manufacturing.
M3. Average Master Production Schedule (MPS) plant delivery performance (work orders): percentage of work orders delivered on time.
M4. Average throughput per FTE∗: average value of finished goods.

Deliver: sell
Dr1. Order management cost as a percentage of revenue: total order management costs/total organizational revenue.
Dr2. Customer retention rate: percentage of customers retained.
Dr3. Customer disputes: percentage of customer orders disputed.
Dr4. Perfect order rate: percentage of orders filled on time and complete.
Dr5. Lines on-time fill rate: percentage of lines in a customer order that were filled on time and complete.
Dr6. Backorder value: value of stocked orders.
Dr7. Average order-to-shipment lead time: average hours from order to delivery date.

Deliver: transport
Dt1. Transportation cost: percentage of revenue consumed by the transportation cost.
Dt2. Damaged shipments: percentage of shipments with damaged goods.
Dt3. Outbound transportation cost per customer order: total outbound transportation costs divided by the total number of customer orders.
Dt4. Inbound transportation cost per supplier order: total inbound transportation costs divided by the total number of supplier orders.

(Continued)
Figure 7. Occurrences of KPIs versus main process type obtained for the sample evaluated.
The first of these papers was classified in Section 2 as general issue (first level), strategic context (second level), retail (third level); the second was classified as architectural (first level), SC design (second level), logistics (third level). Both papers present very interesting results. In detail, Kim et al. (2008c) proposed a model for evaluating the perception of RFID benefits among retailers in the United States and Korea. The study discusses commonalities and differences in the perception of RFID, both as an effective technological infrastructure and as a strategic business issue, by comparing case studies of U.S. and Korean retailers. The results of this paper have several impacts on SC management, mainly because of the strategic point of view adopted by the authors. Chow et al. (2006b), in turn, proposed an RFID-based resource management system (called RFID-RMS) with case-level tagging and a new customized route-optimizing programming model, which uses real-time RFID tag data to solve the order-picking problems of material-handling equipment. The aim of this system is to maximize both the efficient use and the productive allocation of warehouse material-handling equipment at the lowest operating cost. The authors propose a case study based on a multinational logistics company. The results of this paper have general validity, even if they are mainly focused on warehouse activities.
4. Conclusion
RFID is now an emerging technology in business processes. Its application is therefore usually affected by a recurring criticism: how to effectively evaluate the potential impact and benefits of an RFID-enabled SC, as with analogous innovative ICT technologies.
Figure 8. Occurrences of KPIs in the sample of papers analyzed.
In recent years, several studies have appeared in the scientific literature proposing RFID applications in SC management in several industrial contexts. A gap remains, however, in defining standardized guidelines for analyzing RFID technology applications in full-scale industrial processes and for weighing the benefits of its application against the drawbacks. Therefore, a general framework based on the SCOR model has been provided here for evaluating the competitive advantage potentially created by an RFID application in SC management. The proposed framework has been applied to analyze papers found in scientific databases from 2000 to 2008.
Table 4. (Continued)

Deliver: store
Ds1. Warehousing and/or inventory management cost: percentage of revenue accounted for by warehousing and inventory management.
Ds2. Warehousing and/or inventory management cost: percentage of physical inventory value for warehousing and inventory management.
Ds3. Average inventory turnover: cost of finished goods average inventory.
Ds4. Inactive inventory percentage: percentage of finished goods stock keeping units (SKUs).
Ds5. Inventory obsolescence cost: percentage of revenue accounted for by obsolete and damaged finished goods.
Ds6. Warehousing and/or inventory management cost per customer order: costs for processing a customer order.
Ds7. Cycle count accuracy percentage: inventory accuracy rate.
Ds8. Average received finished good turnaround time: hours for newly finished goods inventory (either from manufacturing operations or suppliers).
Ds9. Inventory stock-out† percentage: percentage of finished goods SKUs that incurred stock-outs and customer backorders.
Ds10. Average order line items picked per hour per worker: considers the activity of identifying, removing from the inventory location, and final assembly of the total quantity of the line item ordered.
Ds11. Average warehousing space utilization: percentage of total warehouse space available that is used.
Ds12. Warehousing and/or inventory management cost per FTE: unitary resource cost of warehousing and inventory management.

Source: Poluha (2007).
∗ One full-time equivalent employee (FTE) is equal to 40 h of work in a week.
† A backorder is an unfilled request for warehouse stock. A stock-out is the inability to fill a supply requisition from the stock.
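As a small worked example of two of the Deliver: Store measures above, the sketch below computes Ds7 (cycle count accuracy percentage) and Ds11 (average warehousing space utilization) from hypothetical data; the record fields and figures are illustrative assumptions.

```python
# Minimal sketch (hypothetical data): two Deliver: Store KPIs from
# Table 4 -- Ds7 (cycle count accuracy) and Ds11 (space utilization).
cycle_counts = [  # hypothetical cycle-count results per SKU
    {"sku": "A", "system_qty": 120, "counted_qty": 120},
    {"sku": "B", "system_qty": 80, "counted_qty": 78},
    {"sku": "C", "system_qty": 45, "counted_qty": 45},
]

# Ds7: share of SKUs whose physical count matches the system record.
ds7 = sum(c["system_qty"] == c["counted_qty"]
          for c in cycle_counts) / len(cycle_counts)

# Ds11: share of available warehouse space actually in use.
used_space_m2, total_space_m2 = 7200.0, 10000.0  # hypothetical
ds11 = used_space_m2 / total_space_m2

print(f"Ds7 = {ds7:.1%}, Ds11 = {ds11:.1%}")  # Ds7 = 66.7%, Ds11 = 72.0%
```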
Table 5. Performance measures identified for the academic studies analyzed: a matrix marking which of the 28 KPIs (S1, M1–M4, Ds1–Ds12, Dt1–Dt4, Dr1–Dr7) are evaluated in each of the 14 sample papers (Yoo et al., 2008; Kim et al., 2008; Uçkun et al., 2008; Wang et al., 2007; Kim et al., 2008; Bottani and Rizzi, 2008; Chow et al., 2007; Doerr et al., 2006; Song et al., 2006; Chow et al., 2006b; Lee et al., 2006; Fosso-Wamba et al., 2008; Hou and Huang, 2006; Kärkkäinen, 2003).
The bibliometric review proposed in this chapter has made it possible to classify papers according to paper type and main topics. This activity has led to the definition of a general framework for evaluating the performance of RFID technology applied in SC management. The proposed framework identifies metrics (KPIs) that are valuable for tracing the impact of RFID technology both in single organizations and in the whole SC. The results obtained show major trends of RFID application in warehouse and retail activity — the Store and Sell processes in the SCOR model proposed. Moreover, the analysis has highlighted specific industrial sectors where RFID is widely applied — such as food and logistics — and specific contexts — such as construction — where there has been an increasing interest in RFID technology.
References
Abad, E, S Zampolli, S Marco, A Scorzoni, B Mazzolai, A Juarros, D Gómez, I Elmi, G Cardinali, JM Gómez, F Palacio, M Cicioni, A Mondini, T Becker and I Sayhan (2007). Flexible tag microlab development: Gas sensors integration in RFID flexible tags for food logistic. Sensors and Actuators B, 127, 2–7.
Angeles, R (2007). An empirical study of the anticipated consumer response to RFID product item tagging. Industrial Management & Data Systems, 107(4), 461–483.
Barut, M, R Brown, N Freund, J May and E Reinhart (2006). RFID and corporate responsibility: Hidden costs in RFID implementation. Business and Society Review, 111, 287–303.
Bean, L (2006). RFID: Why the worry? The Journal of Corporate Accounting & Finance, 17(5), 3–13.
Bendavid, Y, E Lefebvre, LA Lefebvre and S Fosso-Wamba (2009). Key performance indicators for the evolution of RFID-enabled B2B eCommerce applications: The case of a five-layer supply chain. Information Systems and E-Business Management, 7(1), 1–20.
Bendavid, Y, E Lefebvre, LA Lefebvre and S Fosso-Wamba (2009). Exploring the impact of RFID technology and the EPC network on mobile B2B eCommerce: A case study in a retail industry. Information Systems and E-Business Management (in press).
Bendoly, E, A Citurs and B Konsynski (2007). Internal infrastructural impacts on RFID perceptions and commitment: Knowledge, operational procedures, and information-processing standards. Decision Sciences, 38(3), 423–449.
Bohn, J (2008). Prototypical implementation of location-aware services based on a middleware architecture for super-distributed RFID tag infrastructures. Personal and Ubiquitous Computing, 12, 155–166.
Bottani, E and A Rizzi (2008). Economical assessment of the impact of RFID technology and EPC system on the fast-moving consumer goods supply chain. International Journal of Production Economics, 112, 548–569.
Brignone, C, T Connors, M Jam, G Lyon, G Manjunath, A McReynolds, S Mohalik, I Robinson, C Sayers, C Sevestre, J Tourrilhes and V Srinivasmurthy (2007). Real time asset tracking in the data center. Distributed Parallel Databases, 21, 145–165.
Chande, A, S Dhekane, N Hemachandra and N Rangaraj (2005). Perishable inventory management and dynamic pricing using RFID technology. Sadhana, 30(2&3), 445–462.
Chao, C, J Yang and W Jen (2007). Determining technology trends and forecasts of RFID by a historical review and bibliometric analysis from 1991 to 2005. Technovation, 27, 268–279.
Chen, J, M Chen, C Chen and Y Chang (2007). Architecture design and performance evaluation of RFID object tracking systems. Computer Communications, 30, 2070–2086.
Chow, HKH, KL Choy and WB Lee (2006a). A dynamic logistics process knowledge-based system: An RFID multi-agent approach. Knowledge-Based Systems, 30, 561–576.
Chow, HKH, KL Choy, WB Lee and KC Lau (2006b). Design of a RFID case-based resource management system for warehouse operations. Expert Systems with Applications, 30, 561–576.
Clarke, RH, D Twede, JR Tazelaar and KK Boyer (2006). Radio frequency identification (RFID) performance: The effect of tag orientation and package content. Packaging Technology and Science, 19, 45–54.
Connolly, C (2007). Sensor trends in processing and packaging of foods and pharmaceuticals. Sensor Review, 27(2), 103–108.
Coronado Mondragon, E, AC Lyons, Z Michaelides and DF Kehoe (2006). Automotive supply chain models and technologies: A review of some latest developments. Journal of Enterprise Information Management, 19(5), 551–562.
Curtin, J, RJ Kauffman and FJ Riggins (2007). Making the "MOST" out of RFID technology: A research agenda for the study of the adoption, usage and impact of RFID. Information and Technology Management, 8, 87–110.
de Kok, AG, KH van Donselaar and T van Woensel (2008). A break-even analysis of RFID technology for inventory sensitive to shrinkage. International Journal of Production Economics, 112, 521–531.
Doerr, KH, WR Gates and JE Mutty (2006). A hybrid approach to the valuation of RFID/MEMS technology applied to ordnance inventory. International Journal of Production Economics, 103, 726–741.
Domdouzis, K, B Kumar and C Anumba (2007). Radio-frequency identification (RFID) applications: A brief introduction. Advanced Engineering Informatics, 21, 350–355.
Ergen, E, B Akinci and R Sacks (2007). Life-cycle data management of engineered-to-order components using radio frequency identification. Advanced Engineering Informatics, 21, 356–366.
Folinas, D, I Manikas and B Manos (2006). Traceability data management for food chains. British Food Journal, 108(8), 622–633.
Fosso Wamba, S, LA Lefebvre, Y Bendavid and É Lefebvre (2008). Exploring the impact of RFID technology and the EPC network on mobile B2B eCommerce: A case study in the retail industry. International Journal of Production Economics, 112(2), 614–629.
Glasser, D, K Goodman and N Einspruch (2007). Chips, tags and scanners: Ethical challenges for radio frequency identification. Ethics and Information Technology, 9, 101–109.
Goodrum, PM, MA McLaren and A Durfee (2006). The application of active radio frequency identification technology for tool tracking on construction job sites. Automation in Construction, 15, 292–302.
He, Y, J Hu and H Min (2008). An ultra low-voltage, low-power baseband-processor for UHF RFID tag. Frontiers of Electrical and Electronic Engineering in China, 3(1), 99–104.
Higuera, AG and A Cenjora Montalvo (2007). RFID-enhanced multi-agent based control for a machining system. International Journal of Flexible Manufacturing Systems, 19, 41–61.
Hou, J and C Huang (2006). Quantitative performance evaluation of RFID applications in the supply chain of the printing industry. Industrial Management & Data Systems, 106(1), 96–120.
Huang, GQ, YF Zhang and PY Jiang (2007a). RFID-based wireless manufacturing for walking-worker assembly islands with fixed-position layouts. Robotics and Computer-Integrated Manufacturing, 23, 469–477.
Huang, GQ, YF Zhang and PY Jiang (2007b). RFID-based wireless manufacturing for real-time management of job shop WIP inventories. Robotics and Computer-Integrated Manufacturing, 23, 469–477.
Huang, SH, SK Sheoran and H Keskar (2005). Computer-assisted supply chain configuration based on supply chain operations reference (SCOR) model. Computers and Industrial Engineering, 48, 377–394.
Jedermann, R, C Behrens, D Westphal and W Lang (2006). Applying autonomous sensor systems in logistics — Combining sensor networks, RFIDs and software agents. Sensors and Actuators A, 132, 370–375.
Jeffery, S, M Franklin and M Garofalakis (2008). An adaptive RFID middleware for supporting metaphysical data independence. The VLDB Journal, 17, 265–289.
Jones, P, C Clarke-Hill, D Hillier and D Comfort (2005). The benefits, challenges and impacts of radio frequency identification technology (RFID) for retailers in the UK. Marketing Intelligence & Planning, 23(4), 395–402.
Kahhat, R, J Kim, M Xu, B Allenby, E Williams and P Zhang (2008). Exploring e-waste management systems in the United States. Resources, Conservation and Recycling, 52, 955–964.
Kärkkäinen, M (2003). Increasing efficiency in the supply chain for short shelf life goods using RFID tagging. International Journal of Retail and Distribution Management, 31, 529–536.
Kärkkäinen, M and J Holmström (2002). Wireless product identification: Enabler for handling efficiency, customisation and information sharing. Supply Chain Management, 7, 242–252.
Kelepouris, T, K Pramatari and G Doukidis (2007). RFID-enabled traceability in the food supply chain. Industrial Management & Data Systems, 107(2), 183–200.
Kelly, EP and GS Erickson (2005). RFID tags: Commercial applications v. privacy rights. Industrial Management & Data Systems, 105(6), 703–713.
Keskilammi, M, L Sydänheimo and M Kivikoski (2003). Radio frequency technology for automated manufacturing and logistics control. Part 1: Passive RFID systems and the effects of antenna parameters on operational distance. International Journal of Advanced Manufacturing and Technologies, 21, 769–774.
Kim, M, C Kim, SR Hong and I Kwon (2008a). Forward–backward analysis of RFID-enabled supply chain using fuzzy cognitive map and genetic algorithm. Expert Systems with Applications, 35(3), 1166–1176.
Kim, J, K Tang, S Kumara, ST Yee and J Tew (2008b). Value analysis of location-enabled radio-frequency identification information on delivery chain performance. International Journal of Production Economics, 112(1), 403–415.
Kim, EY, E Ko, H Kim and CE Koh (2008c). Comparison of benefits of radio frequency identification: Implications for business strategic performance in the U.S. and Korean retailers. Industrial Marketing Management, 37(7), 797–806.
Knospe, H and H Pohl (2004). RFID security. Mobile Security, 9(4), 1363–4127.
Konomi, S and G Roussos (2007). Ubiquitous computing in the real world: Lessons learnt from large scale RFID deployments. Personal and Ubiquitous Computing, 11, 507–521.
Kumar, S and EM Budin (2006). Prevention and management of product recalls in the processed food industry: A case study based on an exporter's perspective. Technovation, 26, 739–775.
Lai, F, J Hutchinson and G Zhang (2005). Radio frequency identification (RFID) in China: Opportunities and challenges. International Journal of Retail & Distribution Management, 33(12), 905–916.
Lee, U, K Kang, G Kim and H Cho (2006). Improving tower crane productivity using wireless technology. Computer-Aided Civil and Infrastructure Engineering, 21(8), 594–604.
Lee, LS, KD Fiedler and JS Smith (2008). Radio frequency identification (RFID) implementation in the service sector: A customer-facing diffusion model. International Journal of Production Economics, 112, 587–600.
Li, D, D Kehoe and P Drake (2006a). Dynamic planning with a wireless product identification technology in food supply chains. International Journal of Advanced Manufacturing and Technologies, 30, 938–944.
Li, S, JK Visich, BM Khumawala and C Zhang (2006b). Radio frequency identification technology: Applications, technical challenges and strategies. Sensor Review, 26(3), 193–202.
Lockton, V and RS Rosenberg (2005). RFID: The next serious threat to privacy ethics and information technology. Ethics and Information Technology, 7, 221–231.
Loebbecke, C (2005). RFID technology and applications in the retail supply chain: The early Metro Group pilot. Paper presented at the 18th Bled eConference, eIntegration in Action, Bled, Slovenia.
Luckett, D (2004). The supply chain. BT Technology Journal, 22(3), 50–55.
McMeekin, TA, J Baranyi, J Bowman, P Dalgaard, M Kirk, T Ross, S Schmid and MH Zwietering (2006). Information systems in food safety management. International Journal of Food Microbiology, 112, 181–194.
Mourtzis, D, N Papakostas, S Makris, V Xanthakis and G Chryssolouris (2008). Supply chain modeling and control for producing highly customized products. CIRP Annals — Manufacturing Technology, 57, 451–454.
Mousavi, A, M Sarhadi, A Lenk and S Fawcet (2002). Tracking and traceability in the meat process industry. British Food Journal, 104(1), 7–19.
Ngai, EWT, TCE Cheng, S Au and KH Lai (2007). Mobile commerce integrated with RFID technology in a container depot. Decision Support Systems, 43, 62–76.
Ngai, EWT, FFC Suk and SYY Lo (2008a). Development of an RFID-based sushi management system: The case of a conveyor-belt sushi restaurant. International Journal of Production Economics, 112, 630–645.
Ngai, EWT, KKL Moon, FJ Riggins and CY Yi (2008b). RFID research: An academic literature review (1995–2005) and future research directions. International Journal of Production Economics, 112, 630–645.
O'Leary, E (2008). Supporting decisions in real-time enterprises: Autonomic supply chain systems. Information Systems and E-Business Management, 6, 239–255.
Palsson, H (2007). Participant observation in logistics research: Experiences from an RFID implementation study. International Journal of Physical Distribution & Logistics Management, 37(2), 148–163.
Parlikad, AK and D McFarlane (2007). RFID-based product information in end-of-life decision making. Control Engineering Practice, 15, 1348–1363.
Patil, A, J Munson, D Wood and A Cole (2008). Bluebot: Asset tracking via robotic location crawling. Computer Communications, 31, 1067–1077.
Penttilä, K, M Keskilammi, L Sydänheimo and M Kivikoski (2006). Radio frequency technology for automated manufacturing and logistics control. Part 2: RFID antenna utilization in industrial applications. International Journal of Advanced Manufacturing and Technologies, 31, 116–124.
Peslak, AR (2005). An ethical exploration of privacy and radio frequency identification. Journal of Business Ethics, 59, 327–345.
Peyret, F and R Tasky (2004). A traceability system between plant and work site for asphalt pavements. Computer-Aided Civil and Infrastructure Engineering, 19, 54–63.
Poluha, RG (2007). Application of the SCOR Model in Supply Chain Management. Cambria Press.
Prater, E, GV Frazier and PM Reyes (2005). Future impacts of RFID on e-supply chains in grocery retailing. Supply Chain Management: An International Journal, 10(2), 134–142.
Qiu, RG (2007). RFID-enabled automation in support of factory integration. Robotics and Computer-Integrated Manufacturing, 23, 677–683.
Ranky, PG (2006). An introduction to radio frequency identification (RFID) methods and solutions. Assembly Automation, 26(1), 28–33.
Regattieri, A, M Gamberi and R Manzini (2007). Traceability of food products: General framework and experimental evidence. Journal of Food Engineering, 81, 347–356.
Rekik, Y, Z Jemai, E Sahin and Y Dallery (2007). Improving the performance of retail stores subject to execution errors: Coordination versus RFID technology. OR Spectrum, 29, 597–626.
Rekik, Y, E Sahin and Y Dallery (2008). Analysis of the impact of the RFID technology on reducing product misplacement errors at retail stores. International Journal of Production Economics, 112, 264–278.
Römer, K, T Schoch, F Mattern and T Dübendorfer (2004). Smart identification frameworks for ubiquitous computing applications. Wireless Networks, 10, 689–700.
Roussos, G and T Moussouri (2004). Consumer perceptions of privacy, security and trust in ubiquitous commerce. Personal and Ubiquitous Computing, 8, 416–429.
Saygin, C (2007). Adaptive inventory management using RFID data. International Journal of Advanced Manufacturing and Technologies, 32, 1045–1051.
Scott Erickson, G and EP Kelly (2007). International aspects of radio frequency identification tags: Different approaches to bridging the technology/privacy divide. Knowledge Technology & Policy, 20, 107–114.
Sellitto, C, S Burgess and P Hawking (2007). Information quality attributes associated with RFID-derived benefits in the retail supply chain. International Journal of Retail & Distribution Management, 35(1), 69–87.
Sheffi, Y (2004). RFID and the innovation cycle. The International Journal of Logistics Management, 15(1).
Shih, D, P Sun and B Lin (2005). Securing industry-wide EPCglobal network with WS-security. Industrial Management & Data Systems, 105(7), 972–996.
Singh, SP, M McCartney, J Singh and R Clarke (2008). RFID research and testing for packages of apparel, consumer goods and fresh produce in the retail distribution environment. Packaging Technology and Science, 21, 91–102.
Smith, AD (2005). Exploring radio frequency identification technology and its impact on business systems. Information Management & Computer Security, 13(1), 16–28.
Sommerville, J and N Craig (2005). Intelligent buildings with radio frequency identification devices. Structural Survey, 23(4), 282–290.
Song, J, CT Haas, C Caldas, E Ergen and B Akinci (2006). Automating the task of tracking the delivery and receipt of fabricated pipe spools in industrial projects. Automation in Construction, 15, 166–177.
Song, J, CT Haas and CH Caldas (2007). A proximity-based method for locating RFID tagged objects. Advanced Engineering Informatics, 21, 367–376.
Soppera, A and T Burbridge (2005). Wireless identification — privacy and security. BT Technology Journal, 23(4), 54–64.
Spekman, RE and PJ Sweeney II (2006). RFID: From concept to implementation. International Journal of Physical Distribution & Logistics Management, 36(10), 736–754.
Spieß, P, C Bornhövd, T Lin, S Haller and J Schaper (2007). Going beyond auto-ID: A service-oriented smart items infrastructure. Journal of Enterprise Information Management, 20(3), 356–370.
Supply Chain Council (2007). Supply-Chain Operations Reference-Model. SCOR Overview Version 8.0.
Szmerekovsky, JG and J Zhang (2008). Coordination and adoption of item-level RFID with vendor managed inventory. International Journal of Production Economics, 114, 388–398.
Tajima, M (2007). Strategic value of RFID in supply chain management. Journal of Purchasing & Supply Management, 13, 261–273.
Thiesse, F and E Fleisch (2007). On the value of location information to lot scheduling in complex manufacturing processes. Journal of Food Engineering, 81, 347–356.
Thompson, M, G Sylvia and MT Morrissey (2005). Seafood traceability in the United States: Current trends, system design, and potential applications. Comprehensive Reviews in Food Science and Food Safety, 1, 1–7.
Twist, DC (2005). The impact of radio frequency identification on supply chain facilities. Journal of Facilities Management, 3(3), 226–239.
Uçkun, C, F Karaesmen and S Savas (2008). Investment in improved inventory accuracy in a decentralized supply chain. International Journal of Production Economics, 113, 546–566.
Vijayaraman, BS and BA Osyk (2006). An empirical study of RFID implementation in the warehousing industry. The International Journal of Logistics Management, 17(1), 6–20.
Vrba, P, F Macůrek and V Mařík (2008). Using radio frequency identification in agent-based control systems for industrial applications. Engineering Applications of Artificial Intelligence. doi:10.1016/j.engappai.2008.01.008.
Wang, L (2008). Enhancing construction quality inspection and management using RFID technology. Automation in Construction, 17, 467–479.
Wang, L, Y Lin and PH Lin (2007a). Dynamic mobile RFID-based supply chain control and management system in construction. Advanced Engineering Informatics, 21, 377–390.
Wang, SJ, SF Liu and WL Wang (2007b). The simulated impact of RFID-enabled supply chain on pull-based inventory replenishment in TFT-LCD industry. International Journal of Production Economics, 21, 377–390.
Warren, PW (2004). From ubiquitous computing to ubiquitous intelligence. BT Technology Journal, 22(2), 28–38.
Wright, S and A Steventon (2004). Intelligent spaces — The vision, the opportunities and the barriers. BT Technology Journal, 22(3), 15–26.
Wu, NC, MA Nystrom, TR Lin and HC Yu (2006). Challenges to global RFID adoption. Technovation, 26, 1317–1323.
Wyld, DC (2006). RFID 101: The next big thing for management. Management Research News, 29(4), 154–173.
Xiao, Y, S Yu, K Wu, Q Ni, C Janecek and J Nordstad (2007). Radio frequency identification: Technologies, applications, and research issues. Wireless Communications and Mobile Computing, 7, 457–472.
Yam, KL, PT Takhistov and J Miltz (2005). Intelligent packaging: Concepts and applications. Journal of Food Science, 70(1), 1–10.
Yoo, JS, SR Hong and CO Kim (2008). Service level management of non-stationary supply chain using direct neural network controller. Expert Systems with Applications, 36(2), 3574–3586.
Yun, K and D Kim (2007). Robust location tracking using a dual layer particle filter. Pervasive and Mobile Computing, 3, 209–232.
Zampolli, S, I Elmi, E Cozzani, G Cardinali, A Scorzoni, M Cicioni, S Marco, F Palacio, J Gomez-Cama, I Sayhan and T Becker (2008). Ultra-low-power components for an RFID tag with physical and chemical sensors. Microsystem Technology, 14, 581–588.
Zhou, S, W Ling and Z Peng (2007). An RFID-based remote monitoring system for enterprise internal production management. International Journal of Advanced Manufacturing and Technologies, 33, 837–844.
Biographical Notes
Valerio Elia is an assistant professor of Business and Firm Organization at the University of Salento. His research interests are focused on business innovation and e-government. He has also been managing director of ISUFI (the High School of the University of Salento) for research and innovation policy. He has been involved
in several industrial projects in the field of innovation with FIAT Engineering, ST Microelectronics, and the Apulia region.
Maria Grazia Gnoni is an assistant professor at the University of Salento. In 2003, she received her PhD in Advanced Production Systems Engineering at the Polytechnic of Bari, Italy. Her research interests are focused on optimization models for supply chain management and logistics.
Alessandra Rollo received her Master's degree cum laude in Management Engineering from the University of Salento, Italy, in 2007. After a brief experience as an SAP consultant, she joined the Department of Engineering for Innovation at the University of Salento. Her research interests are in the fields of operations management and decision sciences, with a specialization in ERP and IT utilization.
Part VI Tools for the Evaluation of Business Information Systems
Chapter 32
Tools for the Decision-Making Process in the Management Information System of the Organization
CARMEN DE PABLOS HEREDERO∗ and MÓNICA DE PABLOS HEREDERO†
Business Administration Department, Rey Juan Carlos University, Social Sciences Faculty, Paseo de los Artilleros, s/n, 28032 Madrid, Spain
∗ [email protected]
† [email protected]
The information-based economy has promoted greater competition among firms. Information and communication technologies have become one of the major factors influencing the decision-making process in firms operating in a global context. A series of information and communication technologies can assist with different types and levels of the decision-making process in organizations. The types of decisions facing businesses vary considerably. The main objective of the chapter is to describe how different types of information and communication technologies can be applied to improve the decision-making process in the area of management information systems in organizations. We offer examples by applying a case study methodology in firms using the above-mentioned information and communication technologies.
Keywords: Information system; information and communication technologies; decision-making; integrated management systems; database management system; enterprise resource planning; customer relationship management.
1. Introduction
The types of decisions facing businesses vary considerably. Structured decisions are repetitive, routine decisions. Unstructured decisions are non-routine ones; there is no universal procedure for producing them. Between these two types we find "semi-structured decisions," in which only a part of the problem has a clear answer according to a recognized procedure. According to their nature, we can find different groups of information and communication technologies that can help in the decision-making process for different kinds of decisions in organizations. For example, data warehousing and enterprise resource planning (ERP)
systems help share information essential for structured decisions. Data mining and artificial intelligence techniques are essential tools for identifying trends. Management information systems (MIS) and some types of decision support systems (DSS) help decision-making in unstructured, strategic-level decisions. The main objective of the chapter is to describe how different types of information and communication technologies can be applied to improve the decision-making process in the area of MIS in organizations. We offer examples by applying a case study methodology in firms using the above-mentioned information and communication technologies. We analyze two types of relationships:
• On the one hand, we describe how some information and communication technologies can help in deciding the implementation of an information system in different areas of an organization.
• On the other hand, we describe how organizations can better exploit the data stored in their information systems by manipulating them with different information and communication technologies.
2. Concept of Information System
Information is one of the most important and strategic assets in firms (Eldon, 1997; Shapiro and Varian, 1999; Clemons and Hitt, 2004; Erden et al., 2008). The working of the whole organization depends heavily on the proper supply, processing, and management of information. Firms need to process information to create knowledge for the decision-making process (Nonaka, 1995; De Pablos, 2006; Soret et al., 2008). Both information and knowledge are the tools that allow organizations to know the needs of society, the dynamics of the competition and the opportunities to explore, the allies in the market, their workers and their needs, and so on. A proper flow of information in the firm is therefore required. Obtaining information, outside or inside the firm, and using it efficiently is not easy. Therefore, organizations need to count on a group of material, human, and technical resources properly organized, coordinated, and integrated in the structure of the organization and aligned with the corporation's strategy. This group of resources is the firm's information system. The firm's information system is not just designed and implemented in the organization to manage information and knowledge; it is also a means to improve the firm's processes and, ultimately, to create value (Turban et al., 2007). An information system is more efficient when it is able to improve the business processes and decision-making in the firm by offering a high return at a low cost (Laudon and Laudon, 2004). For this reason, an information system that does not offer knowledge or information for the decision-making process should not be maintained in the firm.
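To make the distinction between decision types introduced above concrete, here is a minimal sketch of a structured decision: an inventory reorder rule of the kind an ERP module can automate. The thresholds and field names are hypothetical illustrations, not drawn from the chapter.

```python
# A minimal sketch of a structured (routine, rule-based) decision:
# an inventory reorder rule of the kind an ERP module can automate.
# All quantities below are hypothetical illustrations.

def reorder_decision(on_hand: int, reorder_point: int,
                     target_level: int) -> int:
    """Return the quantity to order (0 if no order is needed)."""
    if on_hand <= reorder_point:          # fixed, recognized procedure
        return target_level - on_hand     # order up to the target level
    return 0

# A structured decision needs no managerial judgment:
print(reorder_decision(on_hand=40, reorder_point=50, target_level=200))  # 160
```

An unstructured decision, by contrast, has no such recognized procedure and cannot be reduced to a fixed rule of this kind.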
From a technical point of view and according to a systemic focus, an information system is a group of organized and dynamically interrelated resources that allow information to be processed so that users can make decisions and execute their functions to reach the firm's objectives (Parker and Case, 1993). An information system thus performs three main functions to satisfy the firm's information needs:
• Capturing and collecting external and internal data.
• Processing the data.
• Distributing the resulting information.
As an example, for a company operating in the fashion industry with distributed points of sale in different countries, information coming from different sources in multiple systems is later processed in the firm's data warehouse. This generates reports that are used by a variety of departments involved in the decision-making process at different levels. The systems are mainly:
• A corporate ERP system, where different financial, logistical, and production processes are managed.
• A CRM system for the management of customer loyalty.
• An extranet for managing relationships with providers.
• Office software, such as Excel, for planning purposes.
• Management systems for the point-of-sale devices (TPV).
• Systems that record the number of visits to the selling points.
• External information coming from studies of the potential market, to compare the company's current situation with the industry.
If only the information coming from one of these systems were analyzed, we could not obtain a complete vision of what is happening in the different areas. For that reason, it is important to collect the information coming from the various areas, process it, and generate a group of reports in which different ratios combining the information contained in the different systems are obtained. The kind of information contained in the report could be (Fig. 1):
• Flow of customers: data about people visiting the points of sale and the actual sales can be combined to measure the number of people interested in the product and how many really buy it.
• A measure of the sales in a period in comparison with the competition.
• The impact of marketing and global royalties on sales.
• The evaluation of providers according to a group of criteria such as quality, service, etc.
• Actual sales results in comparison with the plan.
Besides, like any other system that evolves over time in a controlled way, it provides feedback that allows all the system functions to be corrected and improved.
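As a minimal sketch of how such a report combines data from separate systems, the snippet below joins hypothetical visit counts (from the visit-control system) with sales records (from the POS/TPV system) to compute the flow-of-customers conversion ratio described above; all names and figures are illustrative assumptions.

```python
# Minimal sketch: combining data from two source systems into one
# report ratio (flow of customers). All names/figures are hypothetical.
visits = {"Madrid": 1200, "Paris": 950}        # from the visit-control system
sales_orders = [                               # from the POS (TPV) system
    {"store": "Madrid", "tickets": 240},
    {"store": "Paris", "tickets": 310},
]

tickets_by_store = {}
for order in sales_orders:
    tickets_by_store[order["store"]] = (
        tickets_by_store.get(order["store"], 0) + order["tickets"])

# Conversion ratio: how many visitors actually buy.
for store, n_visits in visits.items():
    conversion = tickets_by_store.get(store, 0) / n_visits
    print(f"{store}: {conversion:.1%} of visitors bought")
```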
Figure 1. An information system.
The way in which the information system realizes its functions defines its operability. For example, within the same information system we could find a formal part that allows the processing of salaries, accessed by operative workers with no managerial responsibility; an informal part containing teamwork tools that allow the transmission of knowledge among managers of business units; and even manual systems for developing simple processes. All of them are valid; they cohabit inside the organization and are part of the firm's information system. This variety of information systems produces misunderstandings when we try to classify them (Walsham, 1993). Furthermore, we consider the information system a part of the organization it belongs to. In this chapter, we present the concept of the information system in firms and describe the relationship between this concept and the organization it belongs to. We also present the concept of information and communication technology, with a reflection on the role this concept plays in the information systems of organizations. With the main objective of providing an approach to identify and analyze the possible information systems that we can find in firms, a group of information
systems is presented. We offer a view that allows us to classify the information systems for decision-making in firms according to two different perspectives. The first is based on the evolution of information systems and their answer to the needs of the different managerial levels. The second, more modern one is based on the principles of integration and coordination. According to this second perspective, we identify two of the most common information systems in firms today: ERP and customer relationship management (CRM) systems.
3. The Information System and the Organization
Information systems are a part of the organization. In fact, the elements that shape them are practically the same technical, material, and human resources, methods, and procedures that take part in the development of the firm's processes (Powell and Dent-Micallef, 1997). This is even more evident in the case of many new firms highly supported by information technology (IT). Technology has deeply impacted many business processes, bringing high levels of efficiency and dependency (Sethi and King, 1994; Davenport et al., 2008). In these firms, it is even more difficult to separate the information system from the organizational structure; there is a complex relationship between the organization and the technology, and both are key elements in the design of the information system. If we tried to define the work that many employees develop in firms, it would be quite difficult to distinguish what part of it is dedicated to information processing and what part deals with the transformation of goods and services. From a strategic point of view, we can consider the business information system a part of the firm's infrastructure, and it must be coherent and coordinated with the rest of the systems that shape this infrastructure (Andreu et al., 1993; De Pablos, 2000; Gordon et al., 2008). From this perspective, the alignment of the information system with the business objectives it serves is of special importance. Consequently, the firm should develop an information systems strategy coherent with the firm's strategy, so that a proper use of information systems can bring benefits to the firm (Gordon and Tarafdar, 2007). In the same way, the design and implementation of innovative information systems will allow the development of new corporate strategies that can generate competitive advantages for firms. Following the previous example of a fashion company whose providers (manufacturing points) and customers (points of sale) are located in various countries, such firms need to incorporate information technologies to manage the whole supply chain. In these companies, the business processes are decentralized, and the only way to control all of them is by using sophisticated information systems. The management system is vertically integrated. This offers an important competitive advantage in terms of flexibility, allowing the firm to position a new product every two weeks.
The agility of such an organization comes, on the one hand, from an information system present in all the firm's locations offering information online and, on the other hand, from a design system that allows the digitalization of the models created and the sending of this information from the manufacturing sites using Web technology. This allows better response times in the production process: replenishment of a product within 24 to 48 hours can be offered, satisfying immediate market demand, and new designs and repeat runs of products can be offered within short periods. This way, the customer can perceive a kind of exclusive service. From an organizational perspective, we can observe that the information system is related to all the activities performed in the firm, and these depend mostly on the system. For that reason, any change in the information system alters the dynamics of the organization, and vice versa. For organizations, change has become a constant (Rajan and Rajan, 2007). Information systems can constitute true enablers of, or barriers to, change, and therefore to the redesign of processes and the creation of value in them. For that reason, managers must bear in mind the key impact that IT has on the processes of the organization (Porter and Millar, 1985; Boisot, 1999; Phambuka-Nsimbi, 2008). Moreover, information systems both shape and are influenced by the characteristics of the organization they belong to: culture, initial structure, procedures, etc. These reflections give rise to a key question in the management of information systems: what are their organizational and strategic implications? These relationships will be discussed later, with special stress on the strategic impact of IT in relation to information systems.
4. The Information System and the Technology
Many times, when we refer to information systems and IT in the context of the firm, we tend to consider them the same concept. However, although they are closely linked, they are not the same thing. We have previously defined an information system: it is the asset that allows the organization to satisfy its information needs and contributes to the accomplishment of its strategic objectives. Now we try to define IT and its main role in relation to the information system in the organization. If we start by considering that technology is a group of applied and structured knowledge whose main objective is creating something (De Pablos, 2006), we can consider IT to be the materialization of this knowledge related to the processing of information. For this reason, we define IT as a means: a group of tools composed of hardware, software, and communication network solutions (De Pablos, 2004). Let us mention some technologies that are being applied in firms today.
4. The Information System and the Technology Many times, when we refer to information systems and IT in the context of the firm we tend to consider them as the same concept. However, although they are close linked, they are not the same thing. We have previously defined an information system. It is the asset that allows the organization to satisfy its information needs and contributes to the accomplishment of its strategic objectives. Now, we try to define IT and its main role in relation to the information system in the organization. If we start by considering that technology is a group of applied and structured knowledge whose main objective is creating something (De Pablos, 2006), we can consider IT to be the materialization of all this knowledge related to the processing of information. For this reason, we define IT as a means: a group of tools composed by hardware, software, and communication networks solutions (De Pablos, 2004). Let us mention some technologies that are being applied to firms today.
March 15, 2010
14:46
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch32
Tools for the Decision-Making Process in the Management Information System 771
4.1. Radio Frequency Identification (RFID)
Radio frequency identification consists of a system that collects data using small chips attached to products. It makes product control easier (from the moment the product is created until its sale to the final customer) by recording each process, movement, or any other relevant data in the distribution process. It is a great technological advance to be able to control the product through the different logistics processes of the whole supply chain. Among the main advantages of RFID technology, we can stress:
• Obtaining exact information about inventories in real time.
• Reduction of picking and packing times.
• Control of the product in any process it is involved in.
• Identification of the providers from which each warehoused product comes.
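As a minimal sketch of the first advantage (exact real-time inventory), the snippet below processes a stream of hypothetical RFID read events and keeps per-product stock counts by location; the event fields and values are illustrative assumptions.

```python
# Minimal sketch: keeping real-time inventory from RFID read events.
# Event fields and reader locations are hypothetical illustrations.
from collections import defaultdict

inventory = defaultdict(int)  # (location, product) -> quantity

def on_rfid_read(event: dict) -> None:
    """Update stock when a reader gate sees a tagged product."""
    key = (event["location"], event["product"])
    inventory[key] += 1 if event["direction"] == "in" else -1

for ev in [  # hypothetical read events from two dock-door readers
    {"location": "warehouse-1", "product": "SKU-42", "direction": "in"},
    {"location": "warehouse-1", "product": "SKU-42", "direction": "in"},
    {"location": "warehouse-1", "product": "SKU-42", "direction": "out"},
]:
    on_rfid_read(ev)

print(dict(inventory))  # {('warehouse-1', 'SKU-42'): 1}
```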
4.2. Example RFID: Protecting Valuable Products During Distribution
A bonded warehouse in the United Kingdom stores expensive single malt whiskies, which are subject to theft, even by warehouse employees. Pallets carrying these stocks are also subject to being misplaced, thus delaying on-time deliveries. To prevent these problems, it was necessary to ensure that forklift trucks moving pallets would pass correctly along pre-set routes. Deviations might mean that employees were intentionally taking product off to a hiding location for later theft, or were simply misplacing stock. To create this security system, the company built a grid of transponders suspended from the ceiling. The forklift trucks are equipped with RFID readers. Routing details are downloaded to the forklift truck from a central computer via a radio frequency communication link. These include the correct loading location, the exact sequence of transponders along the route, and the delivery bay location. If the on-board reader detects deviations, the truck is immobilized and a supervisor is needed to reset the vehicle. Automatic weighing is also used in combination with the system (Fig. 2). A sketch of this route check appears below, after the next subsection.
4.3. WI-FI
WI-FI technologies allow access to information in any place and at any moment. The main advantages reached by firms using these technologies are:
• Ergonomics and comfort
• Time savings
• Cost savings
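Returning to the bonded-warehouse example of Section 4.2, the sketch below shows one plausible form of the on-board route check: each transponder read is compared against the downloaded sequence, and the truck is immobilized on the first deviation. The transponder IDs and the immobilization hook are hypothetical illustrations of the behavior described, not the actual system's protocol.

```python
# Minimal sketch of the Sec. 4.2 route check: the on-board reader
# compares each transponder it sees against the downloaded route.
# Transponder IDs and actions are hypothetical illustrations.

def immobilize_truck(step: int, expected: str, seen: str) -> None:
    print(f"Deviation at step {step}: expected {expected}, read {seen}. "
          "Truck immobilized; supervisor reset required.")

def follow_route(planned_route: list[str], reads: list[str]) -> bool:
    """Return True if the truck completed the route without deviation."""
    for step, (expected, seen) in enumerate(zip(planned_route, reads)):
        if seen != expected:
            immobilize_truck(step, expected, seen)
            return False  # supervisor reset required
    return len(reads) == len(planned_route)

# Hypothetical route from loading location to delivery bay:
route = ["T-01", "T-07", "T-12", "BAY-3"]
print(follow_route(route, ["T-01", "T-07", "T-09"]))  # deviation -> False
```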
Figure 2. Bonded warehouse.
4.4. Technologies for the Virtualization of Systems
Some firms are immersed in a process of systems virtualization using different technologies. They try to manage the firm's networks as a platform for collaboration that integrates different business applications, communication devices, and Web-based tools while respecting security policies and regulatory compliance. The virtualization of systems aims to help firms accelerate their business processes and increase their productivity. As an example, in the service industry, some pilot applications deal with remote presence, where one can access an "expert on demand" to request advice quickly. IT is the basic tool of information systems, one of their main components. Besides helping the firm attain its objectives, IT offers the firm's information systems the capacity to innovate and respond to changes of different natures. That is to say, IT is an important tool for information systems and can be considered a strategic resource, as it can offer a competitive advantage to the firm (Porter and Millar, 1985; Sethi and King, 1994; Powell and Dent-Micallef, 1997; Arkin and Fletcher, 2007). The combination of the elements of IT that an organization decides to acquire or create constitutes the technological platform that makes the firm's information system a tangible reality. Each firm is therefore responsible for deciding and managing its technological platform and the objectives it wishes to reach with the firm's information system.
Obtaining competitive advantages from the application of information and communication technologies is not an easy task (Clemons and Hitt, 2004; Nevan and Basu, 2008). We must take into account that technology, no matter its properties, becomes a barrier that impedes the achievement of the firm's objectives and does not offer the expected returns on investment if it is not properly adapted to the organization's needs (Hammer, 1990). However, on some occasions we can find organizations adapting to a group of technologies to survive or to maintain their competitive advantages; this demands a proper capacity for redesign and change in the organization. Interorganizational systems are one of the many examples showing how IT resources can transform the organization. An interorganizational system is an information system shared among a group of companies. Thanks to the great connectivity that today's technologies allow, this kind of system can interlink a firm with the main elements of its environment (customers, providers, distributors, and partner firms) to reach the firm's objectives (White et al., 2005).

5. The Components and Basic Functions of the Information Systems

Taking into account that information is a strategic resource for the firm, and that IT plays a key role in supporting the firm's competitive strategies or, in extreme cases, in assuring its survival in the market, the firm's information system also acquires a strategic importance that obliges firms to integrate the development of their system and technology with corporate strategy (Gurbaxani and Whang, 1991; Katz and Rice, 2002; Davenport, 2006). This strategy will be the one that makes information needs explicit. Apart from its strategic character, one of the aspects that best identifies the firm's information system is the presence of the firm's objectives in it. The ultimate objective of any information system is to satisfy the information needs of the users for whom the system has been created. In the case of the firm's information system, this objective takes concrete form in providing information for the analysis of problems and for planning, control, and decision-making at all levels of the firm. In this sense, a firm's information system must (White et al., 2005):

• Promote the development and coordination of the rest of the activities in the firm's value chain.
• Contribute to the attainment of the firm's objectives.

To reach these objectives, an information system must develop three main functions: to obtain, process, and distribute data and information (Fig. 3).
Figure 3. The functions of an information system: inputs (data registering and coding), processing (ordering, classifying, converting, analyzing, aggregating, accumulating), and outputs (information transmission), all within the firm's environment.
5.1. The Obtaining of Data

Obtaining the information to be treated afterwards implies counting on mechanisms that are able to put together the relevant data from the different available sources, accurately and at a reasonable cost (Cash et al., 1992). We must not forget that today we count on a great number of information sources. However, much of this information is often not readily available and, when it is, it is not at zero cost. It is therefore fundamental for the firm and its information system to count on an infrastructure that supports the obtaining of information, as it secures good-quality primary sources of data. The information and the external data can be obtained, as already mentioned, from the different sources discussed in the previous chapters. In the case of internal information, coming from inside the firm, it is better that the information systems are properly integrated and that the data are collected and registered only once: often the same data are used by various information processes, and duplicate registration could produce inconsistent information. Two different phases allow the obtaining of the information: registering and codifying. Registering is the action that captures the data in the system; for that, it is necessary to choose the proper data in the optimal format. It is here that codification becomes important. With codification, we translate the data and the information into a condensed and normalized format that allows better transmission and understanding along the system. That is to say, with proper coding we are able to ensure a good presentation and registering of the information in the system. Before the initial data are processed, they are stored in the system. The process of registering necessarily implies choosing not only the logical supports but also the physical ones where this information will be maintained. This is the case of the firm's database systems, where the firm's main data and information reside.
The time that the data are kept will depend on the kind of data we are dealing with. It is probable that the data extracted from automatic teller machines are processed daily, while customer purchases may be processed monthly. In any case, processing the data does not necessarily imply that they are lost or disappear from the system. Therefore, choosing a proper warehousing tool is one of the most important decisions in information systems design. Besides, when selecting the warehousing systems and methods among the different alternatives available to the firm, we must assure the availability of the information at high levels of security and reliability.

5.2. The Processing

Processing the data means manipulating and operating on them according to rules, methods, and procedures that give them significance and more added value in the firm. The results of this processing support decision-making, help the analysis of the new information, and even promote the generation of knowledge inside the firm. In this function, there are some vital elements, such as the software or applications that allow the processing of data (ordering, classifying, and relating them) and the users themselves, whose knowledge and experience allow these same operations and deliver the value to the firm.

6. The Diffusion Function

Transmitting the information obtained to the people and processes that need it is as important an issue as the rest of the basic functions in the system. If the processed information does not reach its destination in the required form and time, the system will not accomplish its objectives. Carrying out the diffusion and transmission of information requires proper systems and communication networks that allow quality information to reach all the people who need it at the right moment. Once more, assuring an effective and practical distribution process is one of the basic requirements of the information system. For the development of its functions, a firm's information system based on information and communication technologies counts on the following basic elements:

• Technical and material resources. These are the work tools of the system, composed of the hardware, the software, and the communication systems.
• Human resources. These are the people who operate the technical means and use the information produced. Today, there are many kinds of people who in one way or another are linked with the information system: users, system administrators, programmers and analysts, etc. We could affirm that almost all the people in the organization are related to the information system or are part of it.
• The information and the data: they are the main assets to exploit. There must be a flow of information that circulates properly and arrives at every place in the organization. The flow can also be stored, as multiple information processes need to access it for the proper working of the firm.
• Procedures, methods, and norms that control the informative processes and define who does what with the data and information.

7. Types of Information Systems

As we have already mentioned, classifying the information systems existing in a firm is not an easy task, and it is often the result of looking for a logical classification aimed at identifying, recognizing, and analyzing them. This is due to the great variety of information systems that have appeared in firms and organizations in general, which in turn is mainly motivated by two different factors. On one hand, the search for process automation and the evolution of applied technology have spawned new and better systems that substitute the previous ones according to new usages, techniques, and available devices. Sometimes, besides substituting the previous ones, new systems have also been implemented to answer different needs at higher levels of responsibility. On the other hand, organizations are diverse, and so are their information systems. Not all of them evolve in the same way or have the same access to technology. For these reasons, firms demand different kinds of information systems. Besides, inside an organization there are different interests, objectives, and activities that differ in their degree of specialization and in the technologies required. In the following part, we try to offer a common vision used when classifying the different information systems.

8. A Classical Focus: The Organizational Pyramid and Functional Areas

The classification that we propose now is summed up in Fig. 4. It is a universally accepted one and is widely used in the area of management of information systems. Besides, it attends to the logical evolution that the different systems follow. It is a further step in this evolution that will allow us to speak about the firm's information systems from a more modern perspective, not just according to the satisfied needs or the basic functions they offer, but rather according to the objectives of adapting to the changing conditions of the environment and competition.

8.1. The Systems at an Operational Level

Transaction processing systems (TPS) register and automate the daily transactions of a firm, which are frequent and repetitive. For this reason, they are essential in business. These systems were the first created when the computer
Figure 4. Levels of management and information needs: EIS at the strategic-management level, DSS at the tactical-management level, MIS at the operations-management level, and EDP at the operational level. (Adapted from Laudon and Laudon, 2004.)
was initially introduced in firms in the 1960s, mainly because they are easy to program. The difference between manual and automatic transaction handling was key, and its impact, incredible at the time, translated mainly into cost reductions. It produced an increasing interest in the application of these tools in the management of firms. Besides, they produce immediate, highly precise, and detailed information for the rest of the systems, and they can help managers at a low or medium level control the elemental, daily activities of a firm. The great majority of firms today use computers and communication networks for these systems.

8.1.1. TPV (POS): Point-of-sale software solutions

These store solutions attend to the needs of retailers. Whether retailers operate via company-owned stores, franchises, concessions, or independent stores, they can manage and optimize all their back-office functions (stock management, merchandise reception, product search, supplier orders, etc.) and front-office functions (payment processing, CRM, loyalty cards, customer services, promotions, etc.), as well as staff scheduling. As an example of a TPS, we can cite the TPV systems (terminals at the point of sale) used in the retail industry. They are an evolution of the automatic teller machines, where the vendors directly manage the sales of products to the final customer. Besides, in this kind of system, some other applications performing additional functions that support the management of the sales can be applied:
• Management of the stock at the point of sale
• Ordering of goods
• Cash closing
• Management of discount opportunities and other ticket operations
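As a hedged illustration of how a TPV/POS terminal ties these functions together, the following minimal sketch processes one sale with stock control and a discount; all product codes, prices, and names are hypothetical.

```python
# Minimal, hypothetical sketch of a point-of-sale transaction:
# stock management, discount handling, and ticket generation.

stock = {"A100": 12, "B205": 3}          # product code -> units at the store
prices = {"A100": 19.90, "B205": 4.50}   # product code -> unit price

def sell(code, qty, discount=0.0):
    if stock.get(code, 0) < qty:
        raise ValueError(f"not enough stock for {code}")
    stock[code] -= qty                    # stock management at the point of sale
    gross = prices[code] * qty
    net = gross * (1.0 - discount)        # discount opportunities
    return {"code": code, "qty": qty, "gross": gross, "net": net}

ticket = [sell("A100", 2, discount=0.10), sell("B205", 1)]
total = sum(line["net"] for line in ticket)
print(f"TOTAL: {total:.2f}  (remaining stock: {stock})")
```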
8.2. The Systems at a Tactical Level

MIS allow the processing of data by offering reports at the tactical level, where plans and programs are developed and decisions are more managerial than routine. They offer aggregated and summarized information. It is precise and almost immediately produced, although this depends on how fast the transaction processing systems that feed the MIS are. They are not very flexible, since the reports they offer are quite standardized according to predefined procedures, and they lack further capacity for analysis. They are the logical evolution of the TPS. In those systems, the important thing was the processing capacity of the hardware equipment, producing more operations in less time. In the case of the information systems at a tactical level, the weight shifts to the software, since a greater treatment of the data is required to enable the managerial and planning tasks. Inside this group, we can also find the DSS, positioned between the tactical and strategic levels. Turban and Aronson (2001) define the DSS as computer-aided information systems that are able to combine models and data to solve non-structured problems by making use of a user-friendly interface. These systems appear when the technical systems become more complex, the environments change more rapidly, and more powerful technology becomes widely available at the user level. This external environment, joined to a more comfortable internal one (thanks to the previous information systems, many firms have their more routine tasks solved), changes the focus of information management: managers start to look at the way technologies can help them process much more information and carry out their analyses faster. This is how DSS appear. Some of the characteristics that define these systems are:
• They support semi-structured or less-structured decision-making. As far as the decisions are not clearly specified, they present levels of uncertainty. This means that DSS can be used at a tactical or strategic level.
• DSS offer users the possibility of making the best of the internal and external information warehoused in the databases that feed the TPS and MIS. With these decision-making systems, we can manipulate the data by interlinking them according to different criteria. At the same time, they present the resulting information in such a way that we can easily visualize trends and positions in relation to an original plan, with the main objective of providing the elements of judgment in a fast, easy way.
• The alternatives offered by DSS thanks to this data manipulation are the result of a set of analyses: statistical tools, sensitivity analysis, scenarios, representations, graphical transformations, etc.
• These systems are characterized by their orientation to the resolution of problems coming from the final user. This means that they are relatively simple to
manipulate and to personalize, with rather intuitive and interactive interfaces that allow the manager to develop his or her own models of analysis.
• However, they are not autonomous systems. They do not offer solutions independently of the user, nor are they the result of automating the decision-making processes of experts: they widen the capacities of users but do not substitute for them in the decision-making process (Turban and Aronson, 2001).
• They share some similarities with expert systems. Expert systems are software applications that can offer advice to a manager beyond helping him or her manipulate information in order to come to a decision. They also access internal and external database systems and, more importantly, offer information on the way experts on a certain problem have solved similar ones. Expert systems and DSS are both defined and implemented for concrete knowledge areas and problems, although DSS are of wider application.
• As for any information system, their components can be divided into two groups: hardware and software. DSS typically use personal computers with good external and internal communication capacities with the rest of the firm's information systems; this connection with the communication networks gives them access to database systems.
• The software is the component most readily identified with an information system, and it is normal to recognize DSS by their applications. The applications that comprise a DSS are called DSS generators and contain three big modules: data management, model management, and user interaction.
Among the DSS, we can find systems that perform the same functions but support the group decision-making process. They are called group decision support systems (GDSS), and the only thing that separates them from the ones mentioned above is that they include two more components: group work tools and specific software that allows the coordination of the actions coming from each of the decision makers in the group. DSS have two main qualities that make them profitable for firms in terms of efficiency and cost savings:

• The first is speed. Data collection and response are time-consuming, and a human being is limited when collecting the information needed to build scenarios and models close to reality. DSS increase the possibility of working with more information and are able to build models and more complex analyses rapidly.
• The other quality is simplicity. The software and hardware components are highly personalized and are based on processing with personal computers, which allows users to self-manage the decision-making process easily.

As an example of DSS, we can mention data warehousing systems (Fig. 5). They are systems that organize and process information in a company coming from different internal or external sources of information.
Figure 5. A DSS system: data from any data source feeding query design and Web application design tools for one-click reporting.
Mainly, they perform four functions:

• Collect the data from the different sources of information
• Organize the warehousing of the data
• Define the queries according to the kind of required report
• Execute the queries to present the reports when they are accessed by the user
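A minimal sketch of these four functions, using Python's standard sqlite3 module as a stand-in warehouse; the table layout and the sample records are invented for illustration.

```python
import sqlite3

# 1. Collect data from different sources (here, two invented feeds).
sales_feed = [("2009-01", "north", 120.0), ("2009-01", "south", 95.5)]
returns_feed = [("2009-01", "north", 4.0)]

# 2. Organize the warehousing of the data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (period TEXT, region TEXT, sales REAL, returns REAL)")
for period, region, sales in sales_feed:
    returns = sum(r for p, g, r in returns_feed if (p, g) == (period, region))
    db.execute("INSERT INTO facts VALUES (?, ?, ?, ?)", (period, region, sales, returns))

# 3. Define the query according to the kind of required report.
report_query = "SELECT region, SUM(sales - returns) FROM facts GROUP BY region"

# 4. Execute the query to present the report when the user asks for it.
for region, net in db.execute(report_query):
    print(f"{region}: net sales {net:.1f}")
```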
8.3. The Systems at the Strategic Level

At this level, we find the Executive Information Systems (EIS). These systems draw on the information stored by the other systems in the organizational pyramid; they also incorporate external data, which allows them to filter and make use of critical data. The EIS perform a function similar to that of the DSS but at a higher hierarchical level. At this level, decisions are unstructured and depend to a great extent on the experience and heuristic behavior of the manager. However, their capacity to help managers lies not in the use of more or less complex models, but in their capacity to condense data from various sources in a visual way. The most interesting thing, then, is not the analytical capacity they offer but the faculty they possess to show the reality of what happens inside and outside the organization from different perspectives. This way, they help managers at a strategic level in their critical functions: to establish firm control of the whole organization no matter its size, to make a strategic plan, and to solve the crises and problems the firm faces.
Figure 6. A balanced scorecard schema: the strategy is discussed and updated; the balanced scorecard links the budget (consolidation and reporting), initiatives and programs, inputs (resources), and outputs (results).

Figure 7. Balanced scorecard of processes and systems applications in a firm: a strategic portfolio of information (transformation, analytical, and transactional applications over the technological infrastructure) is mapped against client management, operations management, and strategic processes, with a key grading each application as OK, small improvement, new development (on schedule or delayed), big necessary improvements (no action), or new necessary application (no action).
As an example of an EIS, we can cite the balanced scorecard system. Balanced scorecards try to measure the behavior of a company against its required strategic objectives (Fig. 6). For this, a group of ratios associated with the different business areas needs to be established. The balanced scorecard system gives top management information relevant to the evolution of the established ratios and their deviation from the pre-established objectives. These systems work by exception, establishing alarms according to the data collected and the defined objectives (Fig. 7).
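A minimal sketch of that exception-based behavior: each ratio is compared with its pre-established objective, and an alarm is raised only when the deviation exceeds a tolerance. Ratio names, targets, and tolerances are hypothetical.

```python
# Hypothetical balanced-scorecard alarms: work by exception,
# raising an alarm only when a ratio deviates from its objective.

objectives = {                       # ratio -> (target, tolerated relative deviation)
    "on_time_delivery": (0.95, 0.02),
    "customer_retention": (0.80, 0.05),
    "return_on_assets": (0.12, 0.10),
}
collected = {"on_time_delivery": 0.91, "customer_retention": 0.82, "return_on_assets": 0.11}

for ratio, (target, tolerance) in objectives.items():
    deviation = (collected[ratio] - target) / target
    if abs(deviation) > tolerance:
        print(f"ALARM {ratio}: value {collected[ratio]:.2f}, "
              f"deviation {deviation:+.1%} from objective {target:.2f}")
```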
8.4. The Office Systems

The office systems constitute the main tool that allows any user in the organization to carry out less structured tasks involving the treatment of information, often related to office work. They are identified with software packages, more or less generic, of high compatibility and communicability, and easy to manipulate. They have not been included in Fig. 4 because they are dispersed across the whole organization and are increasingly difficult to identify and separate from the rest of the systems in the firm. These systems comprise individual applications for the user as well as applications that allow interaction with other users. Today, these tools, together with some other systems such as DSS and GDSS and some techniques such as data mining, are considered to have enough capacity to identify, generate, and distribute the knowledge inside the organization. An example is the corporate tool for managing email (Microsoft Outlook, Lotus Notes, etc.).

9. A Perspective Today: The Integration from Inside and the Coordination with the Environment

The different information systems that we have referred to must be related to each other. The information they produce constitutes the raw material for the upper decision-making levels. If there is no proper communication among the systems, the information flows are broken or do not arrive where needed. If these systems are conceived, created, and installed in an isolated and independent way, thinking only of the decision level they serve or the functional activities they are created for, they lose a great part of their capacity to support the entrepreneurial tasks and objectives. For this reason, the integration of the systems is needed. The concept of systems integration is important for understanding the current trend in the development of information systems. To integrate means to join, to put together a system or group of elements related to each other. In managerial terms, integrating the information systems means that the information must be coordinated and that the systems do not pursue different objectives:
• The hardware resources of the different systems are compatible.
• The same data are accessible by the different systems.
• The systems understand each other at the software level.
• They are properly communicated.
• Modifications in one system do not negatively affect the rest.
• The different systems can grow at the same time without losing cohesion and consistency.

A firm that wants to offer a fast response to a new customer need will have to rely on an information system that allows the whole organization to be involved in the response by providing continuous flows of information that support deciding and acting. For that reason, counting on an integrated and coordinated information system is necessary for the organization, as it allows the firm to work as a single flexible tool during environmental changes. Integrating systems costs money and involves tedious, hard processes. When the firm's starting point is a set of different, old, and isolated systems and hardware architectures, it must plan the changes in the process carefully; otherwise, it will fail to meet its expectations of reaching the competitive advantages promised by the new technology. In most organizations, there are processes that cross the barriers of functional areas and are heavily supported by the information system. Often, these are the processes that differentiate the performance of one organization from another. If the information systems were only able to help and sustain the objectives of the various firm functions, they would impede the organization from prospering and relating to its environment. For these reasons, it is important for the manager to weigh the advantages and disadvantages of more ambitious but more effective solutions, such as the acquisition or design of ERP or CRM systems. These systems answer not only the functional needs of the various decision levels in the organization, but especially the global needs of flexibility and coordination that, together with innovation, are the most important tools to compete, to collaborate, and to answer the new situations firms are facing.

9.1. The Enterprise Resource Planning Systems

ERP systems are integral information systems that allow the execution and automation of business processes in all the functional areas in a coordinated way. These systems need an IT platform common to the whole firm, and they materialize in software packages halfway between tailor-made software and standard applications. Generally speaking, these solutions offer a proper use of resources and a cost reduction in comparison with the development of independent information systems. Besides, they are cheaper than building and implementing what we call middleware, the software bridges among the various systems and platforms that allow the final integration. Two of the key aspects that define an ERP are:

• The integration, in two different senses: the integration of information and the integration of business processes. With an ERP system, only a single data
warehouse is created; it feeds the data of any business process and grants timely access to any other process that requires them. Potential redundancies and incongruences are eliminated from the information. Besides, the standard software tools and the common technological platform the system requires allow complete compatibility and understanding among the various functions while supporting all the business processes. Apart from this, a programming environment is offered with a great capacity for developing new applications for new problems.
• The modularity: the application offers the possibility of a progressive implementation through modules, almost always identified with functional areas, without losing integrity or functional independence.

Apart from all this, ERP systems are able to provide information for the decision-making process at all managerial levels in an organization. Increasingly, they offer firms more possibilities of integrating the firm's information systems with environmental agents: providers, distributors, final customers, or other firms (Fig. 8). As with any information system created or implemented in an organization, ERP has its problems. ERP projects involve important changes in the way firms operate, and all the stakeholders must be informed and involved in the process. They also require strong investments in technology and a clear definition of objectives. Due to their hybrid nature between tailor-made software
Figure 8. An ERP system: a central ERP core surrounded by modules for supply chain control, materials management, financial accounting, production planning, sales and distribution, solutions development, communications with remote stations, and other modules.
and standardized software, they are not complex to manipulate, but proper training of the people interacting with them is needed, and a customization of the software to fit the specific characteristics of the firm is required to obtain an effective result. We can find in the market a great variety of ERP solutions for different industries (banks, insurance companies, fashion, groceries, building, health care, etc.). Although the areas implemented in an ERP system depend on the chosen software provider and the industry where the firm operates, the most frequent areas are:
• Sales
• Management of stocks
• Control of the supply chain
• Financial and control management
• Production planning and control
• Management of the human resources
• Management of reports
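The two key aspects discussed above, a single shared data store and modularity, can be sketched in a few lines; the module names and the record layout are invented, and a real ERP package is of course far richer.

```python
# Hypothetical sketch: ERP modules sharing one data store, added progressively.

shared_store = {"orders": [], "stock": {"W1": 100}}   # single data warehouse

class SalesModule:
    def __init__(self, store):
        self.store = store
    def take_order(self, item, qty):
        self.store["orders"].append({"item": item, "qty": qty})

class MaterialsModule:
    def __init__(self, store):
        self.store = store
    def reserve(self):
        for order in self.store["orders"]:            # same data, no duplication
            self.store["stock"]["W1"] -= order["qty"]

erp_modules = [SalesModule(shared_store)]             # start with one module
erp_modules.append(MaterialsModule(shared_store))     # add another progressively

erp_modules[0].take_order("A100", 5)
erp_modules[1].reserve()
print(shared_store)   # both modules see one consistent view of the data
```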
9.2. The Customer Relationship Management Systems

CRM systems have an external focus. They are implemented to obtain a complete vision of the external flows of information, those that have most to do with customers. CRM systems are not incompatible with ERP systems; in fact, they are complementary, as they represent the firms' interest in extending the internal information obtained with an ERP system beyond the firm's boundaries. CRM is a complete way of understanding the relationships with customers: customers are considered a strategic asset to maintain and exploit in the long term. From the technical point of view, CRM is a group of technological tools, mainly software, that allows not only the consolidation of all the needed information on customers to improve decision-making, but also offers the customer greater access to the company. This means aligning all the business processes the firm is involved in and making them more profitable. As with other information systems, its implementation in the firm will lead to a process of change. This process of change will affect in a special way not only the business processes, but also the way the people in the organization relate to the customer. Among the multiple objectives that firms pursue with CRM systems, we can stress the following:

• A unified vision of the customer, so that the whole company can track the customer and the acquired product when a claim is submitted, or track potential customers when a new need appears.
• The integration of the sales, marketing, and service functions.
• Stronger customer loyalty.
• Identifying new business opportunities.
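The first objective, a unified vision of the customer, can be illustrated with a small, hypothetical sketch that consolidates sales, service, and marketing events under one customer identifier, so that a claim can be traced back to the acquired product; all identifiers and events are invented.

```python
# Hypothetical sketch of a unified customer view: events from sales,
# service, and marketing are consolidated under one customer ID.

from collections import defaultdict

customer_view = defaultdict(list)

def record(customer_id, source, event):
    customer_view[customer_id].append((source, event))

record("C-042", "sales", "bought product P-7 on 2009-05-02")
record("C-042", "service", "claim opened for product P-7")
record("C-042", "marketing", "responded to spring promotion")

# When a claim arrives, the whole company sees the same history:
for source, event in customer_view["C-042"]:
    print(f"[{source}] {event}")
```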
Some of the components that allow reaching these objectives are as follows:

• Electronic applications for the sales force or distribution channels, which help the sales and marketing people who work outside the office. They can provide contact lists and business intelligence, generate proposals, etc.
• Tools to improve strategic planning and to coordinate and measure the marketing efforts.
• Applications for customer support and service, such as permanent call centers, or tools that make direct customer access to part of the information system easier (Fig. 9).

As an example of CRM, we can cite the systems that manage the sales force. These systems allow the marketing people in a firm to manage their work from mobile devices without having to return to the office. This way, the management of accounts can be made faster. From these devices, they access the relevant information they need for their daily work. The objectives of incorporating these systems in the company can be studied from different points of view:

• Organization
◦ An organization better oriented to the customer, thanks to the reduction of administrative tasks that do not offer added value.
◦ The knowledge coming from the customer belongs to the company and is shared by all the departments.
◦ A reduction of the risk produced by the rotation of the sales force.
Figure 9. A CRM system: front-office applications (customer service, marketing and sales automation, mobile sales, field service) and back-office systems (ERP/ERM, supply chain management, order management and promising, legacy systems) connected through operational, analytical, and collaborative CRM layers (business operations, performance, and collaboration management; data warehouse with customer, customer-activity, and product data marts; campaign and category management) and customer-interaction channels (voice, conferencing, e-mail, Web conference, fax/letter, direct interaction).
• Processes
◦ Elimination of administrative tasks
◦ Promotion of the entrance of new customers
◦ Transmission of information related to products and prices to the marketing people
◦ Analysis of budgets and production of reports
◦ Planning and control of the marketing activity
• People
◦ Sales people are provided with a tool that allows them to prepare their commercial actions with customers
◦ Marketing managers can count on better data on the situation
• Technology
◦ Integration of data between the corporate systems and the mobile devices

10. Conclusions

Information and communication technologies have become one of the main elements in the decision-making process of firms operating in a global context. We have described a group of information and communication technologies and how they can assist different types and levels of the decision-making process in organizations. As we have shown, the kinds of decisions facing businesses vary considerably. Structured decisions are repetitive, routine decisions. Unstructured decisions are non-routine ones, and there is no universal procedure for producing them. Between these two types we find "semi-structured decisions," in which only a part of the problem has a clear answer according to a recognized procedure. Accordingly, we have described different groups of information and communication technologies that attend to decisions of different natures in organizations. For example, we have stressed the main role of data warehousing and ERP systems in helping to share information essential for structured decisions. Data mining and artificial intelligence techniques are essential tools for identifying trends. EIS and some types of DSS help the decision-making in unstructured, strategic-level decisions.

References

Andreu, R, J Ricart and J Valor (1993). Strategy and Information Systems. New York: McGraw Hill.
Arkin, AP and D Fletcher (2007). Fast, cheap and somewhat in control. Genome Biology, 7(8), 1141–1146.
Boisot, M (1999). Knowledge Assets: Securing Competitive Advantage in the Knowledge Economy. Oxford: Oxford University Press.
Cash, J, W McFarlan and J McKenney (1992). Corporate Information Systems Management: The Issues Facing Senior Executives. New York: Irwin.
Clemons, EK and L Hitt (2004). Poaching and the misappropriation of information: Transaction risks of information exchange. Journal of Management Information Systems, 21(2), 87–107.
Davenport, T (2006). Information technologies for knowledge management. In Knowledge Creation and Management: New Challenges for Managers, Ichijo, K and I Nonaka (eds.), Oxford: Oxford University Press.
Davenport, T, L Prusak and B Strong (2008). Managing knowledge: One size doesn't fit all. Wall Street Journal, 16, 35–43.
De Pablos, C (2000). An Empirical Relationship Between the Investment in Information Technologies, Organisational Variables and Firm's Performance: A Study Applied to the Spanish Insurance Industry. Madrid: UCM.
De Pablos, C (2004). Management Information Systems in the Organisation. Madrid: ESIC.
De Pablos, C (2006). Management of the Information Systems: An Integral View, 2nd Ed. Madrid: ESIC.
Eldon, YL (1997). Perceived importance of information system success factors: A meta analysis of group differences. Information and Management, 32(1), 15–28.
Erden, Z, G von Krogh and I Nonaka (2008). The quality of group tacit knowledge. The Journal of Strategic Information Systems, 17(1), 4–18.
Gordon, E and M Tarafdar (2007). Understanding the influence of information systems competencies on process innovation: A resource-based view. Journal of Strategic Information Systems, 16(4), 353–392.
Gordon, E, M Tarafdar, R Cook, R Maksimoski and B Rogowitz (2008). Improving the front end of innovation with information technology. Research Technology Management, 51(3), 50–58.
Gurbaxani, V and S Whang (1991). The impact of information systems on organisations and markets. Communications of the ACM, 34(1), 59–73.
Hammer, M (1990). Reengineering work: Don't automate, obliterate. Harvard Business Review, 68(4), 104–112.
Katz, JE and RE Rice (2002). Social Consequences of Internet Use. Cambridge, MA: The MIT Press.
Laudon, KC and JP Laudon (2004). Management Information Systems, 8th Ed. New York: Prentice Hall.
Nevan, J and R Basu (2008). Project management and six sigma: Obtaining a fit. International Journal of Six Sigma and Competitive Advantage, 4(1), 81–94.
Nonaka, I (1995). The Knowledge Creating Company. New York: Oxford University Press.
Phambuka-Nsimbi, C (2008). Creating competitive advantage in developing countries through business clusters: A literature review. African Journal of Business Management, 2(7), 125–130.
Parker, CH and TH Case (1993). Management Information Systems: Strategy and Action, 2nd Ed. Watsonville: McGraw Hill.
Porter, M and VE Millar (1985). How information gives you competitive advantage. Harvard Business Review, July–August, 149–170.
Powell, TC and A Dent-Micallef (1997). Information technology as competitive advantage: The role of human, business and technology resources. Strategic Management Journal, 18(5), 375–405.
Rajan, M and R Rajan (2007). Knowledge-driven change in academic organizations: A knowledge management perspective. International Journal of Management, 43(2), 56–63.
Sethi, V and WR King (1994). Development of measures to assess the extent to which an information technology application provides competitive advantage. Management Science, 40(12), 1601–1627.
Shapiro, C and HR Varian (1999). Information Rules: A Strategic Guide to the Networked Economy. Boston, MA: Harvard Business Press.
Soret, I, C De Pablos and JL Montes (2008). Efficient consumer response (ECR) practices as responsible for the creation of knowledge and sustainable competitive advantages in the grocery industry. Issues in Informing Science and Information Technology, 5, 601–621.
Turban, E and JE Aronson (2001). Decision Support Systems and Intelligent Systems, 6th Ed. London: Prentice-Hall.
Turban, E, D Leidner, E McLean and J Wetherbe (2007). Information Technology for Management: Transforming Organisations in the Digital Economy. London: Prentice Hall.
Walsham, G (1993). Interpreting Information Systems in Organizations. New York: Wiley.
White, A, EM Radi and M Mohdzain (2005). The role of emergent information technologies and systems in enabling supply chain agility. International Journal of Information Management, 25(5), 396–410.
Biographical Notes

Carmen de Pablos has been a Professor of Business Administration at the Rey Juan Carlos University in Madrid, Spain since 1994. She specializes in the impact of information technologies on organizational systems and entrepreneurship, where she conducts her main research. She has presented at conferences in different international venues and has published in specialized journals. She has also worked as a consultant in the area of Information Systems management at Primma Consulting.

Mónica de Pablos has been a part-time Assistant Professor of Business Administration at the Rey Juan Carlos University in Madrid, Spain since 2004. She has been the CIO of the InSitu firm, and has extensive experience in the management of information systems in consultancy firms since 1996.
Chapter 33
Preliminaries of Mathematics in Business and Information Management MOHAMMED SALEM ELMUSRATI Head of Telecommunication Engineering Group, Faculty of Technology, University of Vaasa, PL700, 65101 Vaasa, Finland [email protected] mohammed [email protected] http://lipas.uwasa.fi/∼moel/
Mathematics is an important tool for the modeling, analysis, and evaluation of several branches of business and information management. The main goal of this chapter is to offer a fast but informative revision of some concepts of applied mathematics. Several topics are discussed, such as optimization theory (single and multi-objective), estimation theory, and game theory. These topics have been selected because of their influence on business management. Moreover, several examples are given to clarify some important principles.

Keywords: Estimation theory; game theory; optimization theory.
Mathematics is the language used to quantify things in our life, such as ideas and concepts, as well as to model real behaviors. This is true for almost all branches of life, from engineering to economics to social relations. The numbers in their abstract form may not be that interesting; their importance is realized when they are associated with something that influences our life in all its aspects and branches. Assume that one wants to submit an offer to win a contract for a certain project. He or she needs to take many issues into account: precisely estimating the total costs of the project, the required resources, and the possible risks and their costs; maximizing the gross profit without losing the project (if the offer is too expensive, the chance to win the project is reduced); managing the complex and different entities in the project; and also considering the execution time constraints of the project. Mathematics plays a main role in all stages of such a project. When we say estimation, we mean estimation theory. Maximizing or minimizing means optimization theory. Optimum management means different mathematical approaches, such as operational research and information theory. To act optimally in a competitive environment (where others apply to get the same project, in your relations with the buyer, with subcontractors, or with your employees),
you may need to utilize game theory. With game theory you will be able to act with the optimum strategy which maximizes your payoff or at least minimizes your losses. Usually we have many objectives (a multiobjective problem) that we want to achieve. For example, we want to:

• Maximize our gross profits
• Minimize our risks
• Maximize our chances to win the project
• Maximize the quality of the project
• Maximize the information flow between project entities
We can see that the first objective conflicts with the third, because if the project is too expensive the chance of losing the project increases. The first objective also conflicts with the fourth, since increasing the project quality will increase the total costs, which reduces our gross profit. How does one handle such conflicting objectives? There is an interesting branch of optimization theory known as multiobjective optimization which shows how to find a feasible solution to such problems (a small numerical sketch follows the topic list below). Information flow between related project entities is rather important for enhancing productivity as well as efficiency. Examples of these important interrelations are the human–machine, machine–machine, and human–human relations. No doubt, when the information flow between related project entities is poor, performance will be reduced and project execution prolonged. Mathematics can be used to find any bottleneck of the information flow in a project at any stage or level. It may also propose some solutions to increase the information flow or the throughput. In this chapter we will cover the following topics very briefly:

1. General preliminaries
2. Introduction to optimization theory
3. Introduction to multiobjective optimization
4. Introduction to estimation theory
5. Introduction to game theory
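Returning to the conflicting objectives above, a minimal weighted-sum sketch (the price grid, the linear win-probability model, the normalization, and the weights are all invented for illustration) shows how scalarizing two objectives yields one compromise offer price.

```python
# Hypothetical weighted-sum scalarization of two conflicting objectives:
# expected profit grows with the offer price, the win probability shrinks.

def win_probability(price):          # invented linear model of the buyer
    return max(0.0, 1.0 - price / 200.0)

def expected_profit(price, cost=80.0):
    return (price - cost) * win_probability(price)

def scalarized(price, w_profit=0.7, w_win=0.3):
    # weighted sum of (roughly normalized) objectives;
    # the weights express the decision maker's trade-off
    return w_profit * expected_profit(price) / 40.0 + w_win * win_probability(price)

prices = range(80, 201, 5)
best = max(prices, key=scalarized)
print(f"best offer price {best}: expected profit {expected_profit(best):.1f}, "
      f"win probability {win_probability(best):.2f}")
```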
We will introduce some mathematical concepts (in an abstract way) related to the above issues. This can serve as a fast reference for the readers of this book. However, there will not be any detailed applications of these concepts in either business or information management.

1. General Preliminaries

In this section, we will introduce some very general mathematical preliminaries which can be a useful reminder. The rate of change of a function f with respect to its independent variable x is known as differentiation: if y = f(x), then the rate of change of the dependent variable y with the change of the independent variable x is denoted dy/dx.
From the definition, one may express the rate of change as

$$\frac{dy}{dx} = \lim_{\Delta \to 0} \frac{f(x+\Delta) - f(x)}{\Delta} \qquad (1)$$
Most of the well-known differentiation formulas and rules are derived from the previous limit formulation. For example, if $y = f(x) = x^n$, then

$$\frac{dy}{dx} = \lim_{\Delta \to 0} \frac{(x+\Delta)^n - x^n}{\Delta}$$

and since $(x+\Delta)^n = x^n + n x^{n-1}\Delta + O(\Delta^2)$,

$$\frac{dy}{dx} = n x^{n-1} + \lim_{\Delta \to 0} \frac{O(\Delta^2)}{\Delta} = n x^{n-1}$$
$O(\Delta^2)$ means that the remaining terms are of order $\Delta^2$ or higher; this explains why the second term of the previous formula vanishes in the limit. The same procedure can be followed to prove the derivative rules of other functions, such as trigonometric and logarithmic functions. Remember that not all functions are differentiable. What are the necessary conditions for a function to be differentiable? We leave the answer to this question to the readers. In general, a function can have more than one independent variable, such as $y = f(x_1, x_2, \ldots, x_n)$; in this case, the function can be differentiated with respect to any of the independent variables, $\partial y/\partial x_i = \partial f(\mathbf{x})/\partial x_i$, $i = 1, \ldots, n$. The bent d ($\partial$) is used here to indicate partial differentiation. Moreover, bold $\mathbf{x}$ denotes the vector of all independent variables, i.e., $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$, where the symbol $T$ refers to the transpose, i.e., it indicates a column vector. This makes the formulas more compact. We can differentiate scalars with respect to vectors, and the result is another vector:

$$\frac{df(\mathbf{x})}{d\mathbf{x}} \triangleq f_x = \left[ \frac{\partial f(\mathbf{x})}{\partial x_1} \;\; \frac{\partial f(\mathbf{x})}{\partial x_2} \;\; \cdots \;\; \frac{\partial f(\mathbf{x})}{\partial x_n} \right]^T \qquad (2)$$
The previous vector is called the gradient of $f$. The total differential $df$ with respect to all independent variables is given by

$$df = \left[ \frac{\partial f(\mathbf{x})}{\partial x_1} \;\; \frac{\partial f(\mathbf{x})}{\partial x_2} \;\; \cdots \;\; \frac{\partial f(\mathbf{x})}{\partial x_n} \right] \begin{bmatrix} dx_1 \\ dx_2 \\ \vdots \\ dx_n \end{bmatrix} = \sum_{i=1}^{n} \frac{\partial f(\mathbf{x})}{\partial x_i}\, dx_i \qquad (3)$$
Moreover, it is possible to take the second differentiation:

$$\frac{d^2 f(\mathbf{x})}{d\mathbf{x}^2} \triangleq f_{xx} = \begin{bmatrix} \partial^2 f/\partial x_1^2 & \partial^2 f/\partial x_1 \partial x_2 & \cdots & \partial^2 f/\partial x_1 \partial x_n \\ \partial^2 f/\partial x_2 \partial x_1 & \partial^2 f/\partial x_2^2 & \cdots & \partial^2 f/\partial x_2 \partial x_n \\ \vdots & \vdots & \ddots & \vdots \\ \partial^2 f/\partial x_n \partial x_1 & \partial^2 f/\partial x_n \partial x_2 & \cdots & \partial^2 f/\partial x_n^2 \end{bmatrix} \qquad (4)$$
The previous $n \times n$ matrix is known as the Hessian matrix. It is known from basic calculus that it is possible to represent functions (under some conditions) as a series. One such representation is the Taylor series expansion about $\mathbf{x}_0$, given by

$$f(\mathbf{x}) = f(\mathbf{x}_0) + f_x^T\big|_{\mathbf{x}_0} (\mathbf{x}-\mathbf{x}_0) + \frac{1}{2}(\mathbf{x}-\mathbf{x}_0)^T f_{xx}\big|_{\mathbf{x}_0} (\mathbf{x}-\mathbf{x}_0) + O(3) \qquad (5)$$
where $O(3)$ represents the terms of order 3 (and more), and $f_x^T\big|_{\mathbf{x}_0}$ means the transpose of the gradient vector (2) evaluated at $\mathbf{x}_0$. In many practical situations we need to evaluate more than one function simultaneously. Suppose we have a vector of $m$ functions, all depending on the same $n$ independent variables:

$$F(\mathbf{x}) = [f_1(\mathbf{x}) \;\; f_2(\mathbf{x}) \;\; \cdots \;\; f_m(\mathbf{x})]^T \qquad (6)$$
It is possible to find the partial differentiation of the vector $F$ with respect to every independent variable $x_i$; the resultant matrix is called the Jacobian of $F$ and is given by

$$\frac{\partial F}{\partial \mathbf{x}} \triangleq F_x = \begin{bmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 & \cdots & \partial f_1/\partial x_n \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 & \cdots & \partial f_2/\partial x_n \\ \vdots & \vdots & \ddots & \vdots \\ \partial f_m/\partial x_1 & \partial f_m/\partial x_2 & \cdots & \partial f_m/\partial x_n \end{bmatrix} \qquad (7)$$

Moreover, if we have another $n \times 1$ vector $\mathbf{y} = [y_1 \;\; y_2 \;\; \cdots \;\; y_n]^T$ and an $n \times n$ matrix $A$, the following gradients are given by

$$\frac{\partial}{\partial \mathbf{x}}(\mathbf{y}^T A \mathbf{x}) = \frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T A^T \mathbf{y}) = A^T \mathbf{y} \qquad (8)$$

$$\frac{\partial}{\partial \mathbf{x}}(\mathbf{y}^T \mathbf{x}) = \frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T \mathbf{y}) = \mathbf{y} \qquad (9)$$

$$\frac{\partial}{\partial \mathbf{x}}(\mathbf{y}^T F(\mathbf{x})) = \frac{\partial}{\partial \mathbf{x}}(F^T(\mathbf{x})\,\mathbf{y}) = F_x^T \mathbf{y} \qquad (10)$$

$$\frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T A \mathbf{x}) = A^T \mathbf{x} + A \mathbf{x} \qquad (11)$$
If we have another vector of functions $G(\mathbf{x}) = [g_1(\mathbf{x}) \;\; g_2(\mathbf{x}) \;\; \cdots \;\; g_m(\mathbf{x})]^T$, then

$$\frac{\partial}{\partial \mathbf{x}}(F^T G) = F_x^T G + G_x^T F \qquad (12)$$
Exercise. Using the previous definitions, prove that

$$\frac{\partial^2}{\partial \mathbf{x}^2}\, \mathbf{x}^T A \mathbf{x} = A + A^T \qquad (13)$$

Readers may refer to the previous formulas to check the results of the following sections. The term convexity is very important, especially in optimization. In its basic form (for a single independent variable), a function is convex if its curve lies under the straight line connecting any two points on the curve within the feasible range. Mathematically, the function $f(x)$ is convex if

$$f(\lambda a + (1-\lambda)b) \le \lambda f(a) + (1-\lambda) f(b) \qquad (14)$$
where $\lambda$ is any scalar with $\lambda \in [0, 1]$, and $a$, $b$ are any two points on the curve within the feasible range. The function is called concave if we replace "$\le$" with "$\ge$" in Eq. (14). The term convexity is also applied to sets. A set $S \subset \mathbb{R}^n$ is convex if for any two points $\mathbf{a}, \mathbf{b} \in S$ and for all $\lambda \in [0, 1]$ we have

$$\lambda \mathbf{a} + (1-\lambda)\mathbf{b} \in S \qquad (15)$$
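Definition (14) can be probed numerically: the sketch below samples random pairs (a, b) and values of λ and reports whether the convexity inequality is ever violated (such sampling can refute convexity but never prove it); the test functions are arbitrary.

```python
import random

def looks_convex(f, lo=-5.0, hi=5.0, trials=10000):
    """Sample-based check of inequality (14); can only refute convexity."""
    for _ in range(trials):
        a, b = random.uniform(lo, hi), random.uniform(lo, hi)
        lam = random.random()
        if f(lam * a + (1 - lam) * b) > lam * f(a) + (1 - lam) * f(b) + 1e-9:
            return False          # found a counterexample to (14)
    return True                   # no violation found (not a proof)

print(looks_convex(lambda x: x * x))      # True: x^2 is convex
print(looks_convex(lambda x: x ** 3))     # False: x^3 is not convex on [-5, 5]
```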
The concepts of convexity for functions and sets are shown in Fig. 1. Turning to another topic: all real signals and measurements have some degree of randomness or uncertainty. Even signals generated in labs have some degree of uncertainty because of added noise or distortion in the measuring device. For example, a stock will never follow a predetermined pattern, because it depends on many things, which makes it impossible to formulate in a deterministic form. A deterministic signal is one that has an exact relation with its independent
Figure 1. Convex and non-convex demonstration: a convex set and a convex function contrasted with nonconvex ones.
variable (e.g., time). However, we may predict the future with limited confidence if we understand the statistical characteristics of the stock. If we observe a certain random signal, the past can be quantified exactly, because it has already happened, but we do not know exactly how the signal will behave in the future. However, we may be able to predict the future values. To do that, we need to study the random characteristics of signals. When a random signal varies with some independent variable such as time or location, we call it a random or stochastic process. If we deal with random signals that vary with time, such as the prices of certain shares, we need to understand the general behavior of this randomness. One way to model continuous random signals is by using probability density functions (pdf). To understand the pdf, assume a random signal has a certain pdf as shown in Fig. 2. The x axis indicates the values taken by the signal x; the y axis indicates the probability density values. Remember that the y axis does not represent the probability of occurrence; that holds only for discrete random variables (in which case the function is called a probability mass function). However, the shape of the pdf indicates the characteristics of the signal x. For example, from Fig. 2 it is clear that the signal is always positive: the probability that x will be less than zero is zero. Hence, although the signal is random, we know that it is always positive. It is also clear that there is a high chance that the signal amplitude will be around 0.7, and only a small probability that it will be more than 2. Since the pdf is a density function, we can compute the probability that the signal x lies between x1 and x2 by calculating the area under the curve between x1 and x2, as indicated in Fig. 2. The area under the curve is computed using integration:

$$\Pr(x_1 \le x \le x_2) = \int_{x_1}^{x_2} f_X(x)\,dx \qquad (16)$$
Figure 2. Example of a pdf function: probability density y versus signal value x, with the area under the curve between x1 and x2 giving Pr(x1 ≤ x ≤ x2).
where $f_X(x)$ denotes the pdf. The pdf has some important characteristics:

1. $f_X(x) \ge 0$, because the probability cannot be negative.
2. The cumulative density function (cdf), $F(\lambda) = \Pr(-\infty \le x \le \lambda) = \int_{-\infty}^{\lambda} f_X(x)\,dx$, is a monotonically increasing function of $\lambda$ with maximum 1.
3. The total area under the pdf curve must be one, i.e., $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$.

If we can determine the pdf of the random signal, then most of the signal behaviors (mean, variance, higher moments, etc.) can be determined. Generally speaking, the average of a function $g(x)$ of the random variable $x$ is given by
$$E[g(x)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx \qquad (17)$$
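Properties 1-3 and the averaging formula (17) can be verified numerically for a concrete density. The sketch below uses an exponential pdf, chosen arbitrarily (it is not the density of Fig. 2, though it is likewise positive-valued), and simple midpoint integration.

```python
import math

# Arbitrary example pdf: exponential with rate 1.5 (a positive-valued signal);
# f_X(x) = 1.5 * exp(-1.5 x) for x >= 0, and 0 otherwise.
f = lambda x: 1.5 * math.exp(-1.5 * x) if x >= 0 else 0.0

def integrate(g, lo, hi, n=200000):
    """Midpoint-rule numerical integration."""
    dx = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * dx) for k in range(n)) * dx

print(integrate(f, -1, 20))                    # total area: ~1  (property 3)
print(integrate(lambda x: x * f(x), -1, 20))   # mean E[x] via (17): ~1/1.5
print(integrate(f, 0.5, 1.0))                  # Pr(0.5 <= x <= 1.0) via (16)
```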
Our previous representation is valid if the signal characteristics are time invariant. The general expression of a random process in time can be denoted $f_{X_t}(x, t)$. The mean of a random signal is given by

$$E[x] = \int_{-\infty}^{\infty} x\, f_{X_t}(x,t)\,dx = \mu_X(t) \qquad (18)$$

and the variance is given by

$$E[(x(t)-\mu_X(t))^2] = \int_{-\infty}^{\infty} (x(t)-\mu_X(t))^2\, f_{X_t}(x,t)\,dx = \sigma_X^2(t) \qquad (19)$$
There is another extremely important measure for random signals, known as the correlation measure. If you want to predict a stock share at a certain time in the future based on known past data, you will use some of the well-known tools of prediction or extrapolation. But how accurate will our prediction be? Actually, if the prediction is based on the available data from the past, the answer will depend on the autocorrelation behavior of the random signal. If the signal has very low autocorrelation, then whatever tool we use, the prediction accuracy cannot be guaranteed. The reason is that, since the prediction is based on the past observations, the correlation between the past data and the future is low. The autocorrelation is also used as a measure of the randomness of the signal. For example, some types of noise are uncorrelated, i.e., the correlation is zero. This implies that, if we know the signal level at a certain time, this knowledge cannot be used to predict the signal at the next time instant with any valuable accuracy. The autocorrelation of a random signal
between two time instants t_1 and t_2 is given by

E[x(t_1) x(t_2)] = \int_{-\infty}^{\infty} x(t_1) x(t_2) f_{X_{t_1}, X_{t_2}}(x, t)\,dx = R(t_1, t_2) \qquad (20)
If the signal is wide-sense stationary, i.e., its mean and correlation are time invariant, then the mean and the autocorrelation are given by

E[x] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \mu_X \qquad (21)

E[x(t_1) x(t_2)] = \int_{-\infty}^{\infty} x(t_1) x(t_2) f_X(x)\,dx = R(t_2 - t_1) \qquad (22)
Now the autocorrelation function does not depend on the time instants themselves but on their difference. If all moments are time invariant, we call the random process strict-sense stationary. Moreover, it is called an ergodic process in the mean if the average over one time realization is enough to find the actual mean value, i.e.,

E[x] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\,dt = \mu_X \qquad (23)
It is also possible to define the ergodicity property for second moments such as the autocorrelation function. In this case we have

R_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t) x(t + \tau)\,dt \qquad (24)

It should be observed that any ergodic process must be stationary, but the reverse is not necessarily true. When we have several random variables, the random behavior can be expressed using the joint pdf f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n). One can determine the pdf of one random variable (say x_1) by averaging over all other variables:
f_{X_1}(x_1) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)\,dx_2 \cdots dx_n \qquad (25)
If the random variables are independent, then

f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f_{X_i}(x_i) \qquad (26)
This greatly simplifies the analysis. However, one should be able to justify the independence assumption. Another important issue is the conditional distribution: what is the distribution of a certain random variable (say x_1) given the other random
variables? The answer is given by

f_{X_1 | X_2, \ldots, X_n}(x_1 | x_2, \ldots, x_n) = \frac{f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)}{f_{X_2, \ldots, X_n}(x_2, \ldots, x_n)} \qquad (27)
If all variables are independent, one can show that f_{X_1 | X_2, \ldots, X_n}(x_1 | x_2, \ldots, x_n) = f_{X_1}(x_1). We may close this part with Bayes' theorem, one of the most important theorems in estimation theory and other branches. For two jointly distributed random variables, Bayes' theorem states that

f_{X_1 | X_2}(x_1 | x_2) = \frac{f_{X_1, X_2}(x_1, x_2)}{f_{X_2}(x_2)} = \frac{f_{X_2 | X_1}(x_2 | x_1) f_{X_1}(x_1)}{\int_{-\infty}^{\infty} f_{X_2 | X_1}(x_2 | x_1 = \alpha) f_{X_1}(\alpha)\,d\alpha} \qquad (28)

It is clear that Bayes' theorem gives us the possibility to reverse the conditional distribution.
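A discrete illustration (invented, not the chapter's): with discrete variables the integral in Eq. (28) becomes a sum, and the theorem reverses a conditional in exactly the same way. The base rate, sensitivity, and false-positive rate below are hypothetical numbers.

```python
# Sketch of Eq. (28) with discrete variables (integrals become sums).
# Hypothetical test: 2% base rate, 95% sensitivity, 10% false-positive rate.
p_x1 = 0.02                # prior Pr(condition), playing the role of f_X1
p_pos_given_x1 = 0.95      # Pr(positive | condition), i.e., f_X2|X1
p_pos_given_not = 0.10     # Pr(positive | no condition)

# Denominator of Eq. (28): total probability of observing x2 = positive.
p_pos = p_pos_given_x1 * p_x1 + p_pos_given_not * (1 - p_x1)

posterior = p_pos_given_x1 * p_x1 / p_pos   # the reversed conditional f_X1|X2
print(round(posterior, 4))                  # ~0.1624
```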
2. Introduction to Optimization Theory

Optimization is the procedure of finding the best solution within the given constraints. If the optimization is carried out on a cost (or risk) function, then the optimization process is to find the minimum of that cost function within the allowed range. On the other hand, if it is carried out on a reward or utility function, then we are looking for the maximum of that function within the allowed range. These points of minima or maxima are called extreme points. If the function is smooth and the extreme point is also stationary, then at such points the differentiation (i.e., the rate of change) is zero. If the extreme point is a minimum and the objective function is smooth and convex, then just before that point the slope of the curve is negative, and just after it the slope becomes positive; this is why the derivative at an extreme point of a smooth function is zero. If our objective represents a cost function, then the general optimization formula can be stated as:

minimize f(x), \quad x = [x_1\; x_2\; \cdots\; x_n]^T
subject to y_i(x) = 0, \quad i = 1, 2, \ldots, k_1
g_i(x) \ge 0, \quad i = 1, 2, \ldots, k_2 \qquad (29)
where f(x) is the objective function (here it represents a cost or risk we want to minimize), x is the column vector of n independent (or decision) variables, y_i(x) are the equality constraints (in this formulation we assume there are k_1 of them), and g_i(x) are the inequality constraints (we assume there are k_2 of them).
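As a minimal sketch (with an invented objective and constraints), a small instance of Eq. (29) can be solved numerically, for example with SciPy's SLSQP solver:

```python
# Sketch: solving a small instance of Eq. (29) numerically with SciPy.
# The objective and both constraints are invented for illustration only.
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2             # cost f(x)
eq = {"type": "eq", "fun": lambda x: x[0] + x[1] - 2}   # y_1(x) = 0
ineq = {"type": "ineq", "fun": lambda x: x[0]}          # g_1(x) >= 0

res = minimize(f, x0=[0.0, 0.0], constraints=[eq, ineq], method="SLSQP")
print(res.x)   # the constrained minimizer x*, approximately [0.5, 1.5]
```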
There are many methods in the literature to solve the general formulation in Eq. (29), either analytically (i.e., in closed form) or iteratively. However, no single method can handle all kinds of optimization problems. For example, if the problem functions (objective and constraints) are all linear, then linear programming techniques are used. If the decision variable x takes only integer values, then integer programming or combinatorial optimization is used. If the problem functions are smooth and nonlinear, there are many methods to find at least a local optimum. Generally speaking, the condition for the global optimum x^* is that

f(x^*) < f(\hat{x}), \quad \forall \hat{x} \in S_x, \quad \hat{x} \ne x^* \qquad (30)
where S_x is the set of feasible values of x (determined by the constraint functions). A local optimum is defined similarly to Eq. (30), but over a local neighborhood of the optimum point instead of the whole feasible set. If the objective function is smooth, with continuous first and second derivatives over the feasible set S_x, then a point x^* \in S_x is a stationary point of f(x) if

f_x(x^*) = 0 \qquad (31)

where f_x(x^*) is the gradient (see Eq. (2)) evaluated at x^*. But we do not yet know whether this stationary point is a minimum (local optimum), a maximum (worst case), or just a saddle point. We can check by evaluating the second derivative of the objective function; as shown in Eq. (4), the second derivative yields the Hessian matrix f_{xx}. The stationary point is a strong local minimum if the Hessian matrix is positive definite at x^*, i.e., if

u^T f_{xx}(x^*) u > 0, \quad \forall u \ne 0 \qquad (32)
This condition is a generalization of convexity.

Example. Find all stationary points of the objective function

f(x) = -\frac{1}{3}x^3 + \frac{1}{2}x^2 + 2x - 1

and determine the maximum and minimum points. Since we have a single independent variable, the analysis is straightforward:

\frac{d}{dx} f(x) = -x^2 + x + 2 = 0,

so we have two stationary points, at x_1 = 2 and x_2 = -1. We can check the curvature of the curve at every stationary point by evaluating the second derivative at each point: \frac{d^2}{dx^2} f(x) = f_{xx} = -2x + 1. It is clear
Figure 3. Example for the different extreme points. [Plot of f(x) over -2 \le x \le 3, marking the strong local maximum, the strong local minimum, and the global minimum within the constraint range.]
that the second derivative is negative at x_1, which means that this point is a strong local maximum. Furthermore, the second derivative is positive at x_2, which means that this point is a strong local minimum. Figure 3 shows the function f(x) over the range -2 \le x \le 3. If we want to find the minimum of the function f(x) in this range, it is clear that x_2 is the solution. But if the feasible set (i.e., under the constraints) is 0 \le x \le 3, then the optimum point is 0! This shows that in constrained optimization the optimum solution can be an endpoint rather than a true stationary point. The general optimization problem, Eq. (29), can be solved by the Lagrange method, which transforms the constrained formulation into an unconstrained one:

minimize L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{k_1} \lambda_i y_i(x) + \sum_{i=1}^{k_2} \mu_i (g_i(x) - s_i) \qquad (33)
where \lambda_i and \mu_i are the Lagrange multipliers associated with the equality and inequality constraints, respectively, and s_i \ge 0 is a slack variable. Because the constraints require g_i(x) \ge 0, we can set g_i(x) = s_i, i.e., g_i(x) - s_i = 0. Necessary and sufficient conditions for optimality of Eq. (33) are
(* refers to the optimum parameters):

f_x(x^*) + \sum_{i=1}^{k_1} \lambda_i^* y_{ix}(x^*) + \sum_{i=1}^{k_2} \mu_i^* g_{ix}(x^*) = 0 \qquad (34)

y_i(x^*) = 0 \qquad (35)

g_i(x^*) = s_i^* \qquad (36)

\mu_i^* s_i^* = 0 \qquad (37)

where f_x(x^*), y_{ix}(x^*), and g_{ix}(x^*) are the gradient vectors at the optimum solution.
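A small symbolic sketch (with an invented toy problem: one equality constraint and no inequalities) shows the conditions (34)-(35) at work:

```python
# Sketch: Eqs. (33)-(35) for one equality constraint, solved symbolically.
# Toy problem (invented): minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", real=True)
L = x1**2 + x2**2 + lam * (x1 + x2 - 1)    # the Lagrangian of Eq. (33)

# Eq. (34): grad_x L = 0, together with the constraint of Eq. (35).
sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), x1 + x2 - 1], [x1, x2, lam])
print(sol)   # {x1: 1/2, x2: 1/2, lam: -1}
```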
When the objective function and the constraints are all linear functions, the previous techniques cannot be used: differentiating a linear function with respect to the independent variables makes those variables disappear, so the stationarity condition cannot identify an optimum point. Other efficient techniques, such as the simplex method, are used in this case. There is another type of optimization where the decision variables take only integer values. There are different models of this type, such as the traveling salesman problem (TSP) and the knapsack problem. As an example, the Multiple Choice Knapsack problem is the problem of choosing exactly one item j from each of k classes N_i, i = 1, 2, \ldots, k, such that the profit sum is maximized. The mathematical formulation of the problem is
\max \sum_{i=1}^{k} \sum_{j \in N_i} p_{ij} x_{ij} \qquad (38)

subject to

\sum_{i=1}^{k} \sum_{j \in N_i} w_{ij} x_{ij} \le c, \qquad (39)

\sum_{j \in N_i} x_{ij} = 1, \quad i = 1, 2, \ldots, k,

x_{ij} \in \{0, 1\}, \quad i = 1, 2, \ldots, k, \; j \in N_i
where p_{ij} is the profit of the jth element in the ith class, w_{ij} is its weight, x_{ij} = 1 states that item j was chosen from class i, and c is the maximum allowed capacity. The constraint \sum_{j \in N_i} x_{ij} = 1, i = 1, 2, \ldots, k, ensures that exactly one item is chosen from each class. There are several integer programming techniques to handle such problems. However, the computation cost increases very fast with the problem dimension (i.e., the number of independent variables). Such types of problems are known as NP-complete.
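For very small instances, the formulation in Eqs. (38)-(39) can simply be enumerated; the toy data below are invented. The exponential growth of this enumeration with the number of classes is exactly the computational cost the text warns about.

```python
# Sketch: brute-force solution of the multiple-choice knapsack, Eqs. (38)-(39).
# Classes, profits p_ij, weights w_ij, and capacity c are invented toy data.
from itertools import product

profits = [[4, 7, 2], [5, 3], [6, 8, 9]]   # p_ij for k = 3 classes
weights = [[3, 5, 1], [4, 2], [5, 6, 8]]   # w_ij, same shapes
c = 12                                     # capacity

best_profit, best_choice = -1, None
# Choose exactly one item index j from each class i (the x_ij = 1 constraint).
for choice in product(*(range(len(cls)) for cls in profits)):
    w = sum(weights[i][j] for i, j in enumerate(choice))
    p = sum(profits[i][j] for i, j in enumerate(choice))
    if w <= c and p > best_profit:
        best_profit, best_choice = p, choice

print(best_profit, best_choice)   # maximal profit and the chosen item per class
```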
3. Introduction to Multiobjective (MO) Optimization

MO optimization is a method of finding the best compromise between different, usually conflicting, objectives. In the previous section we had a single objective function and possibly several constraints. In an MO optimization problem we have a vector of objective functions, each a function of the decision vector x. The mathematical formulation of the MO optimization problem is

\min \{f_1(x), f_2(x), \ldots, f_m(x)\} \quad subject\ to \quad x \in S \qquad (40)
where we have m (\ge 2) objective functions f_i: \mathbb{R}^n \to \mathbb{R}, and x is the decision (variable) vector belonging to the (nonempty) feasible region (set) S, a subset of the decision variable space \mathbb{R}^n formulated by the given constraints. The abbreviation \min means that we want to minimize all the objectives simultaneously. Usually the objectives are at least partially conflicting and possibly incommensurable. This means that, in general, there is no single vector x which minimizes all the objectives simultaneously; otherwise there would be no need to consider multiple objectives. Because of this, MO optimization searches for efficient solutions that best compromise between the different objectives. Such solutions are called nondominated or Pareto optimal solutions.

Definition 1. A decision vector x^* \in S is Pareto optimal if there does not exist another decision vector x \in S such that f_i(x) \le f_i(x^*) for all i = 1, 2, \ldots, m and f_j(x) < f_j(x^*) for at least one index j (Miettinen, 1998).

The Pareto optimal set is the set of all possible (infinitely many) Pareto optimal solutions. The condition for a Pareto optimal set is rather strict, and many MO algorithms cannot guarantee to generate Pareto optimal solutions but only weakly Pareto optimal solutions, defined as follows:

Definition 2. A decision vector x^* \in S is weakly Pareto optimal if there does not exist another decision vector x \in S such that f_i(x) < f_i(x^*) for all i = 1, 2, \ldots, m (Miettinen, 1998).

The set of (weakly) Pareto optimal solutions can be nonconvex and nonconnected. Figure 4 shows the geometric interpretation of Pareto optimal and weakly Pareto optimal solutions. Note that all points on the line segment between points A and B are weakly Pareto optimal solutions, and all points on the curve between points B and C are Pareto optimal solutions. The following example also illustrates the main concepts of Pareto optimal and weakly Pareto optimal solutions.
Figure 4. Pareto and weakly Pareto optimal set. [Objective space F(S) plotted with axes f_1(x) and f_2(x); the points on the line segment between A and B form the weakly Pareto optimal set, and the points on the curve between B and C form the Pareto optimal set.]

Table 1. Example for the application of MO optimization.

Solution | Project cost (K$) | Execution time (months) | Quality
1        | 1000              | 2                       | B
2        | 1500              | 3                       | A
3        | 1100              | 1.5                     | B
4        | 1000              | 2.5                     | B
5        | 1050              | 3                       | C
Example. Table 1 shows the results of an MO optimization with three objectives: minimizing the project cost, minimizing the execution time, and maximizing the project quality. The project quality is graded as A (best), B (middle), or C (worst). It is clear that the first solution is not dominated by any other solution. The execution time of the 3rd solution is better than that of the 1st, but the project cost of the 1st is lower; in that sense the 1st, 2nd, and 3rd solutions are all Pareto optimal. The 4th solution is weakly Pareto optimal. The 5th solution is not a Pareto optimal solution, because it is dominated by the 1st solution. After the generation of the Pareto set, we are usually interested in one solution from this set, selected by the decision maker. In this example, the decision maker will select the 2nd solution if the project
quality is the most important objective. If the objective is to select a solution with low execution time as well as low cost, then the 1st solution is preferred. But no one should select the 5th solution, because the 1st is better in all aspects. The main question now is how to find the Pareto optimal, or even the weakly Pareto optimal, solutions. There are many techniques for finding (weakly) Pareto optimal solutions (Miettinen, 1998); soft-computing methods such as genetic algorithms are one way to solve this kind of problem. In this section we concentrate on analytical solutions of MO optimization problems. Most MO optimization methods are based on converting the multiple objectives into a single-objective problem. Two different MO optimization techniques are discussed in this section. The first, called the Weighting Method, transforms the problem posed in Eq. (40) into

\min \sum_{i=1}^{m} \lambda_i f_i(x) \quad subject\ to \quad x \in S \qquad (41)

where the tradeoff factors \lambda_i satisfy \lambda_i \ge 0, \forall i = 1, \ldots, m, and \sum_{i=1}^{m} \lambda_i = 1.
A weakly Pareto optimal set can be obtained by solving the optimization problem in Eq. (41) for different tradeoff factor values (Miettinen, 1998). The second MO optimization technique, of special interest in the applications of MO optimization in RRS, is the Weighted Metrics method. If the global solutions of the individual objectives are known in advance, then the problem in Eq. (40) can be formulated as

\min \left( \sum_{i=1}^{m} \lambda_i |f_i(x) - z_i^*|^p \right)^{1/p} \quad subject\ to \quad x \in S \qquad (42)

where 1 \le p \le \infty, z_i^* is the optimum solution of objective i, and the tradeoff factors satisfy

\lambda_i \ge 0, \quad \forall i = 1, \ldots, m, \quad \sum_{i=1}^{m} \lambda_i = 1. \qquad (43)
It is clear that Eq. (42) represents the minimization of the weighted p-norm distance. For p = 2 the weighted Euclidean distance is obtained. With p = \infty, problem (42) is called the weighted Tchebycheff or minmax problem (Miettinen, 1998). The solutions of Eq. (42) depend on the value of p.
The Tchebycheff problem is called minmax because it takes the form

\min \max_{i=1,\ldots,m} \left( \lambda_i |f_i(x) - z_i^*| \right) \quad subject\ to \quad x \in S \qquad (44)
In Eq. (42), if p = 1, the sum of weighted deviations is minimized (this becomes equivalent to Eq. (41) if z_i^* is a global minimum). If p = 2, the Euclidean distance is minimized. As p gets larger, minimizing the largest deviation becomes more and more important (Miettinen, 1998). Problem (44) is nondifferentiable, which makes an analytical solution infeasible. It can, however, be solved in differentiable form, as long as the objective and constraint functions are differentiable:

\min \alpha
subject to \alpha \ge w_i (f_i(x) - z_i^*), \quad \forall i = 1, \ldots, m,
x \in S \qquad (45)
We have indicated two simple and efficient methods for solving MO optimization problems. There are many others, such as the Goal Attainment method, the Value Function method, the Lexicographic Ordering method, the Interactive Surrogate Worth Tradeoff method, etc. Many packages for solving MO optimization problems are available; some can be downloaded free from the Internet, and the Optimization Toolbox in Matlab contains algorithms for MO optimization such as the Goal Attainment method.

Example. A simple two-objective optimization problem is given as

\min \{f_1(x), f_2(x)\} \qquad (46)

where

f_1(x) = x^2 - 10x + 26; \quad f_2(x) = x^2 - 6x + 9 \qquad (47)
We will show how to solve this with p = 1, 2, and \infty, and demonstrate the role of the tradeoff factors. It is easy to find the minima of both objectives: z_1^* = 1 at x = 5, and z_2^* = 0 at x = 3. Solving the MO optimization problem (42) with p = 1 and \lambda_2 = 1 - \lambda_1, we obtain the optimum solution

x^* = 2\lambda_1 + 3 \qquad (48)

At \lambda_1 = 1 (\lambda_2 = 0) we obtain the optimum solution of the first objective (x = 5). As the importance of the second objective increases (\lambda_2 > 0), the optimum solution moves toward that of the second objective. If both objectives have the same importance, then the optimum solution (at p = 1) is x^* = 4. It is clear that for nondominated solution points an improvement in one objective requires a degradation in the other objective.
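A quick numerical cross-check of Eq. (48) and of the equal-weight solutions discussed next can be done with a dense grid search (a sketch, not the chapter's own method):

```python
# Sketch: checking the weighted-metrics solutions of Eqs. (46)-(48) by a
# dense grid search over x (simpler than calculus, accurate enough here).
import numpy as np

f1 = lambda x: x**2 - 10*x + 26    # minimum z1* = 1 at x = 5
f2 = lambda x: x**2 - 6*x + 9      # minimum z2* = 0 at x = 3
z1, z2 = 1.0, 0.0
x = np.linspace(0, 8, 800_001)

lam1 = 0.5                          # equal importance, lam2 = 1 - lam1
for p in (1, 2):
    cost = (lam1*np.abs(f1(x)-z1)**p + (1-lam1)*np.abs(f2(x)-z2)**p)**(1/p)
    print(p, x[np.argmin(cost)])    # both give x* ~ 4.0, matching Eq. (48)

# p = infinity: the weighted Tchebycheff (minmax) problem of Eq. (44).
cost = np.maximum(lam1*np.abs(f1(x)-z1), (1-lam1)*np.abs(f2(x)-z2))
print("inf", x[np.argmin(cost)])    # also x* ~ 4.0
```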
Solving this simple example with p = 2 and p = \infty, when both objectives have the same importance, i.e., \lambda_1 = \lambda_2 = 1/2, we obtain the same optimum solution x^* = 4.

4. Introduction to Estimation Theory

It is very rare to measure a certain quantity perfectly, without added noise and distortion. Noise is almost everywhere, so whatever signal or quantity we observe will be corrupted by noise. Moreover, signals are often observed through another system which may distort them. For example, in telecommunication the received signal is corrupted by thermal noise at the receiver as well as by other noise sources received together with the required signal. Moreover, the signal is distorted by the communication channel and by the electronic equipment of the transmitter and the receiver. This makes the received signal different from the transmitted one. We therefore need to understand the behavior and the characteristics of the noise, as well as of the channel, in order to obtain a better estimate of the transmitted signal. Similar aspects can be found in almost all branches. Generally speaking, estimation theory is useful for system state estimation and for model parameter identification in static and dynamic systems; this estimation of system states, or system modeling, has special importance in economic theory. Suppose there is interest in knowing the real value of a certain deterministic parameter x, but we have only a noisy measurement or observation y such that

y = \alpha x + n \qquad (49)

where \alpha is a random distortion process (sometimes called multiplicative noise) and n is additive noise. One application of estimation theory concerns the estimation of the value of x based on the measurement y. If we use previous knowledge or certain assumptions about the pdfs of the random parameters n and \alpha, our estimator is called a parametric estimator. If we estimate x without any a priori assumptions about the random parameters, we call it a non-parametric estimator. Generally speaking, the parameter x can be random in nature as well. Moreover, all parameters can be multidimensional, i.e., vectors. We usually take many samples of the observation y to achieve a good estimate of the parameter of interest x. If our estimator can be represented as a function g(\cdot), then the estimate of the parameter x in Eq. (49) can be expressed as

\hat{x} = g(y) \qquad (50)

where \hat{x} is the estimate of x and y is a vector of m samples of the observation y. Since the estimation is based on a random process (y), we should be able to assess how good our estimate is. The error in the estimation is e = x - \hat{x}, and the average squared error is

E[e^2] = E[(x - \hat{x})^2] \qquad (51)
The average squared error can be rewritten in the following form:

E[e^2] = \beta^2(\hat{x}) + \sigma^2(\hat{x}) \qquad (52)

where \beta(\hat{x}) = x - E[\hat{x}] is the bias of the estimator and \sigma^2(\hat{x}) = E[(\hat{x} - E[\hat{x}])^2] is the estimator variance. An estimator is called unbiased if \beta(\hat{x}) = 0. If the estimator is unbiased, then we should seek the estimate which minimizes the estimator variance, i.e., the second term of Eq. (52); this estimator is called the minimum variance unbiased estimator. An interesting question is: what is the lower bound on the variance of any unbiased estimator? The answer gives us a very useful tool for knowing the performance of the best estimator we could find. In other words, it gives us the minimum error of any unbiased estimator, i.e., it is impossible to find a better estimate of our parameter. This bound is known as the Cramer-Rao lower bound (CRLB). In the simple case of Eq. (49) with \alpha = 1, x a deterministic scalar, and n random with probability density function f_N(n), the observation y is also random, with pdf f_N(y; x). If the data density function satisfies the regularity condition E\left[ \partial \ln(f_N(y; x)) / \partial x \right] = 0, then it can be proven that the minimum achievable variance (CRLB) is (Kay, 1993)

\sigma^2(\hat{x}) \ge \frac{-1}{E\left[ \partial^2 \ln(f_N(y; x)) / \partial x^2 \right]} \qquad (53)

Since the estimator is unbiased, the regularity condition gives

\frac{\partial \ln(f_N(y; x))}{\partial x} = I(x)(g(y) - x) \qquad (54)

and then it is obvious in this case that

\sigma^2(\hat{x}) \ge \frac{1}{I(x)} \qquad (55)
The minimum variance unbiased estimators are generally not easy to find and compute. However, if the model under study is linear, i.e., the relation between the observation and the parameters of interest is linear such that

y = Hx + n \qquad (56)

then it is relatively simple to find the minimum variance unbiased estimate (MVUE) of the parameter vector x. Observe that in Eq. (56) we used a vector representation for the required parameters, x = [x_1\; x_2\; \cdots\; x_k]^T, and H is an m \times k matrix representing the linear relation between the parameters of interest and the measurement vector y. It can easily be proven that if y is an m \times 1 vector of measurements, H is a known m \times k matrix with m > k and rank k, x is a k \times 1 vector of parameters to be estimated, and n is an m \times 1 noise vector with independent and identical distributions (iid) whose pdf is Normal (Gaussian) with zero mean and variance \sigma^2, i.e., the
covariance matrix of the noise is C_n = \sigma^2 I, where I is the identity matrix, then the MVUE is

\hat{x} = (H^T H)^{-1} H^T y \qquad (57)

The covariance matrix of \hat{x} is

C_{\hat{x}} = \sigma^2 (H^T H)^{-1} \qquad (58)

This is the optimum estimator under the given conditions.

Example. Assume that the relation between the parameters of interest and the observation is given by y_t = x_1 + x_2 t + n, where n has a Normal distribution with zero mean and unit variance. If we measure 10 samples at t = 0, 1, 2, \ldots, 9, construct the mathematical model and estimate the parameters.

Solution. To show how to construct the model for this simple example, let us write it out in some detail:

y(0) = x_1 + n(0),
y(1) = x_1 + x_2 + n(1),
\vdots
y(9) = x_1 + 9x_2 + n(9)

These equations can be represented in matrix form as

\begin{pmatrix} y(0) \\ y(1) \\ \vdots \\ y(9) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ \vdots & \vdots \\ 1 & 9 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} n(0) \\ n(1) \\ \vdots \\ n(9) \end{pmatrix}

This is the same form as Eq. (56). For demonstration purposes, we generate 10 samples and compute the MVUE using Eq. (57). The measurements are shown in Fig. 5 as y, the line using the real parameters as y_act, and the line using the estimated parameters as y_est. It is clear that the estimation is fairly good.

Another very well-known estimator is the Maximum Likelihood Estimator (MLE), one of the most practical estimators in many branches. In many practical situations the minimum variance unbiased estimator does not exist or is very difficult to find; in these situations the MLE gives a relatively simple and asymptotically efficient estimator. If we know the pdf of the observation, we estimate the parameters as the values that make the observed sample y most probable. Thus, the MLE is defined by

\hat{x}_{ML} = \arg\max_x f_N(y; x) \qquad (59)
Figure 5. Example for the parameter estimation. [Plot against time of the measurements y, the line y_act computed from the real parameters, and the line y_est computed from the estimated parameters.]
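As a numerical companion to the example above (a sketch; the chapter does not state the true parameter values, so x_1 = 1.0 and x_2 = 0.5 are invented):

```python
# Sketch reproducing the MVUE example of Eqs. (56)-(57); the true parameters
# x1 = 1.0 and x2 = 0.5 and the random seed are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(10)                         # t = 0, 1, ..., 9
H = np.column_stack([np.ones(10), t])     # the 10 x 2 model matrix
x_true = np.array([1.0, 0.5])
y = H @ x_true + rng.standard_normal(10)  # y = Hx + n, unit-variance noise

# Eq. (57): x_hat = (H^T H)^{-1} H^T y, computed stably via least squares.
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
print(x_hat)   # close to [1.0, 0.5]; Fig. 5 compares y, y_act, and y_est
```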
Observe that in Eq. (59) we maximize over x while y is held fixed. The following example shows how to find the MLE.

Example. The relation between the measured quantity and the required parameter is given by y(t) = x + n(t), where x is the parameter we want to estimate, known to be a fixed scalar. The additive noise or disturbance n(t) is Normal with mean \mu_n and variance \sigma_n^2. Assume we collect m samples, i.e., y(0), y(1), \ldots, y(m-1), and that the noise samples are independent with identical distribution. Then the joint pdf of the measurements y is given by (see Eq. (26)):

f_N(y; x) = \frac{1}{(2\pi\sigma_n^2)^{m/2}} \exp\left( -\frac{1}{2\sigma_n^2} \sum_{i=0}^{m-1} (y(i) - \mu_n - x)^2 \right) \qquad (60)

Now we can maximize this function with respect to x. To simplify the differentiation, we may take the logarithm of Eq. (60); since \ln(\cdot) is a monotonically increasing function, the maximization result is not affected. Then

\ln(f_N(y; x)) = -\frac{m}{2} \ln(2\pi\sigma_n^2) - \frac{1}{2\sigma_n^2} \sum_{i=0}^{m-1} (y(i) - \mu_n - x)^2 \qquad (61)
By applying Eq. (59) we obtain

\frac{\partial}{\partial x} \ln(f_N(y; x)) = \frac{1}{\sigma_n^2} \sum_{i=0}^{m-1} (y(i) - \mu_n - x) = \frac{1}{\sigma_n^2} \left( \sum_{i=0}^{m-1} y(i) - m(\mu_n + x) \right) = 0 \qquad (62)

and then the MLE for the parameter x is

\hat{x}_{ML} = \frac{1}{m} \sum_{i=0}^{m-1} y(i) - \mu_n \qquad (63)

If the noise has zero mean, then Eq. (63) is simply the sample mean.
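A brief numerical check of Eq. (63) (a sketch; the sample size, \mu_n, \sigma_n, and the true x are invented toy values):

```python
# Sketch of Eq. (63): the MLE under iid Gaussian noise with known mean mu_n.
# The true x, mu_n, sigma_n, and the sample size m are invented toy values.
import numpy as np

rng = np.random.default_rng(4)
x_true, mu_n, sigma_n, m = 3.0, 0.5, 1.0, 10_000
y = x_true + rng.normal(mu_n, sigma_n, size=m)   # y(i) = x + n(i)

x_ml = y.mean() - mu_n   # Eq. (63): the sample mean of y minus the noise mean
print(x_ml)              # close to 3.0
```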
Exercise. Repeat the previous example if \sigma_n^2 = x.

The previous estimators assumed that we know the distribution of the additive noise, and of the parameters if they too are random. But how can we estimate the parameters if such knowledge does not exist? For example, a new experiment may produce relatively small amounts of data, making it difficult to judge the resulting statistical trends. In this case we cannot build an optimum estimator for the required parameters; however, it is still possible to build a good estimator. One well-known method is the minimum least squares (MLS) method and its different generations. Now we have only the data, without prior knowledge of the noise and disturbance or parameter pdfs. If we know the data model, i.e., the relation between the parameters and the result, then we find the best parameters which minimize the mean square error. For example, assume we have a vector of data y = [y_0\; y_1\; \cdots\; y_{m-1}], and we know that the data model in time is \hat{y}(t) = x_0 + x_1 t + x_2 \exp(-x_3 t); we now want to estimate the best parameters x = [x_0\; x_1\; x_2\; x_3]. First, we construct the cost function that we want to minimize. One simple and effective cost function is the total of squared errors:

\varepsilon = \sum_{i=0}^{m-1} (\hat{y}(i) - y(i))^2 \qquad (64)

Since we know the data model, the previous equation becomes

\varepsilon = \sum_{i=0}^{m-1} (x_0 + x_1 t_i + x_2 \exp(-x_3 t_i) - y(i))^2 \qquad (65)

where t_i denotes the sampling instants.
Now we find the parameters x which minimize this cost function. This can be done by differentiating the cost function with respect to every parameter x_k, i.e., \partial\varepsilon/\partial x_k = 0 for all k = 0, 1, 2, 3. We obtain four equations
in four unknowns, which we can solve to find the best parameters. The least squares method is effective when the added noise is close to normally distributed. If there are impulses in the noise, this norm-2 criterion is not good, because it amplifies those impulses and the estimator will be affected by them even if their number is small. In this situation it is better to use norm-1, i.e., the cost function becomes

\varepsilon = \sum_{i=0}^{m-1} |\hat{y}(i) - y(i)| \qquad (66)
The problem now is that we no longer have a smooth function: this cost function cannot be differentiated to find the best parameters. The suggested solution is to use a smooth function which approximates norm-1. It is also possible to base the estimation on the worst error, i.e., \varepsilon = \max_i |\hat{y}(i) - y(i)|; this is called norm-infinity. Why?

Example. Find the MLS formulation if the data model is linear, i.e., \hat{y} = H\hat{x}. It is evident that Eq. (64) can be represented in matrix form (check it!):

\varepsilon = \sum_{i=0}^{m-1} (\hat{y}(i) - y(i))^2 = (\hat{y} - y)^T (\hat{y} - y) = (H\hat{x} - y)^T (H\hat{x} - y) \qquad (67)
Differentiating with respect to \hat{x}, we obtain

\frac{\partial\varepsilon}{\partial\hat{x}} = 2H^T (H\hat{x} - y) = 0 \qquad (68)

From the above,

H^T H\hat{x} - H^T y = 0 \;\Rightarrow\; \hat{x} = (H^T H)^{-1} H^T y \qquad (69)
It is interesting to observe that the MLS solution for the linear model is identical to the minimum variance unbiased estimator (MVUE) given in Eq. (57). Remember that Eq. (57) is the optimum MVUE estimator only under certain conditions: the model is linear, the noise samples have an iid Gaussian distribution with zero mean, and E[\hat{x}] = x. When we derived the MLS estimator, however, we did not use any of these details. Hence, the MLS estimator is good when the real data model is close to these conditions; otherwise the estimation error can be very large. Still, in many situations this is the best estimate we can make. Another question is what to do if we do not know the data model. One method is to guess the most appropriate model (polynomial, trigonometric, exponential, hybrid, etc.), and then tune the model details (such as the polynomial order) to obtain the best estimate (the least error), thanks to the available software packages which can find the best model for us. Many time series models are based on the same concepts given above.
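A sketch of fitting the nonlinear data model \hat{y}(t) = x_0 + x_1 t + x_2 \exp(-x_3 t) discussed above by least squares; the true parameters and noise level are invented, and SciPy's curve_fit stands in for solving the four equations \partial\varepsilon/\partial x_k = 0 by hand:

```python
# Sketch: least-squares fit of the data model of Eq. (65). The true
# parameters (2.0, 0.3, 1.5, 1.2) and the noise level are invented.
import numpy as np
from scipy.optimize import curve_fit

def model(t, x0, x1, x2, x3):
    return x0 + x1 * t + x2 * np.exp(-x3 * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 50)
y = model(t, 2.0, 0.3, 1.5, 1.2) + 0.05 * rng.standard_normal(t.size)

# curve_fit minimizes the sum of squared errors of Eq. (64) numerically.
params, _ = curve_fit(model, t, y, p0=[1.0, 0.0, 1.0, 1.0])
print(params)   # close to [2.0, 0.3, 1.5, 1.2]
```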
5. Introduction to Game Theory

Game theory is without doubt one of the most attractive and effective theories in economics; one indication of its importance is that eight game theorists have won Nobel prizes in economics. Moreover, game theory is widely used in social system analysis, engineering, bio-modeling, and many other branches. Several scientists criticize game theory as being just a part of optimization. This may be true, but the way of looking at the problem is the novel element that makes the theory very productive. Similar things have happened in the signal processing area: when neural networks were introduced, many scientists and engineers said that they were just a form of the well-known adaptive filters. This was correct, but viewing the nodes as neuron cells gave the analysis, as well as the applications, a great push. Game theory can be defined as the study of mathematical models of conflict and cooperation between intelligent, rational decision makers; one can say that game theory is one form of conflict analysis (Myerson, 1997). In this chapter, we have introduced some concepts of single and multiobjective optimization theory, in which we optimize decision variables to obtain the best tradeoff between our objectives. But how can we optimally adjust the decision variables if this adjustment affects other parties' objectives, and their adjustments also affect our objectives? In other words, our objectives also depend on decision variables which are set by other parties. The problem is evident when the objectives (ours and others') are at least partially conflicting, as they usually are. Every individual (human, animal, or even a primitive organism such as a virus), group (such as a society or country), or activity (such as a business) has certain objectives it wants to achieve. These objectives can be a utility or reward to maximize, a cost or risk to minimize, or both. Since our objectives usually interfere with others' objectives, and theirs with ours, how do we select the optimum decision variables? Assume that we have a certain reward function \Pi(x, y). This reward function depends on two types of decision variables: our decision variables x = [x_1\; x_2\; \cdots\; x_n] and the others' decision variables y = [y_1^1 \cdots y_{k_1}^1, \ldots, y_1^t \cdots y_{k_t}^t], where t different competitors (players) affect our objectives. Every player i has k_i different decision variables that can be set, and we may assume that \sum_{i=1}^{t} k_i = m. Observe that if y is absent, i.e., y = [\,], then the problem reduces to the conventional maximization problem. A game under these concepts consists of different elements: game rules, players (assumed rational and intelligent), game objectives, and payoffs; moreover, the players are decision-makers. The initial step we need to take is to build a proper model of the game. Generally, the most accurate model is the most complex one, because building a very accurate model requires considering many issues. Complex models are usually undesirable, however, because they need high computational power, and some important characteristics and features may become less visible in them. For these reasons, the best model is the one which shows the required characteristics with enough
accuracy at acceptable complexity. Some of the well-known models of games are the strategic and extensive forms (Myerson, 1997). There are two main types of games.

Constant Sum Games: This type is usually more aggressive, because the players play for a fixed sum, say C. If one player gets a, then the other players share only C - a, so every player plays to maximize his/her payoff. One special case is the zero sum game, where C = 0; in this case, whatever a player gains is paid by the other players. For example, with two players, if one player gets $a, then the other player pays $a (or gets -$a). For zero sum games with two players, if player 1's payoff function is \Pi(x, y), which he/she wants to maximize, the other player's payoff function is -\Pi(x, y), which he/she also wants to maximize. Maximizing -\Pi(x, y) leads to the same result as minimizing \Pi(x, y); in other words, player 1 tries to maximize \Pi(x, y) and player 2 tries to minimize it! It has been shown that there is always an equilibrium point (x^*, y^*) where

\min_{y \in S_y} \max_{x \in S_x} \Pi(x, y) = \max_{x \in S_x} \min_{y \in S_y} \Pi(x, y) = \Pi(x^*, y^*) \qquad (70)

This point is called an equilibrium because it satisfies

\Pi(x, y^*) \le \Pi(x^*, y^*) \le \Pi(x^*, y), \quad \forall x \in S_x, \; y \in S_y \qquad (71)

If player 1 decides to select decision variables x rather than x^* while player 2 still uses y^*, then player 1 receives less payoff (on average) than that obtainable by playing x^*. A similar statement holds for player 2, so (x^*, y^*) is an equilibrium in this sense (Barron, 2008).

Non-Zero Sum Games: This type is more realistic in economics, where players can all win or all lose something. It differs from the previous type in that each player may have his/her own utility function to maximize. Of course, all utility functions share at least some of the decision variables. Assume that we are player number 1 and there are t other players in the same game. If the other players select a certain decision vector y_0, then the selection of our decision vector should maximize our utility or reward function:

x_0 = \arg\max_{x \in S_x} \Pi_x(x, y_0) \qquad (72)
and every player will do the same. Observe that now every player has a utility function which may differ from the others'. At the Nash equilibrium point (x^*, y^*), where y^* = [y_1^*\; y_2^*\; \cdots\; y_t^*] and y_i^* = [y_1^* \cdots y_{k_i}^*], we have

\Pi_x(x^*, y^*) \ge \Pi_x(x, y^*)
\Pi_{y_1}(x^*, [y_1^*\; y_2^*\; \cdots\; y_t^*]) \ge \Pi_{y_1}(x^*, [y_1\; y_2^*\; \cdots\; y_t^*])
\Pi_{y_2}(x^*, [y_1^*\; y_2^*\; \cdots\; y_t^*]) \ge \Pi_{y_2}(x^*, [y_1^*\; y_2\; \cdots\; y_t^*])
\vdots
\Pi_{y_t}(x^*, [y_1^*\; y_2^*\; \cdots\; y_t^*]) \ge \Pi_{y_t}(x^*, [y_1^*\; y_2^*\; \cdots\; y_t]) \qquad (73)

for any feasible decision variables, \forall x \in S_x, \; y \in S_y.
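For finite games with a small number of pure strategies, the defining inequalities in Eq. (73) can be checked directly by enumeration. The sketch below does this for an invented two-player bimatrix game (prisoner's-dilemma-style payoffs):

```python
# Sketch: locating pure-strategy Nash equilibria (Eq. (73)) by enumeration
# in a 2-player bimatrix game; the payoff tables are invented toy data.
import numpy as np

A = np.array([[3, 0], [5, 1]])   # player 1's payoffs Pi_x(i, j), higher better
B = np.array([[3, 5], [0, 1]])   # player 2's payoffs Pi_y(i, j)

equilibria = []
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        # (i, j) is Nash if neither player gains by deviating unilaterally.
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
            equilibria.append((i, j))

print(equilibria)   # [(1, 1)]: the mutual-defection equilibrium
```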
The games can be cooperative or noncooperative. The problem in cooperative games is to agree on a fair allocation of the benefits resulting from the game; each player may have his/her own fairness measure for the allocation. Usually, cooperative games give better payoffs to the players than noncooperative ones. Moreover, players can improve their payoffs even further by bargaining. However, it should be noted that some games are by nature noncooperative. Also, some players do not like to play cooperatively when they are in a strong position. The selection between noncooperative play, cooperative play, and bargaining is a game by itself! One interesting and easy-to-read book on game theory is by Barron (2008).

References

Barron, EN (2008). Game Theory: An Introduction. Wiley.
Kay, S (1993). Fundamentals of Statistical Signal Processing. Prentice Hall.
Miettinen, K (1998). Nonlinear Multiobjective Optimization. Boston: Kluwer Academic Publishers.
Myerson, R (1997). Game Theory: Analysis of Conflict. Harvard University Press.
Biographical Note

Mohammed Salem Elmusrati received his BSc (with honors) and MSc (with high honors) degrees in telecommunication engineering from the Electrical and Electronic Engineering Department, Garyounis University, Benghazi, Libya, in 1991 and 1995, respectively, and the Licentiate of Science in Technology (with distinction) and the Doctor of Science in Technology degrees in control engineering from Helsinki University of Technology (HUT), Finland, in 2002 and 2004, respectively. He was a lecturer in the Electrical and Electronic Engineering Department, Garyounis University, from 1995 to 1999. From September 1999 to July 2004 he was a researcher at the Control Engineering Laboratory, Helsinki University of Technology. He then held a lecturer position in the Department of Computer Science at the University of Vaasa, Finland, from August 2004 to July 2007. Since August 2007, he has been Professor and Head of the Telecommunications Engineering Group at the University of Vaasa. Helsinki University of Technology has granted him an Adjunct Professor position at the Automation Department for the period 2007-2012; his main research during this period will be related to wireless automation. He has published more than 50 peer-reviewed papers and reports. His main research interests include radio resource management in wireless communication, smart antennas, wireless automation, optimization techniques, game theory, ultra-wide-band (UWB), and data fusion.
Chapter 34
Herding Does Not Exist or Just a Measurement Problem? A Meta-Analysis NIZAR HACHICHA∗ , AMINA AMIRAT†,∗∗ and ABDELFETTAH BOURI‡ University of Economics and Management of Sfax P.O. Box 1088, 3018 Sfax, Tunisia ∗ hachicha [email protected] † [email protected] ‡ [email protected]
The topic of herd behavior is at the heart of a wide-ranging debate in the literature. While there is increasing interest in the attention paid by financial markets to the imitation phenomenon, the empirical results of studies performed in different markets still oscillate between the existence and non-existence of this bias. We use four different methodologies to test whether returns on the Toronto Stock Exchange (TSX) deviate from the behavior predicted by the capital asset pricing model. We also contribute by applying a meta-analysis to existing studies, estimating the impact of the different measures on the validation of herding. We find that the lack of herding evidence is due to the measures used, which present several shortcomings and cannot be applied directly to any stock market. Keywords: Herd behavior; meta-analysis; effect size; publication bias.
1. Introduction

Behavioral finance theory uses herding to describe the correlation in trades ensuing from investors' interactions. This concept suggests that it is reasonable for less sophisticated investors to imitate market gurus or to seek advice from successful investors, since using their own information would incur fewer benefits and more costs. The consequence of this herd behavior is, as Nofsinger and Sias (1999) noted, “A group of investors trading in the same direction over a period of time”. Empirically, this may lead to behavior patterns that are linked across individuals and that bring about systematic, erroneous decision-making by entire populations (Bikhchandani et al., 1992). So, in addition to news per se, investors' trading behavior can cause stock prices to deviate from their fundamentals; as a result, stocks are not appropriately priced.

∗∗ Corresponding author.
There is a variety of herding models in the literature. Generally, we separate models that produce rational prices in efficient markets from those that can potentially give rise to price bubbles and crashes, due to temporary price pressures pushing market prices away from fundamental values. The main rational models are those of Froot et al. (1992) and Hirshleifer et al. (1994). These models attribute herding to investors following the same sources of information: investors trade rationally in response to new information, and the resulting herding has a contemporaneous price impact as price pressures shift prices from some initial level towards their fundamental value. However, in both models herding ends when the information is fully reflected in prices. The irrational models come from the literature on fads and cascades. Some early work on fads in security markets was published by Dreman (1979) and Friedman (1984), and more recently by Barberis and Shleifer (2003). A variation on the work on fads is the informational cascades model presented by Banerjee (1992) and Bikhchandani et al. (1992). Essentially, these studies hypothesize that certain assets become popular for non-informational reasons and that investors may simultaneously seek to acquire large holdings of these assets. The joint action of many investors creates momentary price pressures that can drive prices up to unrealistically high levels so long as these stocks stay in vogue. When an asset ceases to be in vogue, its price may fall precipitously. The rise and fall of prices may not be associated with any relevant information in the market. An important feature of these models is that individual agent behavior is completely rational, but the framework in which agents operate can produce prices that do not correspond to fundamental value. Avery and Zemsky (1998) analyze the theoretical relations between herding (an informational cascade) and the informational efficiency of the market. They consider a context where each agent receives an independent, but noisy, signal of the true value of a financial asset; herding occurs if the agents ignore their signal and decide to buy or sell based on the trend in past trades. There is another stream in the herding literature which is based on a compensation-reputation scheme: an investor's compensation depends on how his/her performance compares to other investors' performance and on whether deviations from the consensus are potentially costly (Brennan, 1993; Maug and Naik, 1996; Rajan, 1994; Roll, 1992; Scharfstein and Stein, 1990; Trueman, 1994). Considered a non-quantifiable behavior, herding cannot be measured directly but can only be inferred by studying related measurable parameters. The studies conducted so far can generally be classified into two categories. The first category focuses directly on the trading actions of individual investors; a study of herd behavior then requires detailed and explicit information on the trading activities of the investors and the changes in their investment portfolios. Examples of such herd measures are the LSV measure by Lakonishok et al. (1992) and the PCM measure by Wermers (1995). In the second category, the presence of herd behavior is indicated by the group effect of collective buying and selling actions of the investors in an attempt to follow
the performance of the market or some factor. This group effect is detected by exploiting the information contained in cross-sectional stock price movements. Christie and Huang (1995), Chang et al. (2000), and Hwang and Salmon (2001, 2004, 2006) are contributors of such measures. The main objective of this study is to ascertain the existence of herding in specific markets. At the moment we are not interested in its origin: first we must detect herding before we can analyze it. In this chapter, we study herd behavior on the Toronto Stock Exchange (TSX). Some characteristics of the TSX suggest that herding could take place there. Despite rapid growth in trading volumes and market capitalization during the past decade, the TSX cannot be characterized as a particularly efficient market. Apart from energy corporations, the publicly traded companies on the TSX are small in terms of market capitalization and number, so there can be room for the informational asymmetries that trigger herd behavior on a stock exchange. Our methodology is subdivided into two parts. First, we apply to our data the four main measures of herd behavior, those of Lakonishok et al. (1992), Christie and Huang (1995), Chang et al. (2000), and Hwang and Salmon (2004), using monthly stock returns from January 2000 to December 2006. Our findings indicate evidence of herding using the LSV (1992) and HS (2004) measures, while the measures based on cross-sectional dispersion show that the TSX does not exhibit a herding effect. Second, we place our herding results in the framework of the herding literature using a meta-analysis of published and unpublished works. At the end, we summarize the shortcomings of the existing herding measures in the framework of our database. The remainder of this chapter is organized as follows: the second section summarizes the main studies in the financial literature that investigate the existence of herd behavior in various stock markets; the third section presents our database and the methodology used, describing the four measures of herding; in the fourth section, we report the results of the meta-analysis; finally, the last section concludes the chapter.

2. Herding Literature Review

Researchers have devoted extensive effort to exploring the investment behavior of investors. Previous evidence indicates that herding is common behavior in capital markets. Asch (1952) concludes that an individual rationally takes into account the information revealed by others' actions. These findings are further reinforced by Jost (1995): the tendency for people in groups to think and behave similarly seems to suggest some kind of irrationality, such as a psychological motivation to be in accord with group members; thus, they tend to observe others before making their own decisions. Banerjee (1992) and Bikhchandani et al. (1992) affirm that people acquire information in sequence by observing the actions of other individuals in their group who preceded them in the sequence. Banerjee (1992) illustrates that people would be doing what others were doing, even though their own information suggested
doing something quite different. On the other hand, Tvede (1999) indicates that human beings use the behavior of others as a source of information about a subject. Lakonishok et al. (1992) examined the impact of institutional trading on stock prices. Their results revealed only weak evidence of herd decisions by institutional investors among small capitalization stocks, and no evidence of herding among large capitalization stocks. Since the pioneering work of Lakonishok et al. (1992), this measure has been widely applied by several researchers. Grinblatt et al. (1995) discovered only feeble evidence that funds tend to sell and buy the same stocks simultaneously. Wermers (1999) provides an extensive analysis of the mutual fund industry; he finds that “stocks that herds buy outperform stocks that they sell by 4 percent during the following six months.” Nofsinger and Sias (1999) concentrate on the divergent paths of the herding and feedback trading literatures. They define herd behavior as a change in the portion of stocks held by institutions, even though the variation in ownership may come from one large institutional buy instead of many little buys. Coval and Moskowitz (2001) regress a herding measure on ownership variables and find that “there is a strong inverse relationship between herd activity and geographic proximity.” Hong et al. (2002) propose the idea that word-of-mouth communication influences investors' trading decisions; the authors show that buying/selling is highly correlated within a region. Moreover, Kim and Wei (2002a, b) document powerful herd behavior by foreign investors and off-shore investment funds in Korea during the same time period. Grinblatt and Keloharju (2000) demonstrate that foreigners who buy shares in Finland have net trades that are positively correlated with past and future returns; that is, the distant investors seem better informed in their sample. Using monthly return data from the period January 1990 to October 2000, Hwang and Salmon (2001) find evidence of herding towards the market portfolio in the United States during the period from January 1996 to July 1998, and in the United Kingdom between June 1997 and September 1998. They find that the propensity to herd during quiet periods is higher than that during periods of crisis, with the lowest levels of the herding measure obtained prior to the Russian crisis of 1998 and the Asian crisis of 1997. Gleason et al. (2003) use intraday data to examine whether traders herd during periods of extreme market movements. Gleason and Lee (2003) use the models proposed by both CH and CCK and find no evidence of herding, indicating that sufficient information exists within the ETF sector for investors to make informed decisions. Demirer and Kutan (2006) employ the CSSD methodology on both individual and sector-level data in China and find that herd formation does not exist in Chinese markets. Herd behavior tests have also been applied to exchange traded funds and futures markets. Weiner and Green (2004) apply both parametric and nonparametric methodologies and discover little evidence of herding in heating-oil and crude-oil futures.
Furthermore, Blasco and Ferreruela (2007a) propose a daily CSSD measure, which is notably lower in the Spanish market over the whole market return range when some familiar stocks are analyzed in the international context. Cajueiro and Tabak (2007) use daily data on Japanese individual stocks from January 2000 to February 2006; they find negative and significant coefficients only for down markets, suggesting the presence of herd behavior only in bear markets. A recent study of Chinese stock markets by Tan et al. (2008) reports that herding occurs under both rising and falling market conditions and is especially present among A-share investors. Hachicha et al. (2008) employ a new dynamic measure of herding and find evidence of this bias on the Tunisian stock exchange. Thus, the evidence from the studies cited above shows that most herd behavior is revealed in emerging markets rather than in advanced markets.

3. Empirical Evidence

3.1. Database

In this study, we test for herding in the Toronto market on the basis of its main index, the S&P/TSX60. The latter is a value-weighted index including the 60 most liquid stocks, selected on the basis of their participation in the market's turnover (number of transactions and trading value), and was officially launched on December 31st, 1997. We chose the top-capitalization index of the market in order to mitigate thin trading, which leads to errors in empirical estimations in finance. Our data include monthly stock returns and trading volume, both for the S&P/TSX60 and for its constituent stocks, and cover the period from January 2000 until December 2006, giving 84 observations. The historical constituent lists for the S&P/TSX60 were obtained from the website www.investcom.com. Monthly data are employed because herd behavior requires longer time horizons to affect stock market prices; the use of daily data unfairly restricts the ability of herd behavior to manifest itself in dispersions during periods of market stress (Christie and Huang, 1995).

3.2. Methodology

Recently, herding has generally been understood as a behavior describing the reaction of individual investors to public news, particularly when facing large price changes or excessive volatility. Research in the financial literature can be separated into two categories: a rational approach, proposed by Bikhchandani et al. (1992) and Scharfstein and Stein (1990), and an irrational approach, advocated by Devenow and Welch (1996). In reply to these two theoretical approaches, empirical studies have mostly focused on detecting the existence of herd behavior. Considered as a
non-quantifiable behavior, herding cannot be directly measured but only inferred by studying related measurable parameters. Research conducted so far can generally be divided into two categories. The first focuses directly on the trading actions of individual investors; the most important such herd measures are the LSV measure by Lakonishok et al. (1992) and the PCM measure by Wermers (1995). The second tries to detect herding by using the information contained in cross-sectional stock price movements; Christie and Huang (1995), Chang et al. (2000), and Hwang and Salmon (2001, 2004) are contributors of such measures. To develop a sophisticated test, there are basically two steps to go through. First, one must elaborate a theoretical model which allows for herding. Second, this model has to be translated into an empirical version which can be fed with historical data; such an empirical model is suited for statistical tests.

3.2.1. Lakonishok et al. (1992) measure

Considering the importance of the capital managed by institutional investors, several works are devoted to studying the herd behavior of fund managers. Lakonishok et al. (1992) propose a statistical measure of herd behavior (hereafter LSV) as the average tendency of a group of fund managers to buy or sell particular stocks at the same time, compared to the situation where everyone acts independently. The LSV measure is based on the transactions of a subgroup of participants during a given period. It is founded on the difference between the observed probability that a given stock is sold (or bought) by a group of fund managers during a given quarter and the probability that the same stock is sold (or bought) if every manager acts independently. The authors include an adjustment factor in order to correct the bias of the measure for stock-quarters where assets are not traded by a sufficient number of participants. When this measure is significantly different from zero, we can confirm the existence of herd behavior. LSV define the herding measure H_{i,t}, for stock i and period t, as follows:

H_{i,t} = |p_{i,t} - E[p_{i,t}]| - AF_{i,t} \qquad (1)
where pi,t is the proportion of managers who had a net purchase in stock i during period t: pi,t =
Nbr institutions buyingi,t (Bi,t ) Nbr institutions buyingi,t + Nbr institutions sellingi,t (Si,t )
(2)
and AFi,t = E[|pi,t − E[pi,t ]|] is the adjustment factor. Under the null hypothesis of no herding, we expect Hi,t to be insignificantly different from zero. LSV (1992) study the behavior of 796 American fund managers who trade on stock market handled by 341 different managers in order to empirically test the herd behavior. LSV find an average herding measure equal to 0.027, i.e., if the expected proportion of buyer managers is 50%, the 52.7% of fund managers change their
3.2.2. Christie and Huang (1995)

Efficient market models expect stocks to vary in their sensitivity to market fluctuations and market stress to provoke large price dispersion of individual stocks. Conversely, if investors herd on the market consensus rather than trading on firm-specific information, the dispersion will be reduced to a lower level during stress periods than during non-stress periods. The cross-sectional standard deviation (CSSD) proposed by Christie and Huang (1995) (hereafter CH) is used to measure herd behavior. Christie and Huang (1995) examined the investment behavior of market participants in the US equity markets. They argued that, when herding occurs, individual investors usually suppress their own information and valuations, resulting in a more uniform change in security returns. Therefore, they employed the cross-sectional standard deviation of returns (CSSD) as a measure of the average proximity of individual asset returns to the realized market average:

CSSD_t = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(R_{i,t} - R_{m,t})^2}   (3)
where R_{i,t} is the observed stock return of firm i at time t and R_{m,t} is the cross-sectional average of the N returns in the aggregate market portfolio at time t. Christie and Huang (1995) argue that rational asset pricing models predict that the dispersion will increase with the absolute value of the market return, since individual assets differ in their sensitivity to market returns. On the other hand, in the presence of herd behavior, security returns will not deviate too far from the overall market return. This behavior will lead to an increase in dispersion at a decreasing rate and, if the herding is severe, it may lead to a decrease in dispersion. Christie and Huang (1995) empirically examine whether equity return dispersions are significantly lower than average during periods of extreme market movements. They estimate the following empirical specification:

CSSD_t = \alpha + \beta_L D_t^L + \beta_U D_t^U + \varepsilon_t   (4)

where D_t^L = 1 if the market return on day t lies in the extreme lower tail of the distribution, and zero otherwise; and D_t^U = 1 if the market return on day t lies in the extreme upper tail of the distribution, and zero otherwise. The dummy variables aim to capture differences in investor behavior in extreme up or down markets versus relatively normal markets. A negative β_L means that investors herd around the market performance when the return trend is extremely negative (the downside), and a negative β_U the upside. Positive βs indicate the opposite. Using daily and monthly returns on US equities, Christie and Huang (1995) find a higher level of dispersion around the market return during large price movements, which is evidence against herding.
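To make the mechanics of Eqs. (3) and (4) concrete, a minimal Python sketch follows. It is only an illustration under our own assumptions: the simulated return matrix, the 5% tail cutoff and every variable name are ours, not part of Christie and Huang's study, and standard errors are omitted.

import numpy as np

def ch_test(returns, tail=0.05):
    """Christie-Huang (1995) dummy regression: CSSD_t on extreme-market dummies.

    returns : (T, N) array of individual stock returns.
    tail    : fraction of observations treated as extreme (assumed 5% here).
    """
    T, N = returns.shape
    rm = returns.mean(axis=1)                          # cross-sectional market average
    cssd = np.sqrt(((returns - rm[:, None]) ** 2).sum(axis=1) / (N - 1))  # Eq. (3)

    lo, hi = np.quantile(rm, [tail, 1 - tail])
    DL = (rm <= lo).astype(float)                      # extreme down-market dummy
    DU = (rm >= hi).astype(float)                      # extreme up-market dummy

    X = np.column_stack([np.ones(T), DL, DU])          # Eq. (4): CSSD = a + bL*DL + bU*DU + e
    beta, *_ = np.linalg.lstsq(X, cssd, rcond=None)
    return beta            # [alpha, beta_L, beta_U]; negative betas would suggest herding

# illustrative use on simulated data
rng = np.random.default_rng(0)
print(ch_test(rng.normal(0, 0.05, size=(84, 60))))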
3.2.3. Chang et al. (2000)

Chang et al. (2000) (hereafter CCK) extend the model of Christie and Huang and introduce a new and more powerful approach, the cross-sectional absolute deviation of returns (hereafter CSAD):

CSAD_t = \frac{1}{N}\sum_{i=1}^{N}|R_{i,t} - R_{m,t}|   (5)

The method formulated by Chang et al. (2000) is based on a general quadratic relationship between CSAD_t and R_{m,t} of the form:

CSAD_t = \alpha + \gamma_1 |R_{m,t}| + \gamma_2 R_{m,t}^2 + \varepsilon_t   (6)
The estimator γ_2 is designed to capture differences in trader behavior during market stress periods. In line with the rational asset pricing model, equity return dispersions are increasing and linear functions of the market return. Chang et al. (2000) state that if investors herd during periods of relatively large price swings, the relation between the average market return and CSAD will be inverse and non-linear. In other words, in Eq. (6), if the coefficient γ_2 is negative and statistically significant, then market participants herd when the market is up or down. So, if herding is present, the non-linear coefficient γ_2 will be negative and statistically significant; otherwise, a statistically positive γ_2 indicates no evidence of herding. Chang et al. (2000) examine monthly data for several international stock markets and find no evidence of herding in developed markets, such as the United States, Japan, and Hong Kong. However, they do find a significant non-linear relationship between equity return dispersion and the underlying market price movement in the emerging markets of South Korea and Taiwan, providing evidence of herding.
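A corresponding sketch of the CCK regression of Eqs. (5) and (6) might look as follows; again, the simulated data and all names are illustrative assumptions, and significance testing is omitted.

import numpy as np

def cck_test(returns):
    """Chang-Cheng-Khorana (2000) test: regress CSAD_t on |Rm_t| and Rm_t^2 (Eq. (6)).

    returns : (T, N) array of individual stock returns; the equal-weighted
    cross-sectional mean is used as the market return (an assumption here).
    """
    rm = returns.mean(axis=1)
    csad = np.abs(returns - rm[:, None]).mean(axis=1)   # Eq. (5)
    X = np.column_stack([np.ones_like(rm), np.abs(rm), rm ** 2])
    coef, *_ = np.linalg.lstsq(X, csad, rcond=None)
    alpha, g1, g2 = coef
    return alpha, g1, g2     # herding only if gamma2 is negative and significant

rng = np.random.default_rng(1)
print(cck_test(rng.normal(0, 0.05, size=(84, 60))))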
3.2.4. Hwang and Salmon (2004)

Hwang and Salmon (2004) (hereafter HS) developed a new measure in their study of the US and South Korean markets. This model is price-based and measures herding on the basis of the cross-sectional dispersion of the factor sensitivities of assets. More specifically, HS (2004) argued that when investors are behaviorally biased, their perceptions of the risk-return relationship of assets may be distorted. If they do indeed herd towards the market consensus, then it is possible that, as individual asset returns follow the direction of the market, CAPM betas will deviate from their equilibrium values. In the event of herding prevailing in the market, the cross-sectional dispersion of the stocks' betas would be expected to be smaller, i.e., asset betas would tend towards the value of the market beta, namely unity. It is on these very premises that their herding measure is based. More specifically, they assume the equilibrium beta (\beta_{i,m,t}) and its behaviorally biased equivalent (\beta_{i,m,t}^b) are related as follows:

E^b(R_{i,t})/E(R_{m,t}^b) = \beta_{i,m,t}^b = \beta_{i,m,t} - h_{m,t}(\beta_{i,m,t} - 1)   (7)

where E^b(R_{i,t}) is the behaviorally biased conditional expectation of excess returns of security i in period t, E(R_{m,t}^b) is the behaviorally biased conditional expectation of excess market returns at time t, and h_{m,t} is a time-varying herding parameter (h_{m,t} ≤ 1). After several derivations, HS obtain the standardized beta herding measure:

H_{m,t}^* = \frac{1}{N_t}\sum_{i=1}^{N_t}\left(\frac{b_{i,m,t} - 1}{\hat{\sigma}_{\varepsilon_i,t}/\hat{\sigma}_{m,t}}\right)^2   (8)

where \hat{\sigma}_{m,t} is the sample standard deviation of market returns at time t and \hat{\sigma}_{\varepsilon_i,t} is the sample standard deviation of the OLS residuals. Hwang and Salmon (2004) assume that the herding parameter follows an AR(1) process, and their model becomes

H_{m,t} = \phi_m H_{m,t-1} + \eta_{m,t}   (9)

where \eta_{m,t} \sim iid(0, \sigma_{m,\eta}^2). If \sigma_{m,\eta}^2 = 0, then H_{m,t} = 0 and there is no herding. Conversely, a significant value of \sigma_{m,\eta}^2 would imply the existence of herding and (as the authors state) this would be further reinforced by a significant \phi_m. The absolute value of the latter is taken to be smaller than or equal to one, as herding is not expected to be an explosive process.
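The following sketch computes only the cross-sectional standardized beta dispersion of Eq. (8) from OLS estimates on a single window; the Kalman-filter estimation of the state equation (9), which HS use to extract the herding parameter, is beyond this illustration. The simulated data and the function name are our assumptions.

import numpy as np

def hs_beta_herding(returns, market):
    """Standardized cross-sectional beta dispersion of Eq. (8) for one window.

    returns : (T, N) excess returns of individual stocks.
    market  : (T,) excess market returns.
    Fitting the AR(1) state equation (9) would require a Kalman filter,
    which is omitted here.
    """
    T, N = returns.shape
    X = np.column_stack([np.ones(T), market])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)  # OLS column by column
    betas = coef[1]                                     # b_{i,m,t}
    resid = returns - X @ coef
    sig_eps = resid.std(axis=0, ddof=2)                 # residual std per stock
    sig_m = market.std(ddof=1)
    return np.mean(((betas - 1.0) / (sig_eps / sig_m)) ** 2)

rng = np.random.default_rng(2)
m = rng.normal(0, 0.04, 60)
r = np.outer(m, rng.uniform(0.5, 1.5, 20)) + rng.normal(0, 0.05, (60, 20))
print(hs_beta_herding(r, m))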
3.3. Empirical Results

The major objective of this research is to assess whether firms on the TSX exhibit herding and to what extent. For this reason we fed our historical data to the four measures of herding.

3.3.1. Empirical results of the LSV (1992) measure

In order to calculate the LSV measure, we assume that B_{i,t} (S_{i,t}) is the number of stocks whose returns are positive (negative); E[p_{i,t}] is calculated as the average of p_{i,t} over all stocks i in month t.
The adjustment factor AF_{i,t} is calculated using a binomial distribution under the hypothesis of no herding:

AF_{i,t} = \sum_{B_{i,t}=0}^{N} C_N^{B_{i,t}} \, E[p_{i,t}]^{B_{i,t}} (1 - E[p_{i,t}])^{N-B_{i,t}} \times \left|\frac{B_{i,t}}{N} - E[p_{i,t}]\right|   (10)
For each month, we calculate the H_{i,t} measure according to Eq. (1). Our first assessment is to analyze the evolution of this measure during the 84 months of our sample; this evolution is depicted in Fig. 1. In the next step, the herding measures are computed for each stock-month. Results are reported in Table 1. In Table 1, we present the overall level of herding exhibited by our sample for the whole period from 2000 to 2006. The herding measure of 11.09% shown in Table 1 is the Lakonishok et al. (1992) measure of herding computed over all stock-months during the 6-year period. This positive and significant measure can be interpreted as meaning that, if 100 investors trade a given stock, then approximately 11 more investors trade on one side of the market than would be expected if there were no positive feedback trading between traders. In other words, if the number of changes in holdings was, a priori, equally balanced between positive and negative changes, then 61.09% (50% + 11.09%) of investors trade in one direction and the remaining 38.91% (50% − 11.09%) trade in the opposite direction.
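As an illustration of Eqs. (1), (2) and (10), the following Python sketch computes the LSV statistic for a single hypothetical stock-month; the counts and the value of E[p_{i,t}] are invented for the example.

import numpy as np
from scipy.stats import binom

def lsv_measure(B, S, p_bar):
    """LSV herding measure for one stock-month (Eqs. (1), (2), (10)).

    B, S  : number of 'buyers' and 'sellers' (here, as in the chapter,
            stocks with positive/negative returns stand in for traders).
    p_bar : E[p_{i,t}], the average buy proportion across stocks that month.
    """
    n = B + S
    p = B / n                                     # Eq. (2)
    # Eq. (10): adjustment factor = E|p - E[p]| under a Binomial(n, p_bar) null
    b = np.arange(n + 1)
    af = np.sum(binom.pmf(b, n, p_bar) * np.abs(b / n - p_bar))
    return abs(p - p_bar) - af                    # Eq. (1)

# hypothetical stock-month: 38 of 60 stocks rose, monthly average p_bar = 0.55
print(lsv_measure(B=38, S=22, p_bar=0.55))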
Figure 1. This figure shows the overall evolution of the LSV measure in our sample. The measure seems to follow a cyclical herding pattern and does not vary much across the 6-year sample. The figure shows a significant magnitude of herding around bearish periods, when market returns are in the extreme lower tail. This result is consistent with our image of investors being frantic to herd in "abnormal" circumstances.
Table 1. Estimation of the LSV measure. This table reports the average herding measure with the corresponding t-statistic for the entire sample.

          Mean      t-statistic
H_{i,t}   0.1109    9.310a

a Denotes significance at the 5% level.
The overall level of herd behavior we find is much higher than that reported in previous studies using UK and US mutual and pension fund data. The overall level of herding in our study is close to what has been reported by Choe et al. (1999) in their study of the herd behavior of foreign individual investors in the Korean stock market (they find no herding measure below 20%), the study of Lobao and Serra (2006), who find an average herding measure of 11.38% on the Portuguese stock exchange, and the work of Bowe and Domuta (2004) on the Jakarta stock exchange.

3.3.2. Empirical results of the CH (1995) measure

We compute the cross-sectional standard deviation as defined in Eq. (3); its evolution is shown in Fig. 2.
Figure 2. Evolution of the cross-sectional standard deviation. This figure unveils the CSSD for the market index over the sample, which varies substantially in some periods. We record three major jumps during our sample period, corresponding to February 2000, April 2002, and July 2003. Dispersions appear to be moderate, which reflects the absence of herd behavior.
But this cannot by itself guarantee the absence of herding. So, we test an additional model to obtain a more comprehensive analysis (Fig. 3):

CSSD_t = \alpha + \gamma_1 |R_{m,t}| + \gamma_2 R_{m,t}^2 + \varepsilon_t   (11)
Focusing on the areas where realized average monthly returns were negative and positive, the estimated coefficients and the corresponding t-statistics for our model (Eq. (11)) are reported in Table 2. Looking at the results of Eq. (11) presented in Table 2, with CSSD as the dependent variable, we see that the linear term is positive but, like γ_2, not statistically significant. This result points to the absence of herding during periods of high market stress and indicates that herding is not a phenomenon that characterizes the TSX. On the contrary, the results indicate that during periods of market stress, investors trade away from the market consensus as proxied by the TSX60. Hence, the prediction of the rational asset pricing model has not been violated.
Figure 3. The relationship between the monthly cross-sectional standard deviation and the corresponding market return (TSX60) for Toronto (January 2000–December 2006). We plot the CSSD measure for each month against the corresponding market return using stock return data over the period from January 2000 to December 2006. The CSSD–market return relation does indeed appear to be non-linearly positive.

Table 2. Regression of the herding measure. This table reports the regression results with the independent variables equal to the absolute return on the market index and the squared market return (t-statistics in parentheses).

        TSX60
α       0.136884 (13.2316)a
γ_1     0.078965 (0.403390)
γ_2     2.659808 (0.949972)

a Denotes statistical significance at the 5% level.
The results in Table 2 are similar to those reported by Christie and Huang (1995) and Chang et al. (2000), who also do not find evidence of herding in US equity markets.

3.3.3. Empirical results of the CCK (2000) measure

First, we calculate the CSAD as defined in Eq. (5). The evolution of this measure is shown in Fig. 4. According to the results shown in Table 3, with CSAD as the dependent variable, we see that the linear term is positive and nonsignificant, while the non-linear term is positive and significant. This result points to the absence of herding during periods of high market stress and indicates that herding is not a phenomenon that characterizes the TSX.
Figure 4. Evolution of the CSAD. This graph unveils the CSAD for the market index over the sample, which varies substantially in some periods. We record major jumps during our sample period, corresponding to February 2000 and 2001. Dispersions appear to be moderate, which reflects the absence of herd behavior. But this cannot by itself be a guarantee of the absence of herding.

Table 3. Regression of the herding measure. This table reports the regression results with the independent variables equal to the absolute return on the market index and the squared market return (t-statistics in parentheses).

        TSX60
α       0.075521 (13.05958)a
γ_1     0.013214 (0.053109)
γ_2     4.890322 (2.242475)a

a Denotes statistical significance at the 5% level.
3.3.4. Empirical evidence of the HS (2004) measure

In this subsection, we elaborate the empirical test of herding along the lines of Hwang and Salmon (2004). This test allows us to detect herd behavior in the Toronto stock market (Fig. 5). As Table 4 illustrates, both the persistence parameter (φ_m) and the standard deviation (σ_{m,η}) of the state-equation error (η_{m,t}) are statistically significant. These results indicate the presence of significant herding towards the Toronto Index during our sample period.
Figure 5. The evolution of herding, shown diagrammatically. We observe that herding presents a multiplicity of short-lived fluctuations; it also assumes more distinctive (smoother) directional movements. As the figure also illustrates, herding assumes values well above unity (between 1.0383221 and 45.6360723), which indicates that extreme degrees of herding towards the Toronto Index were observed during our sample period.

Table 4. Regression of the herding measure (t-statistics in parentheses).

                                        TSX60
µ_m                                     −0.0039159 (2.0070163)a
φ_m                                     0.796223989 (5.7626789)a
σ_{m,υ}                                 0.089301848 (2.250574)a
σ_{m,η}                                 0.03148151 (3.16108)a
σ_{m,η} / log[Std_c(β^b_{i,m,t})]       0.431773951

a Denotes statistical significance at the 5% level.
The bottom row of Table 4 provides the signal-proportion value, calculated by dividing σ_{m,η} by the time-series standard deviation of the logarithmic cross-sectional standard deviation of the betas, log[Std_c(β^b_{i,m,t})], which according to Hwang and Salmon (2004) indicates what proportion of the variability of log[Std_c(β^b_{i,m,t})] is explained by herding. As Hwang and Salmon (2004) showed empirically in their paper, the bigger the value of this signal-to-noise ratio, the less smoothly herding evolves over time. Our results show that the signal-proportion value is about 43 percent. The findings of Table 4 appear robust to the inclusion of market direction and market volatility in the original Hwang and Salmon (2004) model.

After applying the four measures of herd behavior to the TSX, we find contradictory results. The LSV and HS measures give evidence of herd behavior, while the CSSD and CSAD confirm the absence of this bias.

4. Generalization of Herd Behavior in the Literature Review: Meta-Analysis

Statistically, the simplest and most straightforward meta-analysis could be done if we had data from several studies which contained exactly the same groups or treatments, administered under exactly the same conditions. In our study, we performed a meta-analysis on four groups of studies. Each group focused on one herding measure: the LSV measure of Lakonishok et al. (1992), the CSSD measure of Christie and Huang (1995), the CSAD measure of Chang et al. (2000), and the HS measure of Hwang and Salmon (2004).

4.1. Literature Sampling

To review the empirical studies on herding measurement, we examine four computerized databases: ABI/INFORM Global and JSTOR for published articles, NBER and SSRN for working papers. As a complementary source, we also use two popular search engines: Google and Yahoo. We query each of these databases with the terms "herd behavior" or "herd measurement." In particular, we retain studies which evaluate herd behavior using one of the four measures cited above (LSV (1992), CSSD (1995), CSAD (2000), and HS (2004)). In addition, we exclude all papers that do not present detailed results, such as sample size and outcome statistics (e.g., t-statistic, correlation, χ²) that allow the computation of an effect (Hunter and Schmidt, 1994; Rosenthal, 1994). Our search returns 56 papers containing 23 studies using the LSV (1992) measure, 14 papers based on Christie and Huang's (1995) approach, 11 works performed with the CSAD (2000) measure and 8 studies applying the HS (2004) measure (see Appendix 1).
Then, we read all the articles in our sample in order to extract the relevant data. Specifically, we collected study characteristics, sample sizes, study periods, the stock markets studied, and the test statistics. However, some papers apply a herding measure to several samples, i.e., we have multiple cases in a single paper. Our solution consists in taking every sample as an independent research case, because the unit of observation of the meta-analysis is the regression and not the paper. Finally, we record 36, 33, 38, and 31 cases for the LSV, CH, CCK and HS measures, respectively.

4.2. Methodology

Even though the theoretical literature on herd behavior is well developed, the empirical literature has performed only indirect tests of the various herding theories, which stems from an inability to observe the reasons why agents make their decisions. Consequently, tests have been developed that are consistent with the existence of herding. However, the tests are typically necessary, but not sufficient, for herding. Therefore, it is not known to what extent agents accommodate the decisions of others in their decision-making. A major limitation of any test of herding is the inability to separate intentional herding from coincidental decision-making, in which agents may appear to make similar decisions through possessing similar information, while paying no attention to the actions of others. Such an observation would result in correlated decisions, but would not imply the existence of herding. The contradictory results between herding studies cannot be ignored. So, for several reasons, a simple narrative review is not able to provide insights about the best measure to use for a given data set:

— First, the herding literature lacks acceptable rules of inference for going from the findings of studies to overall generalizations about the research literature. Most measures give evidence of herding only within the framework of the study. If we change the sample, we may find contradictory results, especially when we move from developed to emerging markets. For example, applying the CSAD measure introduced by Chang et al. (2000) to emerging markets provides evidence of herding in the South Korean and Taiwanese markets, but does not support the existence of herding in the Tunisian stock market (Hachicha et al., 2008). So, the evidence cannot be generalized and no measure is close to being validated;
— Second, the validation of herding in the literature relies on statistical significance for evaluating and comparing studies. Significance depends on sample size, so a weak effect can be made to look stronger simply by adding more participants. For example, Farber et al. (2006) used the CH measure to test herding in the Vietnamese stock market using daily data over four years. For a sample of 32 stocks, the authors validated the existence of herding. In the same context, Hu (2007) studied herding and widened the sample to 45 stocks.
The study shows no evidence of herding. Hence, the size effect must be taken into consideration to validate the accuracy of a herding measure;
— Third, herding literature reviews are not well suited to analyzing the impact of moderating variables. Authors of narrative reviews rarely reach clear conclusions regarding how methodological variations influence the strength of herding;
— Finally, a standard literature review cannot draw valid conclusions because it is based only on published papers, which often provide significant findings. Actually, the narrative review of herding is more affected by this final bias, so a good analysis actively seeks unpublished findings.

One way to address these problems is to gather all the studies that have been done on a topic and try to assess the results together. For this reason, we conduct a meta-analysis of the studies. This quantitative method of combining results has been available since the early 1900s, but it was not until Glass's paper (1976) that the term meta-analysis was coined. A meta-analysis is a quantitative method of combining the results of independent studies (drawn from the published and non-published literature) and synthesizing summaries and conclusions which may be used to evaluate studies. It aims at gleaning more information from existing data by pooling the results of smaller studies and applying one or more statistical techniques. It is appropriate whenever we have multiple studies which test the same or similar hypotheses and the joint results of the studies do not clearly indicate the outcome of the test. There are generally at least seven (7) steps to performing a meta-analysis (Kulinskaya et al., 2008):

1. Decide on the topic.
2. Decide on the hypothesis being tested.
3. Review the literature for all studies which test that hypothesis. While this literature review may begin with a computerized search of the literature, such searches may miss important studies. Therefore other methods should be used, such as careful study of the references in articles, examination of unpublished papers, abstracts, and presentations, and other sources of unpublished work (including government agencies and rejected submissions). This needs to be done carefully to minimize bias.
4. Evaluate each study carefully, to decide whether it is of sufficient quality to be worthy of inclusion, and whether it includes sufficient information to be included. This includes attention to endpoints, choice of the measure of effect size, and other information about quality. This task, too, needs to be done carefully to minimize bias.
5. Create a database containing the information necessary for the analyses.
6. Perform the meta-analysis.
7. Interpret the results.
4.3. Statistical Procedures

Knowing the statistical procedure is of great importance in choosing which test is needed and why. In the following, we present several formulae which are helpful in understanding the calculations used in the procedures. The first step in a meta-analysis is the computation of Fisher's Z by transforming the correlation values (r) according to the approach of Hunter and Schmidt (2004):

Z = \frac{1}{2}\ln\left(\frac{1+r}{1-r}\right)   (12)

Furthermore, the variance of the transformed Z (v_i) is estimated simply as

v_i = \frac{1}{n_i - 3}   (13)
where n_i is the sample size of study i. Once an effect size is estimated for each study, the next step is to summarize these results from the individual studies to gain an understanding of the overall effect. As stated earlier, this analysis allows the aggregation of the results, which is not the simple sum of the data obtained from all the studies, but a procedure that "weights" the results of each study according to its precision:

ES = \frac{\sum_{i=1}^{k} w_i z_i}{\sum_{i=1}^{k} w_i}   (14)
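The formulas above translate directly into code. The sketch below is a minimal illustration with invented correlations and sample sizes; setting tau2 = 0 reproduces the fixed-effects weighting discussed next.

import numpy as np

def fisher_z(r):
    """Fisher's Z transform of a correlation, Eq. (12)."""
    return 0.5 * np.log((1 + r) / (1 - r))

def weighted_effect(z, n, tau2=0.0):
    """Weighted mean effect size, Eq. (14); weights from Eq. (13) plus,
    for the random-effects model, the pooled variance tau2."""
    v = 1.0 / (np.asarray(n) - 3)        # Eq. (13)
    w = 1.0 / (v + tau2)                 # fixed effects when tau2 = 0
    return np.sum(w * z) / np.sum(w), w

# three hypothetical studies: correlations and sample sizes are made up
r = np.array([0.30, 0.45, 0.10]); n = np.array([84, 120, 60])
es, w = weighted_effect(fisher_z(r), n)
print(es)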
Here the procedure used in the calculation may be divided into two categories: the fixed effects model and the random effects model. In the fixed effects model, the true treatment difference is considered to be the same for all studies, and the standard error of each study is based on sampling variation within the study. In the random effects model, the true treatment difference in each study is itself assumed to be a realization of a random variable, usually assumed to be normally distributed. As a consequence, the standard error of each study estimate is increased by the addition of this between-study variance (τ²):

\tau^2 = \begin{cases} \dfrac{Q - (k-1)}{\sum w_i - \sum w_i^2 / \sum w_i} & \text{for } Q > k - 1 \\[4pt] 0 & \text{for } Q \leq k - 1 \end{cases}   (15)
where Q is the heterogeneity statistic and k the number of studies. The major difference between the two models is the computation of the weights. Whereas in the fixed model the weight of each study (w_i) is given by the inverse of its variance, in the random model the weight of each study (w_i^*) is equal to the inverse of the study's variance plus an estimate of the pooled variance (τ²):
inverse of the study’s variance plus an estimate of the pooled variance (τ 2 ). wi =
1 vi
w∗i =
1 1 vi
+ τ2
.
The choice between fixed and random effects models is based on the heterogeneity test. When there is no heterogeneity between studies, both models lead to the same overall estimate and standard error. As heterogeneity increases, the standard error of the overall estimate from the random effects model increases relative to that from the fixed effects model. To test whether the sample effect sizes are themselves homogeneous (drawn from a single population), one uses the Q statistic proposed by Cochran (1954) and defined by Hedges and Olkin (1985) as a form of weighted sum of squares:

Q = \sum_{i=1}^{k} w_i (z_i - ES)^2   (16)
If we assume that the conditional within-study variance is known, then under the null hypothesis of homogeneity (H_0: τ² = 0), the Q statistic has a chi-square distribution with k − 1 degrees of freedom. Thus, Q values higher than the critical point for a given significance level (α) enable us to reject the null hypothesis and conclude that there is statistically significant between-study variance. One problem with the Q statistic is that its statistical power depends on the number of studies, being very low or very high for a small or a large number of studies, respectively. Higgins and Thompson (2002) have recently proposed the I² index, which quantifies the extent of heterogeneity in a collection of effect sizes by comparing the Q value with its expected value under homogeneity, that is, with its degrees of freedom:

I^2 = \begin{cases} \dfrac{Q - (k-1)}{Q} \times 100\% & \text{for } Q > k - 1 \\[4pt] 0 & \text{for } Q \leq k - 1 \end{cases}   (17)
The I² index can easily be interpreted as a percentage of heterogeneity, that is, the part of the total variation that is due to the between-study variance τ², i.e., to heterogeneity rather than to chance. Generally, we can choose between two models of meta-analysis, the "fixed" and the "random" model. If I² > 75%, then the heterogeneity is high and we should use a random effects model for the meta-analysis. After choosing the adequate model, we can calculate the effect size according to formula (14). After that, we compute the standard error (SE), variance (V), confidence interval (CI), and the z value of the effect size according to the following equations.
The variance of the combined effect is defined as the reciprocal of the sum of the weights:

V = \frac{1}{\sum_{i=1}^{k} w_i}   (18)
The standard error of the combined effect is then the square root of the variance:

SE = \sqrt{V}   (19)

The 95% confidence interval for the combined effect is computed as

CI = [ES - 1.96 \times SE;\ ES + 1.96 \times SE]   (20)

Finally, if one were so inclined, the Z value could be computed using

Z\text{-value} = ES/SE   (21)

For a one-tailed test, the p value is given by

p\text{-value} = 1 - \varphi(Z\text{-value})   (22)
where φ(Z) is the standard normal cumulative distribution function.

4.4. Results

As detailed in the previous section, we focus on the effect of LSV, CSSD, CSAD, and HS as measures of herd behavior in financial markets. While several previous works have not shown strong evidence of the existence of this phenomenon, here we have two distinct objectives. First, we intend to obtain an estimate of the most accurate measure of this bias starting from the slopes reported in the studies examined. Second, we aim to explore to what extent the inclusion of several unpublished studies affects the research design of herd behavior. In this chapter we conducted four meta-analyses to select the best measure for detecting herd behavior. For each analysis we computed an effect size which is based on the correlation coefficient and gives the relative strength of the association between herding and market return (especially during market stress periods). We explored this link by harvesting from each study the herding parameter (measured differently across studies) and its related t-statistic.

4.4.1. The heterogeneity test

To make a choice between fixed and random effects, we must first test the heterogeneity hypothesis. For this reason, we compute both the Q statistic and the I² index. The results are presented in Table 5.
Table 5. Results of the heterogeneity test.

Herding measure   K    Q        p        I² index (%)   Effect model
LSV (1992)        36   4928.8   0        99.28          Random
CH (1995)         33   903.45   0        96.45          Random
CCK (2000)        38   968.9    0        96.81          Random
HS (2004)         31   43.263   0.0555   30.65          Fixed
This table shows the large values of the Q statistic for the first three meta-analyses (LSV, CH and CCK). With K being the number of studies, the chi-square critical values for the LSV, CH and CCK meta-analyses are 49.80, 46.19 and 52.19, respectively. As the Q statistics are clearly superior to the critical values, we can reject the null hypothesis of homogeneity. Because each meta-analysis has a different number of studies k, the Q statistics are not comparable. However, the I² indices enable us to assess the extent of true heterogeneity as a percentage of total variation. So, for the first three meta-analyses, their respective Q values only inform us about the existence of heterogeneity, whereas the I² values allow us to identify the LSV (1992) meta-analysis as showing the largest heterogeneity (I² = 99.28%) in comparison with the other two. Since the I² values are above 75%, we can state that these three meta-analyses present a high magnitude of heterogeneity. In other words, more than 96% of the total variation in each set of effect sizes is due to true heterogeneity, that is, to between-study variance. Another scheme is reflected in the last meta-analysis. Its Q value lies below the critical value of 43.77, which does not allow us to reject the null hypothesis of homogeneity. So, we conclude that there is no statistically significant between-study variance. The I² index indicates that only 30% of the total variability is due to between-study variance, which reflects a low heterogeneity. The above results lead us to apply the random effects model for the LSV, CH and CCK meta-analyses and the fixed effects model for the HS (2004) meta-analysis.

4.4.2. Forest plots

A key element in any meta-analysis is the forest plot, which serves as the visual representation of the data (Lewis and Clarke, 2001). In this plot, each study as well as the combined effect is depicted as a point estimate bounded by its confidence interval. The plot, as suggested by its appellation, allows the researcher to see both the forest and the trees (Gioacchino, 2005). It shows whether the overall effect is based on many studies or a few; on studies that are precise or imprecise; and whether the effects of all studies tend to line up in a row, or vary substantially from one study to the next. The plot puts a face on the statistics, helping to ensure that they will be interpreted properly, and highlighting anomalies such as outliers that require attention (Whitehead, 2002).
In Appendix 2, the effect size is represented by Fisher's Z on the vertical axis. The x axis represents the studies contained in the four meta-analyses. For each study there is a symbol marking the point estimate and a line joining the lower and upper limits of the 95% confidence interval. This type of display is good at providing information on the magnitude of each study estimate and its precision. The relative precision of two study estimates can be seen by comparing the widths of their confidence intervals (CIs). The vertical lines (whiskers) through the circles depict the length of the CIs: the longer the lines, the wider the CIs and the less precise the study results. Arrows indicate that the CI is wider than there is space for in the graph. In general, the bigger the sample size and the narrower the CI, the greater the weight of the study.

4.4.3. Summary statistics

In this section we compute pooled statistics for each meta-analysis, including the effect size, standard error, variance, 95% confidence interval and the z value. Results are summarized in Table 6. We calculate pooled estimates of the LSV, CH and CCK herding statistics for samples of 36, 33, and 38 regressions, respectively, under the random effects model. We record a medium-size effect for the LSV measure and a small one for the two other measures. The calculation of the random effect requires first the computation of the fixed one. The LSV measure has the smallest fixed effect (Fixed LSV = 0.124; Fixed CH = 0.209; Fixed CCK = 0.149) because of its high heterogeneity, which implies a high between-study variance (τ²_LSV = 0.113; τ²_CH = 0.047; τ²_CCK = 0.045). We remark that the random effect is higher than the fixed one, which can be explained by the fact that the between-study variance is not null. The assumption of the fixed effects method that there is one population effect size is rather unrealistic, given that we are combining estimates of studies with widely varying characteristics, and the measure of herding is an average across different sets of countries and regions. For the HS measure we calculate only the fixed effect because of the homogeneity of the 31 studies. We find a very small value compared to the other meta-analyses, which reflects the weak evidence for this measure. The same remark appears from reading the standard error column, where the HS effect size has a smaller value while the LSV measure presents a larger one.
Summary statistics.
ES
SE
V
CI
Z
0.765 0.339 0.278 0.055
0.062 0.041 0.040 0.007
0.004 0.002 0.002 0.000049
[0.643; 0.888] [0.257; 0.420] [0.200; 0.356] [0.042; 0.068]
12.276a 8.189a 7.002a 8.227a
Even for the standard error, the CH and CCK measures present similar results, which can be explained by the great resemblance in the formulation of these measures. The same scheme is replayed for the lengths of the confidence intervals; the LSV meta-analysis has a larger one, which means that its effect size is the more accurate one. The table also reports the z score for the weighted mean effect sizes. This score is defined as the ratio between the effect size and its standard error. Here, we remark that the HS measure has a score similar to those of the CH and CCK measures. So, we conclude that when the between-study variance is small, the random model overestimates the standard error and underestimates the z score associated with the mean (Hedges and Vevea, 1998). For the LSV meta-analysis only, we also record that three studies present larger effect sizes than the others. These studies are: a published study by Walter and Weber (2006) and the two unpublished works of Ohler and Chao (2000) and Lobao and Serra (2002). These three studies find evidence of herd behavior using the LSV measure with a higher significance. By omitting these studies, the random effect size loses 36% of its value and falls to 0.49. This statement raises the issue of publication bias.

4.4.4. Publication bias

A crucial issue in meta-analysis is whether the meta-sample is subject to publication bias, either because of self-censoring by authors or because editors of journals make publication decisions partly on the basis of the significance levels of the main effect being studied. Publication bias is defined as "a bias against negative findings on the part of those involved in deciding whether to publish a study" (Soeken and Sripusanapan, 2003). One of the advantages of meta-analysis over a conventional literature review is that the quantitative nature of meta-analysis allows testing and correcting for the occurrence of publication bias. Various tests have been developed and, although some of them have been shown not to be overly powerful in detecting publication bias, we proceed by using the so-called "funnel plot" (Macaskill et al., 2001). A funnel plot is a graphical presentation introduced by Light and Pillemer (1984) in order to detect publication bias. In this graph all studies are plotted on the x and y axes. The x axis represents the effect size while the y axis shows the precision. The latter is represented by standard errors, following Egger et al. (1997) and Sterne et al. (2005). The standard error is recommended because smaller studies thereby receive more emphasis, which is where bias is more likely to be found. Figure 6 shows funnel plots for the four meta-analyses. The funnel plots are asymmetric to different degrees. Only the studies included in the LSV funnel plot seem to be distributed quite symmetrically around the axis represented by the pooled effect. We also record that the smallest studies are missing in the right bottom corner. In the HS funnel, we see that large studies appear toward the top of the graph and tend to cluster near the mean effect. In both the HS and CH graphs, we record a higher concentration on the right side of the mean than on the left.
Figure 6. Funnel plots for the four meta-analyses. Standard errors on the vertical axis are plotted against effect sizes on the horizontal axis. Rhombuses indicate studies, while the black line indicates the predicted effect size.
It also appears in all four graphs that the smaller, less precise studies are much more positive than the larger, more precise studies, and there are no smaller studies on the left (negative) side of the graph. This appears to be a good example of publication bias: smaller studies are likely to be published only if they meet the criteria of statistical significance. Egger et al. (1997) suggest a test of funnel asymmetry in which the standardized effect size is regressed against precision; the constant being significantly different from zero then provides evidence of publication bias. The estimated constants for our meta-analyses are grouped in Table 7.
Table 7. Egger et al. (1997) regression results.

              LSV        CH          CCK         HS
Constant      −0.06219   4.93520     5.26722     1.989
t-statistic   −0.25058   4.200817a   3.504707a   7.21197a

a Indicates statistical significance at the 5% level.
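A sketch of the Egger regression follows. We use the usual formulation, in which the standardized effect es/se is regressed on precision 1/se; the four effect sizes and standard errors are invented, and the t-statistic is the ordinary OLS one.

import numpy as np

def egger_test(es, se):
    """Egger et al. (1997) funnel-asymmetry regression: a constant
    significantly different from zero suggests publication bias."""
    y = np.asarray(es) / np.asarray(se)
    X = np.column_stack([np.ones(len(y)), 1.0 / np.asarray(se)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    # rough t-statistic for the intercept
    dof = len(y) - 2
    resid = y - X @ coef
    s2 = resid @ resid / dof
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[0], coef[0] / np.sqrt(cov[0, 0])   # (constant, t-statistic)

# hypothetical effects and standard errors
es = np.array([0.8, 0.5, 0.35, 0.30]); se = np.array([0.30, 0.15, 0.08, 0.05])
print(egger_test(es, se))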
Table 7 shows that the constants of the CH, CCK and HS meta-analyses are positive and statistically significant. So, we conclude that there is publication bias towards results indicating a positive correlation between herd behavior and market returns. As shown by the funnel plot, the LSV meta-analysis is not strongly subject to publication bias. The evidence shown by the test and the funnel should, however, be interpreted with caution because it rests on a simple bivariate analysis and the effects may also be caused by other biases (Egger et al., 1997; Sterne et al., 2001).

Cumulative meta-analysis (CMA) can be used to assess the effect of publication bias or small-study effects. This can be done by sorting the studies from most precise to least precise. When performing CMA we want to see if the point estimate changes (and in what direction) when smaller studies are added (Monroe, 2007). A cumulative meta-analysis is a meta-analysis run with one study, then repeated with a second study added, then a third, and so on. Similarly, in a cumulative forest plot, the first row shows the effect based on one study, the second row shows the cumulative effect based on two studies, and so on. So, we sorted the studies in sequence from largest to smallest (or most precise to least precise), and we perform a cumulative meta-analysis with the addition of each study. If the point estimate has stabilized with the inclusion of the larger studies and does not shift with the addition of smaller studies, then there is no reason to assume that the inclusion of smaller studies has injected a bias (since it is among the smaller studies that study selection is likely to be greatest) (Borenstein, 2005). The results of the CMA are represented in Appendix 3. According to these plots, we conclude that the addition of small studies shifts the point estimate up. This could be due to publication bias and reflects the fact that these studies used a different population or protocol than the larger ones.

Because of the existence of publication bias, we aim in what follows to correct our meta-analyses for this bias. The method used in our study is the failsafe N. Rosenthal suggests that we compute the number of missing studies (with mean effect of zero) that would need to be added to the analysis before the combined effect would no longer be statistically significant (Borenstein, 2005). Rosenthal referred to this as a "file drawer" analysis (this being the presumed location of the missing studies), and Harris Cooper (1979) suggested that the number of missing studies needed to nullify the effect should be called the "failsafe N" (Rosenthal, 1979; Begg, 1994). Rosenthal develops the following formula to enable meta-analysts to calculate the number of zero-effect studies that would be required to nullify the effect:

N_{fs} = \frac{\left(\sum Z_i\right)^2}{2.706} - k   (23)

where N_{fs} is the failsafe number; Z_i = Z_{r_i} \times \sqrt{n_i - 3}, with Z_{r_i} Fisher's Z-transformed correlation coefficient for the relationship between the two variables of interest for study i.
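Equation (23) is easily verified in code; the correlations and sample sizes below are invented for illustration.

import numpy as np

def failsafe_n(z_r, n):
    """Rosenthal's failsafe N, Eq. (23): number of zero-effect studies needed
    to make the combined one-tailed result non-significant at p = 0.05."""
    z_r, n = np.asarray(z_r), np.asarray(n)
    Z = z_r * np.sqrt(n - 3)              # study-level Z values
    k = len(Z)
    return (Z.sum() ** 2) / 2.706 - k

# hypothetical Fisher-Z correlations and sample sizes
print(failsafe_n([0.31, 0.48, 0.10], [84, 120, 60]))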
Table 8. The failsafe number estimation.

        LSV     CH      CCK     HS
N_fs    42687   12370   14495   1105
M       190     175     200     165
The value 2.706 is based on a one-tailed p value of 0.05, and k is the number of studies. Rosenthal's method is based on the test of combined significance (the sum of Zs). Given a sum of Zs for the studies in the meta-analysis that is statistically significant (in other words, if Z is larger than the critical value for significance), Rosenthal's test computes the number of additional studies with Z values averaging zero that would be required to reduce the overall Z to a value lower than the critical value (Rothstein, 2008). Table 8 reports the failsafe number for each meta-analysis. There are no objective criteria by which to judge when N_fs is large enough to ensure confidence in the validity of the results; however, Rosenthal suggests that a failsafe number greater than or equal to five times the number of comparisons plus 10 (M = 5 × k + 10) means that the results can be considered "robust" (Moller and Jennions, 2001). From Table 8 we record that the failsafe numbers in the four meta-analyses are significantly larger than the M values. This means that it is unlikely that publication bias can alter the main conclusion of a meta-analysis regarding the significance of an effect.

4.5. Discussion

From the meta-analyses one can extract several implications, concerning herd behavior and its measurement on the one hand and investors and managers on the other, which may affect the detection of the herding bias.

• Most of the studies examining the empirical evidence on herding and its effects have been done in the context of developed countries. In these countries, the evidence suggests that investment managers do not exhibit significant herd behavior and that the tendency to herd is highly correlated with a manager's tendency to pursue momentum investment strategies. Whether such positive feedback or momentum strategies are efficient depends on how fast new information is incorporated into market prices.
• It is in emerging markets that, as the evidence suggests, one is likely to find a greater tendency to herd. In these markets, where the environment is relatively opaque because of weak reporting requirements, lower accounting standards,
lax enforcement of regulations, and costly information acquisition, information cascades and reputational herding are more likely to arise. Also, because information is likely to be revealed and absorbed more slowly, momentum investment strategies could be potentially more profitable.
• The statistical measures used in empirical studies need to be further refined to distinguish true herd behavior from the reactions of participants to public announcements or commonly available information.
• It should be emphasized that "adjusting for changes in fundamentals" is easier said than done and that it is difficult to adequately capture both the direction and intensity of herding in a particular security or market.
• Anonymity is important for the existence, functioning, and liquidity of markets, and it may not be appropriate to require managers to reveal proprietary information on their investment strategies.
• There is always an information asymmetry between any borrower and lender, and some element of an agency problem when owners of funds delegate investment decisions to professional managers. Therefore, there will always be some possibility of informational cascades and of reputation- and compensation-based herding.
• While most studies make use of daily or monthly data to detect herding, few studies have employed intraday data in examining the relationship between dispersions and equity market returns. However, herding can be seen intuitively as an intraday phenomenon. When news is released to the market, traders unsure of what to expect may turn to each other. At intraday levels, traders will not have time to consult complex models to predict future price movements, and therefore their decisions may not be compatible with rational thinking (Orlean, 1998).
• Past empirical studies of herd behavior consider herding among specific market participants; however, few studies consider herding within the aggregate equity market.
• It seems to us conceptually difficult if not impossible to rigorously define a statistic which could provide an absolute measure of herding. However, we note that most proposed herd measures, such as CSAD, PCM, CH and Chang et al. (2000), have apparently tried to identify herding in absolute terms.
• We hypothesize that studies do not find herding because the interval they consider is too long, ranging from daily to quarterly. Trading in developed markets is vigorous and continuous throughout the trading day. Traders on a trading floor can see what one another buy and sell; experienced electronic screen traders recognize one another's trading (Radalj and McAleer, 1993).
• Specialized industry sectors are another neglected setting in which herding may occur. Traders in these areas know one another and have formed opinions about one another's abilities. In periods of uncertainty, it is natural to
follow the more experienced trader or the one whose strategy worked the last time.
• The LSV measure limits the ability to differentiate between herding and a rational response of investors to publicly available information, thus failing to account for changes in fundamentals.
• Taking only the number of active investors and disregarding the value of the stocks they trade, the LSV measure threatens to omit herding which may in fact be present.
• In applying the CSAD measure, the choice of investment category i and the time interval t over which trading data are observed is very important. For example, IMF managers might not observe, either instantaneously or with short lags, holdings of other managers at the level of individual stocks. The evidence provided by Shiller and Pound (1989) is mixed. If, indeed, holdings of other investment entities can only be observed with a (considerable) lag, then intentional herding cannot arise because what cannot be observed cannot be imitated. Managers may be able to observe actions at a more aggregate level, for stocks in specific industries, sectors, or countries. Therefore, there may be a better chance of detecting herding at this level.
• According to Bikhchandani and Sharma (2000), a serious limitation of the LSV method is that the ability to observe the portfolios of other managers may be restricted due to reporting requirements, so that fund managers would find it difficult to determine the portfolio decisions of others in a timely manner.
• Christie and Huang's test looks for evidence of a particular form of herding, and that too only in the asset-specific component of returns. It does not allow for other forms of herding that may show up in the common component of returns, for example, when prices of all assets in a class (or market or country) change in the same direction. The Christie and Huang test should, therefore, be regarded as a gauge of a particular form of herding, and the absence of evidence against this form of herding should not be construed as showing that other types of herding do not exist.
• Another problem with using the simple CSSD of individual stock returns as in CH is that it is not independent of time-series volatility. CCK note that the CH approach is a more stringent test, which requires "a far greater magnitude of non-linearity" in order to find evidence of herding.
• According to HS's approach, the CSSD of individual stock returns presents some failings. The first gap is that during periods of market stress rational asset pricing would imply positive coefficients on the two dummy variables, while herd behavior would suggest negative coefficients. However, market stress does not necessarily imply that the market as a whole should show either large negative or positive returns. The introduction of dummy variables is itself crude, since the choice of what is meant by "extreme" is entirely subjective. Moreover, since the method does not include any device to control for movements
in fundamentals, it is impossible to conclude whether it is herding or independent adjustment to fundamentals that is taking place, and therefore whether or not the market is moving towards a relatively efficient or an inefficient outcome.
• Literature related to managerial performance indicates that when evaluation occurs relative to the industry average, managers will seek to engage in decision-making similar to others in the industry (Zwiebel, 1995). The incentive to do so may be to mask low ability by mimicking the decisions of higher-ability managers. Compensation contracts for fund managers may also encourage herd behavior (Maug and Naik, 1995). Devenow and Welch (1996) also imply that frenzies of takeover activity, as well as dividend policy and the rush to adopt new technologies in some industries, may be linked to managerial decision herding.

5. Conclusion

Contributing to the herding literature, which is centered on individual investors, this study not only documents substantial herding among individual investors but also establishes strong findings by applying the most important measures of herd behavior. On a technical note, the findings illustrate the importance of distinguishing between the different measures, which can provide contradictory results. This chapter addresses the issue of herd behavior in developed markets. We use a sample of 60 stocks from the Toronto stock market that constitute the TSX60 index. Our data consist of monthly stock returns from January 2000 to December 2006. In order to investigate the presence of herding, we apply four measures: Lakonishok et al. (1992), Christie and Huang (1995), Chang et al. (2000), and Hwang and Salmon (2004). Our findings suggest that the use of CH (1995) and CCK (2000) gives no evidence of herd behavior, which means that the prediction of rational asset pricing has not been violated. On the other hand, the employment of both LSV (1992) and HS (2004) gives strong proof of a herding pattern in the Toronto stock market. From this study, we record contradictory results that enable us neither to affirm nor to reject the herding hypothesis for the TSX. This outcome is common in the herding literature, which lacks acceptable rules of inference for going from the findings of studies to overall generalizations. To overcome these drawbacks, we apply a meta-analysis to 56 studies that detect herd behavior using several measures. This analysis shows, on the one hand, that the LSV measure proves to be the most powerful in detecting true herd behavior, while the CH and CCK measures give similar evidence. Although these are important results, our study is quite restricted, because the empirical evidence is applied to a single developed market on the one hand, and the number of studies included in the meta-analysis is small on the other. So, this work can be expanded in several directions, by applying these measures and others to markets with different microstructures, and by doing the
meta-analysis on subgroups, i.e., dividing studies according to countries and data frequency.

References

Asch, SE (1952). Social Psychology. Englewood Cliffs, New Jersey: Prentice Hall.
Avery, C and P Zemsky (1998). Multidimensional uncertainty and herd behavior in financial markets. American Economic Review, 88, 724–748.
Banerjee, A (1992). A simple model of herd behavior. Quarterly Journal of Economics, 107, 797–818.
Barberis, N and A Shleifer (2003). Style investing. Journal of Financial Economics, 68, 161–199.
Begg, CB (1994). Publication bias. In The Handbook of Research Synthesis, Cooper, H and L Hedges (eds.), 399–409. New York: Russell Sage.
Bikhchandani, S, D Hirshleifer and I Welch (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100, 992–1026.
Borenstein, M (2005). Software for publication bias. In Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments, Rothstein, H, AJ Sutton and M Borenstein (eds.), 193–220. Chichester, UK: Wiley.
Bowe, M and D Domuta (2004). Investor herding during financial crisis: A clinical study of the Jakarta stock exchange. Pacific-Basin Finance Journal, 12, 387–418.
Brennan, M (1993). Agency and asset prices. Finance Working Paper No. 6-93, UCLA.
Chang, EC, JW Cheng and A Khorana (2000). An examination of herd behavior in equity markets: An international perspective. Journal of Banking and Finance, 24, 1651–1679.
Christie, WG and RD Huang (1995). Following the pied piper: Do individual returns herd around the market? Financial Analysts Journal, 51(4), 31–37.
Cochran, WG (1954). The combination of estimates from different experiments. Biometrics, 10, 101–129.
Coval, JD and TJ Moskowitz (1999). Home bias at home: Local equity preference in domestic portfolios. Journal of Finance, 54, 2045–2073.
Demirer, R and AM Kutan (2006). Does herd behavior exist in Chinese stock market? Journal of International Financial Markets, Institutions and Money, 16, 123–142.
Devenow, A and I Welch (1996). Rational herding in financial economics. European Economic Review, 40, 603–615.
Egger, M, G Smith, M Schneider and C Minder (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal, 315, 629–634.
Egger, M, GD Smith and DG Altman (eds.) (2001). Systematic Reviews in Health Care: Meta-analysis in Context, 2nd ed. London: British Medical Journal Publishing Group.
Farber, A, NV Nam and Q Hoang (2006). Policy Impacts on Vietnam Stock Market: A Case of Anomalies and Disequilibria 2000–2006. Working Paper CEB 06-005.RS, Universite Libre de Bruxelles, Solvay Brussels School of Economics and Management, Centre Emile Bernheim (CEB).
Friedman, BM (1984). A comment: Stock prices and social dynamics. Brookings Papers on Economic Activity, 2, 504–508.
Froot, KA, DS Scharfstein and JC Stein (1992). Herd on the street: Informational inefficiencies in a market with short-term speculation. Journal of Finance, 47, 1461–1484.
Glass, GV (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3–8.
Gleason, CA and CMC Lee (2003). Analyst forecast revisions and market price discovery. The Accounting Review, 78, 193–225.
Green, LW and MW Kreuter (1999). Health Promotion Planning: An Educational and Ecological Approach. Mountainview, California: Mayfield.
Grinblatt, M and M Keloharju (2000). The investment behavior and performance of various investor types: A study of Finland's unique data set. Journal of Financial Economics, 55, 43–67.
Grinblatt, M, S Titman and R Wermers (1995). Momentum investment strategies, portfolio performance and herding: A study of mutual fund behaviour. American Economic Review, 85, 1088–1105.
Hachicha, N, A Bouri and H Chakroun (2008). The herding behaviour and the measurement problems: Proposition of dynamic measure. International Review of Business Research Papers, 4(1), 160–177.
Hedges, LV and JL Vevea (1998). Fixed- and random-effects models in meta-analysis. Psychological Methods, 3, 486–504.
Higgins, JPT and SG Thompson (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539–1558.
Hirshleifer, D, A Subrahmanyam and S Titman (1994). Security analysis and trading patterns when some investors receive information before others. Journal of Finance, 49, 1665–1698.
Hong, H, T Lim and JC Stein (2000). Bad news travels slowly: Size, analyst coverage, and the profitability of momentum strategies. Journal of Finance, 55, 265–295.
Hunter, JE and FL Schmidt (2000). Fixed effects vs random effects meta-analysis models: Implications for cumulative research knowledge. International Journal of Selection & Assessment, 8, 275–292.
Hunter, JE and FL Schmidt (2004). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, 2nd ed. Newbury Park, CA: Sage.
Hwang, S and M Salmon (2001). A new measure of herding and empirical evidence. Working Paper, Cass Business School, United Kingdom.
Hwang, S and M Salmon (2004). Market stress and herding. Journal of Empirical Finance, 11(4), 585–616.
Jost, JT (1995). Negative illusions: Conceptual clarification and psychological evidence concerning false consciousness. Political Psychology, 16, 397–424.
Kim, W and S-J Wei (2002a). Offshore investment funds: Monsters in emerging markets? Journal of Development Economics, 68(1), 205–224.
Kim, W and S-J Wei (2002b). Foreign portfolio investors before and during a crisis. Journal of International Economics, 56(1), 77–96.
Kulinskaya, E, S Morgenthaler and GR Staudte (2008). Meta-Analysis: A Guide to Calibrating and Combining Statistical Evidence. Chichester, England: John Wiley and Sons.
Lakonishok, J, A Shleifer and R Vishny (1992). The impact of institutional and individual trading on stock prices. Journal of Financial Economics, 32, 23–43.
Light, R and D Pillemer (1984). Summing Up: The Science of Reviewing Research. Cambridge, Massachusetts: Harvard University Press.
Lobao, J and AP Serra (2006). Herding behaviour: Evidence from Portuguese mutual funds. In Mutual Funds: An International Perspective, Gregoriou, GN (ed.). USA: John Wiley and Sons.
Maug, E and N Naik (1996). Herding and delegated portfolio management. Mimeo, London Business School.
Nofsinger, JR and RW Sias (1999). Herding and feedback trading by institutional investors. Journal of Finance, 54, 2263–2316.
Rajan, RG (1994). Why credit policies fluctuate: A theory and some evidence. Quarterly Journal of Economics, 109, 399–441.
Rosenthal, R (1991). Meta-Analytic Procedures for Social Research, revised ed. Newbury Park: Sage.
Rosenthal, R (1994). Parametric measures of effect size. In The Handbook of Research Synthesis, Cooper, H and LV Hedges (eds.), 231–244. New York: Russell Sage Foundation.
Rothstein, HR (2008). Publication bias as a threat to the validity of meta-analytic results. Journal of Experimental Criminology, 4, 61–81.
Rothstein, HR, AJ Sutton and M Borenstein (eds.) (2005). Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester: Wiley.
Scharfstein, DS and JC Stein (1990). Herd behavior and investment. American Economic Review, 80(3), 465–479.
Shiller, RJ and J Pound (1989). Survey evidence on diffusion of interest and information among investors. Journal of Economic Behavior & Organization, 12(1), 47–66.
Sterne et al. (2005). Funnel plots for detecting bias in meta-analysis: Guidelines on choice of axis. Journal of Clinical Epidemiology, 3, 26–34.
Tan, L, TC Chiang, J Mason and E Nelling (2008). Herd behavior in Chinese stock markets: An examination of A and B shares. Pacific-Basin Finance Journal, 16, 61–77.
Trueman, B (1994). Analyst forecasts and herding behaviour. Review of Financial Studies, 7, 97–124.
Tvede, L (1999). The Psychology of Finance, 2nd ed. New York: Wiley.
Welch, I (1992). Sequential sales, learning and cascades. Journal of Finance, 47, 695–732.
Wermers, R (1999). Mutual fund herding and the impact on stock prices. Journal of Finance, 54, 581–622.
Whitehead, A, RZ Omar, JPT Higgins, E Savaluny, RM Turner and SG Thompson (2001). Meta-analysis of ordinal outcomes using individual patient data. Statistics in Medicine, 20(5), 2243–2260.
Appendix 1. Studies included in the meta-analysis, grouped by herding measure.

LSV studies: Lakonishok et al. (1992); Wermers (1999); Kim and Wei (1999); Ohler and Chao (2000); Borensztein and Gelos (2001); Lobao and Serra (2002); Feng and Seasholes (2002); Guo and Shih (2003); Fong et al. (2004); Bowe and Domuta (2004); Ghysels and Seon (2005); Wylie (2005); Weu et al. (2005); Walter and Weber (2006); Sharma et al. (2006); Alemanni and Ornelas (2006); Fong et al. (2007); Frey et al. (2007); Lai et al. (2007); Uchida and Nakagawa (2007); Haicheng and Rui (2007); Puckett and Yan (2007).

CH studies: Christie and Huang (1995); Demirer and Lieu (2001); Hwang and Salmon (2001); Henker et al. (2003); Guo and Shih (2003); Chen et al. (2004); Gleason et al. (2004); Lai and Lau (2004); Demirer and Kutan (2006); Farber et al. (2006); Demirer et al. (2007); Ha (2007); Chiang and Zheng (2008); Hachicha et al. (2008); Natividad et al. (2008).

CCK studies: Chang et al. (1999); Kuhn and Hofstetter (2001); Goodfellow et al. (2001); Henker et al. (2003); Gleason et al. (2004); Demirer et al. (2007); Tan et al. (2007); Cajueiro and Tabak (2007); Chiang and Zheng (2008); Hachicha et al. (2008); Soastmoimen (2008).

HS studies: Hwang and Salmon (2004); Kallinterakis and Kratunova (2006); Wang and Canela (2006); Gavriilidis et al. (2007); Zhou (2007); Kallinterakis (2007); Andronikidi and Kallinterakis (2007).
Appendix 2
Appendix 3
Biographical Notes

Nizar Hachicha has a PhD in finance and accounting and is an Assistant Professor at the University of Economics and Management of Sfax. He is a reviewer for the Quarterly Journal of Finance and Accounting and occasionally a reviewer for the Journal of Banking and Finance.

Amina Amirat has a PhD in finance and accounting from the University of Economics and Management of Sfax (Tunisia). She is a member of the COFFIT research unit and a reviewer for the International Academy of Business and Economics.

Abdelfettah Bouri is Professor of finance and accounting, Dean of the Faculty of Economics and Management of Sfax, and President of the Corporate Finance and Financial Theory (COFFIT) research unit. He is the author of several papers and books.
Chapter 35
Object-Oriented Metacomputing with Exertions

MICHAEL SOBOLEWSKI
Department of Computer Science, Texas Tech University,
P.O. Box 43104, Lubbock, TX 79409, USA
[email protected]
This chapter investigates service-oriented computing in the context of object-oriented distributed platforms. A platform consists of virtual compute resources, a programming environment allowing for the development of distributed applications, and an operating system to run user programs and to make solving complex user problems easier. Service protocol-oriented architectures (SPOA) are contrasted with service object-oriented architectures (SOOA), and then a metacompute grid based on an SOOA is described and analyzed. A new object-oriented network programming methodology is presented in this chapter. It uses intuitive metacomputing semantics and the new Triple Command design pattern. The pattern defines how service objects communicate by sending one another a form of service messages called exertions that encapsulate the triplet: data, operations, and control strategy.

Keywords: Federated distributed systems; service-oriented architectures; grid computing; metacomputing; metaprogramming; exertion-oriented programming.
1. Introduction

The term "grid computing" originated in the early 1990s as a metaphor for accessing computer power as easily as power from an electric grid. Today, there are many definitions of grid computing, with varying focus on architectures, resource management and access, virtualization, provisioning, and sharing between heterogeneous compute domains. Diverse compute resources across different administrative domains thus form a grid for the shared and coordinated use of resources in dynamic, distributed, and virtual computing organizations (Foster et al., 2001). Therefore, the grid requires a platform: a framework that allows software to run utilizing virtual organizations. These organizations are dynamic subsets of departmental grids, enterprise grids, and global grids, which allow programs to use shared resources as collaborative federations. Different grid platforms can be distinguished along with corresponding types of virtual federations. However, to make any grid-based computing possible, computational modules have to be defined in terms of platform data, operations, and relevant control strategies. For a grid program, the control strategy is a plan for achieving the desired results by applying the platform operations to
the data in the required sequence and by leveraging the dynamically federating resources. We can distinguish three generic grid platforms, which are described below.

Programmers use abstractions all the time. The source code written in a programming language is an abstraction of the machine language. From machine language to object-oriented programming, layers of abstraction have accumulated like geological strata. Every generation of programmers uses its era's programming languages and tools to build the programs of the next generation. Each programming language reflects a relevant abstraction, and usually the type and quality of the abstraction implies the complexity of the problems we are able to solve.

Procedural languages provide an abstraction of an underlying machine language. An executable file represents a computing component whose content is meant to be interpreted as a program by the underlying native processor. A request can be submitted to a grid resource broker to execute a machine code in a particular way, e.g., by parallelizing and collocating it dynamically to the right processors in the grid. That can be done, for example, with the Nimrod-G grid resource broker scheduler (Nimrod, 2008) or the Condor-G high-throughput scheduler (Thain et al., 2003). Both rely on the Globus/GRAM (Grid Resource Allocation and Management) protocol (Foster et al., 2001). In this type of grid, called a compute grid, executable files are moved around the grid to form virtual federations of required processors. This approach is reminiscent of batch processing in the era when operating systems were not yet developed: a series of programs ("jobs") is executed on a computer without human interaction or the possibility to view any results before the execution is complete.

A grid programming language is the abstraction of hierarchically organized networked processors running a grid computing program — a metaprogram — that makes decisions about component programs, such as when and how to run them. Nowadays, the same computing abstraction is usually applied to the program executing on a single computer as to the metaprogram executing in the grid of computers, even though the executing environments are structurally completely different. Most grid programs are still written in compiled languages such as FORTRAN, C, C++, and Java, or in interpreted languages such as Perl and Python, the way they usually work on a single host. The current trend is still to have these programs and scripts define grid computational modules. Thus, most grid computing modules are developed using the same abstractions and, in principle, run the same way on the grid as on a single processor. There is presently no grid programming methodology to deploy a metaprogram that will dynamically federate all needed resources in the grid according to a control strategy using a kind of grid algorithmic logic.

Applying the same programming abstractions to the grid as to a single computer does not foster transitioning from the current phase of early grid adopters to the public recognition and then mass adoption phases. The reality at present is that grid resources are still very difficult for most users to access, and that detailed programming must be carried out by the user
through command-line and script execution to carefully tailor jobs on each end to the resources on which they will run or to the data structures that they will access. This produces frustration on the part of the user, delays in the adoption of grid techniques, and a multiplicity of specialized "grid-aware" tools that are not, in fact, aware of each other and that defeat the basic purpose of the compute grid.

Instead of moving executable files around the grid, we can autonomically provision the corresponding computational components as uniform services on the grid. All grid services can be interpreted as instructions (metainstructions) of the metacompute grid. Now, we can submit a metaprogram in terms of metainstructions to the grid platform, which manages a dynamic federation of service providers and related resources and enables the metaprogram to interact with the service providers according to the metaprogram's control strategy.

We can distinguish three types of grids depending on the nature of their computational components: compute grids (cGrids), metacompute grids (mcGrids), and the hybrid of the previous two — intergrids (iGrids). Note that a cGrid is a virtual federation of processors (roughly, CPUs) that execute submitted executable codes with the help of a grid resource broker. An mcGrid, however, is a federation of service providers managed by the mcGrid operating system. Thus, the latter approach requires a metaprogramming methodology, while in the former case conventional procedural programming languages are used. The hybrid of both cGrid and mcGrid abstractions allows an iGrid to execute both programs and metaprograms, as depicted in Fig. 1, where platform layers P1, P2, and
Figure 1. Three types of grids: compute grid, metacompute grid, and intergrid. A cybernode provides a lightweight dynamic virtual processor, turning heterogeneous compute resources into homogeneous services available to the metacomputing OS (Project Rio, 2008).
P3 correspond to resources, resource management, and programming environment, respectively.

One of the first mcGrids was developed under the sponsorship of the National Institute of Standards and Technology (NIST) — the Federated Intelligent Product Environment (FIPER) (FIPER, 2008; Röhl et al., 2000; Sobolewski, 2002). The goal of FIPER is to form a federation of distributed services that provide engineering data, applications, and tools on a network. A highly flexible software architecture was developed (1999–2003), in which engineering tools like computer-aided design (CAD), computer-aided engineering (CAE), product data management (PDM), optimization, and cost modeling act as federating service providers and service requestors.

The Service-ORiented Computing EnviRonment (SORCER) (Sobolewski, 2007, 2008a; SORCER, 2008) builds on top of FIPER to introduce a metacomputing operating system with all the basic services necessary, including a federated file system, to support service-oriented metaprogramming. It provides an integrated solution for complex metacomputing applications. The SORCER metacomputing environment adds an entirely new layer of abstraction to the practice of grid computing — EO programming (Sobolewski, 2008b). EO programming makes a positive difference in service-oriented programming primarily through a new metaprogramming abstraction, as experienced in many service-oriented computing projects, including systems deployed at GE Global Research Center, GE Aviation, Air Force Research Lab, and SORCER Lab (Burton et al., 2002; Kolonay et al., 2002, 2007; Sampath et al., 2002; Kao et al., 2003; Lapinski and Sobolewski, 2003; Khurana et al., 2005; Sobolewski and Kolonay, 2006; Berger and Sobolewski, 2007; Turner and Sobolewski, 2007; Goel et al., 2005, 2007, 2008).

This chapter is organized as follows. Section 2 provides a brief description of two service-oriented architectures used in grid computing, with a related discussion of distribution transparency; Sec. 3 describes the SORCER metacomputing philosophy and its mcGrid; Sec. 4 describes EO programming, and Sec. 5 the federated method invocation (FMI); Sec. 6 provides concluding remarks.

2. SPOA versus SOOA

Various definitions of a Service-Oriented Architecture (SOA) leave a lot of room for interpretation. Nowadays, SOA is the leading architectural approach to most grid developments. In general terms, SOA is a software architecture consisting of loosely coupled software services integrated into a distributed computing system by means of service-oriented programming. Service providers in the SOA environment are made available as independent service components that can be accessed without prior knowledge of their underlying platform or implementation. While the client–server architecture separates a client from a server, SOA introduces a third component, a service registry. In SOA, the client is called a service requestor and the server a service provider. The provider is responsible for deploying a service on
the network, publishing its service to one or more registries, and allowing requestors to bind and execute the requested service. Providers advertise their availability on the network; registries intercept these announcements and add published services. The requestor looks up a service by sending queries to registries and making selections from the available services. Queries generally contain search criteria related to the service name/type and quality of service. Registries facilitate searching by storing the service representation (description or proxies) and making it available to requestors. Providers and requestors can use discovery and join protocols to locate registries dynamically and then publish or acquire services on the network, respectively. Service-oriented programming is thus focused on the development and execution of distributed programs in terms of services that are available via network registries.

We can distinguish the service object-oriented architecture (SOOA), where providers, requestors, and proxies are network objects, from the service protocol-oriented architecture (SPOA), where a communication protocol is fixed and known beforehand to both the provider and the requestor. Using SPOA, a requestor can use this fixed protocol and a service description obtained from a service registry to create a proxy for binding to the service provider and for remote communication over the fixed protocol. In SPOA, a service is usually identified by a name. If a service provider registers its service description by name, the requestors have to know the name of the service beforehand.

In SOOA (see Fig. 2), a proxy — an object implementing the same service interfaces as its service provider — is registered with the registries and is always
Figure 2. Service object-oriented architecture.
ready for use by requestors. Thus, the service provider publishes the proxy as an active surrogate object with a codebase annotation, for example URLs in Jini ERI (Package net.jini.jeri, 2008), pointing to the code that defines the proxy's behavior. In SPOA, by contrast, a passive service description is registered, for instance an XML document in WSDL for Web/OGSA services (McGovern et al., 2003; Sotomayor et al., 2005) or an interface description in IDL for CORBA (Ruh et al., 1999). The requestor then has to generate the proxy (a stub forwarding calls to a provider) based on the service description and the fixed communication protocol, for example SOAP in Web/OGSA services or IIOP in CORBA. This is called a bind operation. The binding operation is not needed in SOOA, as the requestor holds the active surrogate object already created by the provider and obtained from the registry. Web services and OGSA services cannot change the communication protocol between requestors and providers, while the SOOA approach is protocol neutral (Waldo, 2008).

In SOOA, the way an object proxy communicates with a provider is established by the contract between the provider and its published proxy and defined accordingly by the provider implementation. The proxy's requestor does not need to know who implements the interface, how it is implemented, or where the provider is located — three neutralities of SOOA. So-called smart proxies, provided for example by Jini ERI, can grant access to both local and remote resources. They can also communicate with multiple providers on the network regardless of who originally registered the proxy; thus, separate providers on the network can implement different parts of the smart proxy's interface(s). Communication protocols may also vary, and a single smart proxy can talk over multiple protocols, including application-specific ones.

SPOA and SOOA differ in their method of discovering the service registry. For example, SORCER uses dynamic discovery protocols to locate available registries (lookup services), as defined in the Jini architecture (2001). Neither the requestor looking up a proxy by its interfaces nor the provider registering a proxy needs to know specific registry locations. In SPOA, however, the requestor and provider usually do need to know the explicit location of the service registry — e.g., a URL for an RMI registry (Pitt and McNiff, 2001), a URL for a UDDI registry (McGovern et al., 2003), or an IP address and port of a COS Name Server (Ruh et al., 1999) — to open a static connection and find or register a service. In deployments of Web and OGSA services, a UDDI registry can be omitted just by using WSDL files available directly from service developers. In SOOA, lookup services are mandatory due to the dynamic nature of object proxies, which are registered by bootstrapping providers and identified by service types. Interactions with registries in SPOA are more like static client–server connections, while in SOOA they are dynamic (Jini discovery/join protocols), as proxy registrations are leased to the registering providers.

Crucial to the success of SOOA is interface standardization. Services are identified by interface types (e.g., Java interfaces) and additional provider-specific properties if needed; the exact identity of the service provider is not crucial to the architecture.
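To make the registry interactions above concrete, the following is a minimal sketch of SOOA-style dynamic lookup using the Jini API. The Sorter interface is borrowed from the examples in Sec. 4 and defined here only as a marker for lookup by type; a real requestor would also configure security and a discovery manager, details elided here.

import java.rmi.RemoteException;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupDiscovery;

public class SorterRequestor {
    // Marker interface used only for lookup by type in this sketch;
    // a real provider would expose its domain operations here.
    public interface Sorter extends java.rmi.Remote { }

    public static void main(String[] args) throws Exception {
        // Multicast discovery locates lookup services dynamically; no
        // registry location is hard-coded, in contrast to a typical SPOA.
        LookupDiscovery discovery =
            new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
        discovery.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent event) {
                for (ServiceRegistrar registrar : event.getRegistrars()) {
                    try {
                        // Look up a provider by interface type only; the
                        // result is the proxy published by the provider.
                        ServiceTemplate template = new ServiceTemplate(
                            null, new Class[] { Sorter.class }, null);
                        Sorter sorter = (Sorter) registrar.lookup(template);
                        if (sorter != null) {
                            // ... invoke operations on the proxy ...
                        }
                    } catch (RemoteException e) {
                        // This registry is unreachable; try the others.
                    }
                }
            }
            public void discarded(DiscoveryEvent event) { }
        });
    }
}

Note how the requestor never names a provider or a registry host; it only names the interface it needs, which is precisely the type-based identification discussed above.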
As long as services adhere to a given set of rules (common interfaces), they can collaborate to execute published operations, provided the requestor is authorized to do so. Let us emphasize the major distinction between SOOA and SPOA: in SOOA, a proxy is created and always owned by the service provider, but in SPOA, the requestor creates and owns a proxy which has to meet the requirements of the protocol that the provider and requestor agreed upon a priori. Thus, in SPOA the protocol is always a generic one (e.g., HTTP, SOAP, IIOP), reduced to a common denominator — one size fits all — which leads to inefficient network communication in many cases. In SOOA, each provider can decide on the most efficient protocol(s) needed for a particular service.

Service providers in SOOA can be considered independent network objects that find each other via service registries. These objects are identified by service types and communicate through message passing. A collection of these objects sending and receiving messages — the only way these objects communicate with one another — looks very much like a service object-oriented distributed system. However, do you remember the eight fallacies of network computing (Fallacies of Distributed Computing, 2008)? We cannot just take an object-oriented program developed without distribution in mind and make it a distributed system, ignoring unpredictable network behavior. Most Remote Procedure Call (RPC) systems (Birrell and Nelson, 1983), with the notable exceptions of Jini (Edwards, 2000) and SORCER, hide the network behavior and try to transform local communication into remote communication by creating distribution transparency based on a local assumption of what the network might be. However, no single distributed object can do that in a uniform way, as the network is dynamic, heterogeneous, and unreliable. Thus, a distributed system cannot be represented completely as a collection of independent objects, each of them incorporating a transparent and local view of the network. The network is dynamic, does not preserve a constant topology, and introduces latency for remote invocations. Network latency also depends on potential failure handling and recovery mechanisms, so we cannot assume that a local invocation is similar to a remote invocation. Thus, complete distribution transparency — making calls on distributed objects as though they were local — is impossible to achieve in practice. Network distribution is simply not an object-oriented implementation of isolated distributed objects; it is a metasystemic issue in object-oriented distributed programming. In that context, Web/OGSA services define independent distributed "objects," but they have nothing in common with the dynamic object-oriented distributed systems that, for example, the Jini architecture emphasizes.

Object-oriented programming can be seen as an attempt to abstract both data and the related operations in an entity called an object. Thus, an object-oriented program may be seen as a collection of cooperating objects communicating via message passing, as opposed to the traditional view in which a program is a list of instructions to the computer. Instead of objects and messages, in EO programming
(Sobolewski, 2008) service providers and exertions constitute a program. An exertion is a kind of meta-request sent onto the network. The exertion can be considered the specification of a distributed collaboration that encapsulates data, related operations, and a control strategy. The operation signatures implicitly specify the required service providers on the network. The invoked (activated) exertion creates a federation of service providers at runtime to execute a collaboration according to the exertion's control strategy. Thus, the exertion is both the metaprogram and its metashell: it submits the request onto the network to run the collaboration, in which the federated providers pass only the component exertions to one another. This type of metashell, which coordinates at runtime the execution of an exertion by federated providers, was created for the SORCER metacompute operating system (see Fig. 3) — the exemplification of SOOA with autonomic management of system and domain-specific service providers to run EO programs.

The SORCER environment, described in the next section, defines the object-oriented distribution for EO programming. It uses indirect federated remote method invocation (Sobolewski, 2007), with no explicit location of service providers specified in exertions. A specialized infrastructure of distributed services provides support for the management of exertions and services, the exertion shell, the federated file system, service discovery/join, and the system services for coordinating executing runtime federations. That infrastructure defines SORCER's object-oriented distributed modularity, extensibility, and reuse of providers and exertions — key features of object-oriented distributed programming that are usually missing in SPOA programming environments.
Figure 3. SORCER layered platform: P1 resources, P2 resource management, and P3 programming environment.
3. Metacompute Grid

SORCER is a federated service-to-service (S2S) metacomputing environment that treats service providers as network peers with the well-defined semantics of a federated service object-oriented architecture (FSOOA). It is based on the Jini (2001) semantics of services in the network and the Jini programming model (Edwards, 2000), with explicit leases, distributed events, transactions, and discovery/join protocols. While Jini focuses on service management in a networked environment, SORCER hides all the low-level programming details of the Jini programming model and focuses on EO programming and the execution environment for exertions, based on the Triple Command pattern (Sobolewski, 2007) presented in Sec. 5.

As described in Sec. 2, SOOA consists of four major types of network objects: providers, requestors, registries, and proxies. The provider is responsible for deploying the service on the network, publishing its proxy to one or more registries, and allowing requestors to access its proxy. The requestor looks up proxies by sending queries to registries and making selections from the available services. Queries generally contain search criteria related to the type and quality of service. Registries facilitate searching by storing proxy objects with related attributes and making them available to requestors. Providers use discovery/join protocols to publish services on the network; requestors use discovery/join protocols to obtain service proxies on the network. The SORCER metacompute OS uses Jini discovery/join protocols to implement its FSOOA.

In FSOOA, a service provider is a remote object that receives exertions from service requestors to execute collaborations. An exertion encapsulates collaboration data, operations, and a control strategy. A task exertion is an elementary service request, a kind of elementary remote instruction (an elementary statement) executed by a single service provider or a small-scale federation. A composite exertion called a job exertion is defined hierarchically in terms of tasks and other jobs, including control exertions that manage the flow of control in a collaboration. A job exertion is a kind of network procedure executed by a large-scale federation.

Thus, the executing exertion is a service-oriented program that is dynamically bound to all required and currently available service providers on the network. This collection of providers identified at runtime is called an exertion federation. While this sounds similar to the object-oriented paradigm, it really is not. In the object-oriented paradigm, the object space is the program itself; here, the exertion federation is not the program, it is the execution environment for the exertion, and the exertion is the object-oriented program — the specification of a service collaboration. This changes the programming paradigm completely. In the former case the object space is hosted by a single computer, but in the latter case the top-level exertion and its component exertions, along with the related service providers, are hosted by a network of computers. The overlay network of all service providers is called the service grid, and an exertion federation is called a virtual metacomputer. The metainstruction set
of the metacomputer consists of all operations offered by all providers in the service grid. Thus, a service-oriented program is composed of metainstructions with its own service-oriented control strategy and a service context representing the metaprogram data. Service signatures specify metainstructions in SORCER. Each signature is primarily defined by a service type (interface name), an operation in that interface, and a set of optional attributes. Four types of signatures are distinguished: PROCESS, PREPROCESS, POSTPROCESS, and APPEND. A PROCESS signature — of which only one is allowed per exertion — defines the dynamic late binding to a provider that implements the signature's interface.

The service context (Zhao and Sobolewski, 2001; Sobolewski, 2008b) describes the data that tasks and jobs work on. An APPEND signature defines the context received from the provider specified by this signature. The received context is then appended at runtime to the service context later processed by the PREPROCESS, PROCESS, and POSTPROCESS operations of the exertion. Appending a service context allows a requestor to share actual network data with other requestors at runtime. A job exertion allows a dynamic federation to transparently coordinate the execution of all component exertions within the service grid. Please note that these metacomputing concepts are defined differently in traditional grid computing, where a job is just an executing process for a submitted executable code, managed by the local operating system on a processor selected by the grid scheduler, with no federation being formed for that code.

An exertion can be activated by calling the exertion's exert operation: Exertion.exert(Transaction):Exertion, where a parameter of the Transaction type is required when transactional semantics is needed for all nested exertions participating within the parent exertion's collaboration. Thus, EO programming allows us to submit an exertion onto the network implicitly (no receiving provider is identified a priori) and to have the exertion metashell execute the exertion's signatures on various service providers at runtime.

Top-level S2S communication between collaborating services is managed by rendezvous services through the use of the generic Servicer interface and the operation service that all SORCER services are required to provide: Servicer.service(Exertion, Transaction):Exertion. This top-level service operation takes an exertion as an argument and gives back an exertion as the return value. As every Servicer can accept any exertion, Servicers have well-defined roles in the S2S platform (see Fig. 3; a minimal sketch of these two top-level interfaces is given after the role list below):

1. Taskers — process service tasks;
2. Jobbers — rendezvous providers that process service jobs;
3. Spacers — rendezvous providers that process tasks and jobs via a shared exertion space for space-based computing (Freeman et al., 1999);
4. Contexters — provide service contexts for APPEND signatures;
5. FileStorers — provide access to federated file system providers (Sobolewski et al., 2003; Berger and Sobolewski, 2005, 2007a; Turner and Sobolewski, 2007);
6. Catalogers — SORCER service registries;
7. Persisters — persist service contexts, tasks, and jobs to be reused for interactive EO programming;
8. Relayers — gateway providers that transform exertions to native representations, for example for integration with Web services and JXTA (2008);
9. Authenticators, Authorizers, Policers, KeyStorers — provide support for service security;
10. Auditors, Reporters, Loggers — provide support for accountability, reporting, and logging;
11. Griders, Callers, Methoders — support the traditional compute grid;
12. ServiceTasker, ServiceJobber, and ServiceSpacer — the three basic implementations of providers used to configure domain-specific providers via dependency injection: configuration files for smart proxying and for embedding business objects, called service beans, into service providers. Domain-specific providers can also subclass any of these three providers and implement the required domain-specific interfaces, with operations returning a service context and taking a service context as their single parameter. These domain-specific interfaces and operations are usually used in service task signatures; and
13. ServiceProviderBeans — enable autonomic provisioning of service providers with the Rio framework (Project Rio, 2008).

Service providers do not have mutual associations prior to the execution of an exertion; they come together dynamically (federate) for all nested tasks and jobs in the top-level exertion. Domain-specific providers within the federation, or task peers, execute service tasks. Job collaborations are coordinated by rendezvous peers: a Jobber or a Spacer, two of the SORCER platform's system services. However, a job can be sent to any peer. A peer that is not a rendezvous peer is responsible for forwarding the job to an available rendezvous peer and returning the results to the requestor. Thus, implicitly, any peer can handle any exertion type. Once the exertion execution is complete, the federation dissolves and the providers disperse to seek other exertions to join.

Exertions can be created interactively (Sobolewski and Kolonay, 2006) or programmatically (using the SORCER API), and their execution can be monitored and debugged (Soorianarayanan and Sobolewski, 2004) in the overlay service network via service user interfaces (The Service UI Project, 2008) attached to providers and installed on-the-fly by generic service browsers (Inca X, 2008).
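As promised above, here is a minimal reading of the two top-level contracts named in this section, written as plain Java interfaces. The actual SORCER types carry additional operations and attributes, so treat this as an illustration of the uniform Servicer entry point rather than the real API; each interface would live in its own source file.

import java.rmi.Remote;
import java.rmi.RemoteException;
import net.jini.core.transaction.Transaction;

// Every SORCER provider exposes this single top-level operation:
// an exertion goes in, and an exertion (with updated service
// contexts and completion states) comes back.
public interface Servicer extends Remote {
    Exertion service(Exertion exertion, Transaction txn)
        throws RemoteException;
}

// An exertion activates itself: exert finds a matching Servicer
// on the network and hands the exertion over for execution.
public interface Exertion {
    Exertion exert(Transaction txn) throws RemoteException;
}

Because every role in the list above implements the same service operation, any peer can accept any exertion and, if it cannot process it, forward it to a peer that can.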
3.1. Federated file system

The SILENUS federated file system (Berger and Sobolewski, 2005, 2007a) was designed and developed to provide data access for metaprograms. It expands the file store developed for FIPER (Sobolewski et al., 2003) with true P2P services. The SILENUS system itself is a collection of service providers that use the SORCER framework for communication. In classical client–server file systems, a heavy load may occur on a single file server: if multiple service requestors try to access large files at the same time, the server will be overloaded. In a P2P architecture, every provider is a client and a server at the same time, so the load can be balanced across all peers if files are spread across them. The SORCER architecture splits up the functionality of the metacomputer into smaller service peers (Servicers), and this approach was applied to the distributed file system as well.

The SILENUS federated file system comprises several network services that run within the SORCER environment. These services include a byte store service for holding file data, a metadata service for holding metadata about the files, several optional optimizer services, and façade (Grand, 1999) services to assist in accessing federating services. SILENUS is designed so that many instances of these services can run on a network, and the required services federate together to perform the necessary functions of a file system. In fact, the SILENUS system is completely decentralized, eliminating all potential single points of failure.

SILENUS services can be broadly categorized into gateway components, data services, and management services. The SILENUS façade service provides a gateway service to the SILENUS grid for requestors that want to use the file system. As the metadata and the actual file contents are stored by different services, there is a need to coordinate communication between these two services. The façade service itself is a combination of a control component, called the coordinator, and a smart proxy component that contains the needed inner proxies provided dynamically by the coordinator. These inner proxies facilitate direct P2P communications for file upload and download between the requestor and the SILENUS federating services, such as the metadata and byte stores.

Core SILENUS services have been deployed as SORCER services along with WebDAV and NFS adapters. The SILENUS file system scales well, with a virtual disk space adjusted as needed by the corresponding number of required byte store providers and the appropriate number of metadata stores required to satisfy the needs of current users and service requestors. The system handles several types of network and computer outages by utilizing disconnected operation and data synchronization mechanisms (Berger and Sobolewski, 2007b). It provides a number of user agents, including a zero-install file browser attached to the SILENUS façade. A simpler version of the SILENUS file browser is also available for smart MIDP phones.
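The coordination role of the façade's smart proxy described above can be pictured with the following sketch. All type names here are hypothetical illustrations, not the actual SILENUS API.

import java.io.Serializable;

// Inner-proxy interfaces supplied by the facade's coordinator
// (hypothetical; the real SILENUS services are SORCER providers).
interface MetadataStore extends Serializable {
    String register(String path, long size);
}
interface ByteStore extends Serializable {
    void write(String fileId, byte[] data);
    byte[] read(String fileId);
}

// The smart proxy runs in the requestor's address space and talks
// to the metadata and byte-store services directly, so file contents
// move peer-to-peer instead of through a central server.
class SilenusFacadeProxy implements Serializable {
    private final MetadataStore metadata;
    private final ByteStore byteStore;

    SilenusFacadeProxy(MetadataStore metadata, ByteStore byteStore) {
        this.metadata = metadata;
        this.byteStore = byteStore;
    }

    void upload(String path, byte[] contents) {
        // Record the file's metadata first, then stream its bytes
        // straight to a byte store via the inner proxy.
        String fileId = metadata.register(path, contents.length);
        byteStore.write(fileId, contents);
    }
}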
SILENUS supports storing very large files (Turner and Sobolewski, 2007) by providing two services: a splitter service and a tracker service. When a file is uploaded to the file system, the splitter service determines how that file should be stored. If a file is sufficiently large, it will be split into multiple replicated parts, or chunks, and stored across many byte store services. Once the upload is complete, a tracker service keeps a record of where each chunk was stored. When a user later requests to download the full file, the tracker service can be queried to determine the location of each chunk, and the file can be reassembled into its original form.
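The splitter/tracker bookkeeping just described might look like the following sketch; the type names are assumptions for illustration, not the actual SILENUS classes.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Tracks where each replicated chunk of a large file was stored, so
// that a later download can reassemble the file in its original form.
class ChunkTracker {
    // fileId -> chunk locations (byte-store URLs) in upload order
    private final Map<String, List<String>> chunkMap =
        new HashMap<String, List<String>>();

    // Called by the splitter as it stores each chunk during upload.
    void recordChunk(String fileId, String byteStoreUrl) {
        List<String> locations = chunkMap.get(fileId);
        if (locations == null) {
            locations = new ArrayList<String>();
            chunkMap.put(fileId, locations);
        }
        locations.add(byteStoreUrl);
    }

    // Queried on download to locate every chunk of the file, in order.
    List<String> locate(String fileId) {
        List<String> locations = chunkMap.get(fileId);
        return locations == null ? new ArrayList<String>() : locations;
    }
}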
4. EO Programming

Each programming language provides a specific computing abstraction. Procedural languages are abstractions of assembly languages. Object-oriented languages abstract entities in the problem domain that refer to "objects" communicating via message passing as their representation in the corresponding solution domain, e.g., Java and SORCER objects. EO programming is a form of distributed programming that allows us to describe a distributed problem explicitly in terms of the intrinsically unpredictable network domain, instead of in terms of conventional distributed objects that hide the notion of the network domain.

What intrinsic distributed abstractions are defined in SORCER? Service providers are "objects," but they are specific objects — network objects with leased network resources, a network state, network behavior, and network types. There is still a connection to conventional distributed objects: each service provider looks like a remote object (a data or compute node). However, service providers act as network peers with leased network resources; they implement the same top-level interface; they are replicated and dynamically provisioned for reliability, to compensate for network failures (Project Rio, 2008); they can be found dynamically at runtime by the types they implement; and they can federate to execute a specific network request called an exertion and collaboratively process its nested (component) exertions. An exertion encapsulates, in a modular way, service data, operations, and the requestor's control strategy. The component exertions may need to share the context data of ancestor exertions, and the top-level exertion is complete only if all nested exertions are successful. With that very concise introduction to the abstraction of EO programming, let us look at a simple analogy to Unix shell script execution and then, in detail, at how FSOOA is defined.

Let us first look at the EO approach to see how it works. EO programs consist of exertion objects called tasks and jobs. An exertion task corresponds to an individual network request to be executed on a service provider. An exertion job consists of a structured collection of tasks and other jobs. The data upon which to execute a task or job is called a service context. Tasks are analogous to executing a single program or command on a computer, and the service context would be the input and output streams that the program or command uses. A job is analogous to a batch script that can contain various commands and calls to other scripts.

Pipelining Unix commands allows us to perform complex tasks without writing complex programs. As an example, consider a script sort.sh connecting simple processes in a pipeline as follows:

cat hello.txt | sort | uniq > bye.txt

The script is similar to an exertion job in that it consists of individual tasks that are organized in a particular fashion. Also, other scripts can call the script sort.sh. An exertion job can consist of tasks and other jobs, much like a script can contain calls to commands and other scripts. Each of the individual commands, such as cat, sort, and uniq, would be analogous to a task. Each task works with a particular service context. The input context for the cat "task" would be the file hello.txt, and the "task" would return an output context consisting of the contents of hello.txt. This output context can then be used as the input context for another task, namely the sort command. Again, the output context for sort could be used as the input context for the uniq task, which would in turn give an output service context in the form of bye.txt.

To further clarify what an exertion is: an exertion consists mainly of three parts: a set of service signatures, which is a description of the operations in a collaboration; the associated service context upon which to execute the exertion; and a control strategy (a default is provided) that defines how the signatures are applied in the collaboration. A service signature specifies at least the provider's interface that the service requestor would like to use and a selected operation to run within that interface. There are four types of signatures that can be used for an exertion: PREPROCESS, PROCESS, POSTPROCESS, and APPEND. An exertion must have one and only one PROCESS signature, which specifies what the exertion should do and who works on it. An exertion can optionally have multiple PREPROCESS, POSTPROCESS, and APPEND signatures that are primarily used for formatting the data within the associated service context. A service context consists of several data nodes used for either input, output, or both. A task may work with only a single service context, while a job may work with multiple service contexts, as it can contain multiple tasks. The requestor can define a control strategy as needed for the underlying exertion by choosing relevant control exertion types and configuring the attributes of service signatures accordingly (see Secs. 4.2, 4.4, and 4.5 for details). Here is the basic structure of the EO program that is analogous to the sort.sh script:

 1. // Create service signatures
 2. Signature catSignature, sortSignature,
 3.           uniqSignature;
 4. catSignature = new ServiceSignature("Reader",
 5.     "cat");
 6. sortSignature = new ServiceSignature("Sorter",
 7.     "sort");
 8. uniqSignature = new ServiceSignature("Filter",
 9.     "uniq");
10.
11. // Create component exertions
12. Task catTask, sortTask, uniqTask;
13. catTask = new Task("cat", catSignature);
14. sortTask = new Task("sort", sortSignature);
15. uniqTask = new Task("uniq", uniqSignature);
16.
17. // Create top-level exertion
18. Job sortJob = new Job("main-sort");
19. sortJob.addExertion(catTask);
20. sortJob.addExertion(sortTask);
21. sortJob.addExertion(uniqTask);
22.
23. // Create service contexts
24. Context catContext, sortContext, uniqContext;
25. catContext = new ServiceContext("cat");
26. sortContext = new ServiceContext("sort");
27. uniqContext = new ServiceContext("uniq");
28.
29. catContext.putInValue("/text/in/URL",
30.     "http://host/hello.txt");
31. catContext.putOutValue("/text/out/contents",
32.     null);
33.
34. sortContext.putInValue("/text/in/contents", null);
35. sortContext.putOutValue("/text/out/sorted", null);
36.
37. uniqContext.putInValue("/text/in/sorted", null);
38. uniqContext.putOutValue("/text/out/URL",
39.     "http://host/bye.txt");
40.
41. // Map context outputs to inputs
42. catContext.map("/text/out/contents",
43.     "/text/in/contents", sortContext);
44. sortContext.map("/text/out/sorted",
45.     "/text/in/sorted", uniqContext);
46.
47. catTask.setContext(catContext);
48. sortTask.setContext(sortContext);
49. uniqTask.setContext(uniqContext);
50.
51. // exert collaboration
52. sortJob.exert(null);

In the above EO program, we create three signatures (lines 2–9); each signature is defined by the interface name and the name of the operation that we want to run on any remote object implementing the interface. We use the three signatures to create three tasks (lines 12–15), so by line 16 we have the three separate commands cat, sort, and uniq used in the sort.sh script. The three tasks are combined into the job by analogy to piping Unix commands in the sort.sh script. Thus, by line 22, we have added these commands to the sort.sh script, but have not yet provided input/output parameters nor piped them together:

as is: cat sort uniq
to be: cat hello.txt | sort | uniq > bye.txt

Lines 24–39 create and define three service contexts for our three tasks. By line 40, we have specified some input and output parameters, but still no piping:

as is: cat hello.txt sort uniq bye.txt
to be: cat hello.txt | sort | uniq > bye.txt

Lines 42–45 define the mapping of context output parameters to the related context input parameters. The parameters are context paths from a source context to a target context; the target context is the last parameter of the map operation. By line 50, we have the piping set up, and by analogy our sort.sh script is now complete:

as is: cat hello.txt | sort | uniq > bye.txt

On line 52, we execute the script. If we use the Tenex C shell (tcsh), invoking the script is equivalent to tcsh sort.sh, i.e., passing the script sort.sh on to tcsh. Similarly, to invoke the exertion sortJob, we call sortJob.exert(null). Thus, the exertion is the program and the network shell at the same time, which might at first come as a surprise, but close evaluation shows this to be consistent with the meaning of object-oriented distributed programming. Here, the virtual metacomputer is a federation that does not exist
when the exertion is created. Thus, the notion of the virtual metacomputer is encapsulated in the exertion, which creates the required federation on-the-fly. The federation provides the implementation (metacomputer instructions) specified in the signatures of the EO program before the exertion runs on the network. The sortJob program described above can be rewritten with just one exertion task instead of an exertion job as follows:

 1. // Create service signatures
 2. Signature catSignature, sortSignature,
 3.           uniqSignature;
 4. catSignature = new ServiceSignature("Reader",
 5.     "cat", Type.PREPROCESS);
 6. sortSignature = new ServiceSignature("Sorter",
 7.     "sort", Type.PROCESS);
 8. uniqSignature = new ServiceSignature("Filter",
 9.     "uniq", Type.POSTPROCESS);
10.
11. // Create an exertion task
12. Task sortTask = new Task("task-sort");
13. sortTask.addSignature(catSignature);
14. sortTask.addSignature(sortSignature);
15. sortTask.addSignature(uniqSignature);
16.
17. // Create a service context
18. Context taskContext = new ServiceContext("c-sort");
19. taskContext.putInValue("/text/in/URL",
20.     "http://host/hello.txt");
21. taskContext.putOutValue("/text/out/contents",
22.     null);
23. taskContext.putOutValue("/text/out/sorted",
24.     null);
25. taskContext.putOutValue("/text/out/URL",
26.     "http://host/bye.txt");
27.
28. sortTask.setContext(taskContext);
29.
30. // Activate the task exertion
31. sortTask.exert(null);
In this version of the sort.sh analogy, taskSort, we create three signatures (lines 2–9), but in this case three signature types are assigned, so we can batch them into a single task (lines 12–15). In the jobSort version, all signatures are of the default PROCESS type and each task is created with its own context. Here we create one common taskContext (lines 18–26) that is shared by all signature operations. Finally, on line 31, we execute the exertion task sortTask.

The major difference between the two EO programs jobSort and taskSort is in the exertion execution. The execution of jobSort is in fact coordinated by a Jobber, but the execution of taskSort is coordinated by the service provider implementing the Sorter interface that binds to the PROCESS signature sortSignature. If the provider implementing the Sorter interface also implements the two other interfaces, Reader and Filter, then the execution of taskSort is more efficient, as all three operations can be executed by the same provider with no need for network communication between a Jobber and three collaborating providers in the jobSort federation.

4.1. Service Messaging and Exertions

In object-oriented terminology, a message is the single means of passing control to an object. If the object responds to the message, it has an operation and its implementation (method) for that message. Because object data is encapsulated and not directly accessible, a message is the only way to send data from one object to another. Each message specifies the name (identifier) of the receiving object, the name (selector) of the operation to be invoked, and its parameters. In an unreliable network of objects, the receiving object might not be present or can go away at any time. Thus, we should postpone identification of the receiving object as late as possible. Grouping related messages into one request for the same data set also makes a lot of sense, given network invocation latency and common error handling. These observations lead us to service-oriented messages called exertions.

An exertion encapsulates multiple service signatures that define operations, a service context that defines data, and a control strategy that defines how operations flow during exertion execution. Different types of control exertions (Sec. 4.4) can be used to define collaboration control strategies that can also be configured with signature flow type and access type attributes (see Sec. 4.2). Two basic exertion categories are distinguished: elementary and composite exertions, called exertion task and exertion job, respectively. The corresponding task and job control strategies are described in Sec. 4.5.

As explained in Sec. 3, an exertion can be activated by calling the exertion's exert operation: Exertion.exert(Transaction):Exertion, where a parameter of the Transaction type is required when transactional semantics is needed for all participating nested exertions within the parent one. Thus, EO programming allows us to submit an exertion onto the network and to perform
executions of exertion’s signatures on various service providers, but where does the S2S communication come into play? How do these services communicate with one another if they are all different? Top-level communication between services, or the sending of service requests (exertions), is done through the use of the generic Servicer interface and the operation service that all SORCER services are required to provide — Servicer.service(Exertion, Transaction):Exertion. This top-level service operation takes an exertion as an argument and gives back an exertion as the return value. In Sec. 5 we describe how this operation is realized in the FMI framework. So why are exertions used rather than directly calling on a provider’s method and passing service contexts? There are three basic answers to this. First, passing exertions helps to aid with the network-centric messaging. A service requestor can send an exertion out onto the network — Exertion.exert() — and any service provider can pick it up. The receiving provider can then look at the interface and operation requested within the exertion, and if it does not implement the desired PROCESS interface or provide its desired method, it can continue forwarding it to another service provider who can service it. Second, passing exertions helps with fault detection and recovery. Each exertion has its own completion state associated with it to specify if it has yet to run, has already completed, or has failed. As full exertions are both passed and returned, the user can view the failed exertion to see what method was being called as well as what was used in the service context input nodes that may have caused the problem. As exertions provide all the information needed to execute an exertion including its control strategy, a user would be able to pause a job between tasks, analyze it and make needed updates. To figure out where to resume an exertion, the service provider would simply have to look at the exertion’s completion states and resume the first component one that was not completed yet. In other words, EO programming allows the user, not programmer to update the metaprogram on-thefly, what practically translates into creation new collaborative applications at the exertion runtime (Sobolewski and Kolonay, 2006). Third, the provider can analyze the received exertion for compliance with security polices before any of its signatures can be executed. In particular the Authenticator provider is used to check for the requestor proper identity, and the Authorizer provider is consulted if all exertion’s signatures are accessible to the requestor. 4.2. Service Signatures An activated exertion — Exertion.exert() — initiates the dynamic federation of all needed service providers dynamically — as late as possible — as specified by signatures of top-level and nested exertions. An exertion signature is compared to the operations defined in the service provider’s interface along with a set of signature
In FMI (Sobolewski, 2007), signatures specify indirect invocations of provider methods via the service operation of the top-level Servicer interface, as described in Sec. 4.1. A service Signature is defined by:

• Signature name — a custom name
• Service type name — a service name corresponding to the provider's type (Java interface)
• Selector of the service operation — an operation name defined in the service type
• Operation type — Signature.Type: PROCESS (default), PREPROCESS, POSTPROCESS, or APPEND
• Service access type — Signature.Access: PUSH (default), direct binding to service providers, or PULL, indirect binding via a shared exertion space maintained by the Spacer service
• Flow of control type — Signature.FlowType: SEQUENTIAL (default), PARALLEL, or CONCURRENT
• Priority — an integer value used by the exertion's control strategy
• Execution time flag — if true, the execution time is returned in the service context
• Notifyees — a list of email addresses to notify upon exertion completion
• Service attributes — required requestor attributes matched against the provider's registration attributes

An exertion can comprise a collection of PREPROCESS, POSTPROCESS, and APPEND signatures, but only one PROCESS signature. The PROCESS signature defines the binding provider for the exertion. An APPEND signature defines the service context received from the provider specified by this signature. The received context is appended at runtime to the existing context, which is processed later by the PREPROCESS, PROCESS, and POSTPROCESS operations of the exertion. Appending a service context allows a requestor to use actual network data at runtime that is not available to the requestor when the exertion is activated.

Different languages have different interpretations as to what constitutes an operation signature. For example, in C++ and Java the return type is ignored. In FMI the parameters and return type are all of the Context type. Using the UML advanced operation syntax, the exertion operation (prefixing it with the <<service>> stereotype and postfixing it with tagged values) can be defined as follows: Context {interface = service-type-name, type = operation-type, access = access-type, flow = flow-type, priority = integer, timing = boolean, notifyees = notifyees-list, attributes = registration-attribute-list}.
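Read as Java, the signature attributes above amount to the following interface sketch; the enum constants are exactly those named in the text, while the accessor names are assumed for illustration.

import java.util.List;

// Sketch of a service signature's attributes; PROCESS, PUSH, and SEQUENTIAL
// are the documented defaults.
interface ServiceSignature {
    enum Type     { PROCESS, PREPROCESS, POSTPROCESS, APPEND }
    enum Access   { PUSH, PULL }
    enum FlowType { SEQUENTIAL, PARALLEL, CONCURRENT }

    String name();               // custom signature name
    String serviceTypeName();    // provider's service type (Java interface)
    String selector();           // operation name defined in the service type
    Type type();
    Access accessType();
    FlowType flowType();
    int priority();              // used by the exertion's control strategy
    boolean executionTimeFlag(); // if true, execution time goes into the context
    List<String> notifyees();    // email addresses notified on completion
    List<String> attributes();   // matched against the provider's registration
}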
4.3. Service Contexts

A service context, or simply a context, defined by the Context interface, is a data structure that describes a service provider's ontology along with related data. A provider's ontology is controlled by the provider's vocabulary, which describes data items and the relations between them in the provider's namespace within a specified service domain of interest. A requestor submitting an exertion to a provider has to comply with that ontology, as it specifies how the context data is interpreted and used by the provider. In a service context, attributes and their values are used as atomic conceptual primitives, and complements are used as composite ones. A complement is an attribute sequence (path) with a value at the last position. A context consists of a subject complement and a set of defining context complements. A context usually corresponds to a sentence of natural language (a subject with multiple complements) (Sobolewski, 1991). A service context is a tree-like structure described conceptually by the following EBNF syntax specification:
1. context = [subject ":"] complement {complement}.
2. subject = element.
3. complement = element ";".
4. element = path ["=" value].
5. path = ["/"] attribute {"/" attribute} [{"<" association ">"}] [{"/" attribute}].
6. value = object.
7. attribute = identifier.
8. relation = domain "|" product.
9. association = domain "|" tuple.
10. product = attribute {"|" attribute}.
11. tuple = value {"|" value}.
12. attribute = identifier.
13. domain = identifier.
14. association = identifier.
15. identifier = letter {letter|digit}.
A relation with a single attribute taking values is called a property and is denoted as attribute|attribute. To illustrate the idea of a service context, let us consider the following example (graphically depicted in Fig. 4, where the subject /laboratory/name=SORCER is indicated in green and the association person in red):

/laboratory/name=SORCER:
/university=TTU;
/university/department/name=CS;
Figure 4. An example of a service context.
/university/department/room;
number=20B;
phone/number=806-742-1194;
phone/ext=237;
/director <person|Mike|W|Sobolewski> / [email protected];

where absolute and relative paths are used, and the relation person is defined as person|firstname|initial|lastname, with the following properties used: firstname, initial, lastname, name, university, email, number, ext. A context leaf node, or data node, is where the actual data resides. All absolute context paths define a service namespace. The context namespace with data nodes appended to its context paths is called a context model, or simply a context. A context path is a hierarchical name for a data item in a leaf node. Note that a limited service context can be represented as an XML document — this has been done in SORCER for interoperability — but the power of the Context type comes from the fact that any Java object can be naturally used as a data node. In particular, exertions themselves can be used as data nodes and then executed by providers as needed to run complex iterative programs, e.g., non-linear multidisciplinary optimization (Kolonay et al., 2002).
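A hedged sketch of the same context built programmatically follows; the paths are those of the example above, but putValue/getValue are assumed accessor names and the map-backed class is only a stand-in for the SORCER Context.

import java.util.LinkedHashMap;
import java.util.Map;

// A service context reduced to its essentials: hierarchical paths mapped to
// data nodes, where a data node may be any Java object (even an exertion).
public class ContextSketch {
    private final Map<String, Object> nodes = new LinkedHashMap<>();

    void putValue(String path, Object value) { nodes.put(path, value); }
    Object getValue(String path) { return nodes.get(path); }

    public static void main(String[] args) {
        ContextSketch cxt = new ContextSketch();
        cxt.putValue("laboratory/name", "SORCER");  // the subject
        cxt.putValue("university", "TTU");
        cxt.putValue("university/department/name", "CS");
        cxt.putValue("university/department/room/number", "20B");
        cxt.putValue("university/department/room/phone/number", "806-742-1194");
        cxt.putValue("university/department/room/phone/ext", "237");
        System.out.println(cxt.getValue("university/department/name"));  // prints CS
    }
}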
4.4. Exertion Types

A Task instance specifies an elementary step in an EO program; it is the analog of a statement in procedural programming languages (see the examples in Sec. 4). Thus, it is the minimal unit of structuring in EO programming. If a provider binds to a Task, it has a method for the task's PROCESS signature. The other signatures associated with the Task exertion provide for appending, preprocessing, and postprocessing service contexts by the same provider or its collaborating providers. An APPEND signature defines the context received from the provider specified by this signature. The received context is then appended at runtime to the task context, which is later processed by the PREPROCESS, PROCESS, and POSTPROCESS operations of the task. Appending a service context allows a requestor to use shared network data at runtime. A Task is the single means of passing control to an application service provider in FSOOA. Note that a task can specify a batch of operations that operate on the same service context — the Task's shared execution state. All operations of the task, defined by its signatures, can be executed by the receiving provider or by a group of federating providers coordinated by the provider receiving the task.

A Job instance specifies a "block" of tasks and other jobs. It is the analog of a procedure in imperative programming languages. In EO programming it is a composite of exertions that make up the network collaboration. A Job can reflect a workflow with branching and looping by using control exertions (see Fig. 5). The following control exertions define algorithmic logic in EO programming: IfExertion, WhileExertion, ForExertion, DoExertion, ThrowExertion, TryExertion, BreakExertion, and ContinueExertion. The exertion types currently implemented in SORCER, including the control types, are depicted in Fig. 5.
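A structural sketch of the task/job composite may help here; the class bodies below are stand-ins (the real SORCER types carry contexts, signatures, and control strategies), but the composite shape and the WhileExertion name come from the text.

import java.util.ArrayList;
import java.util.List;

// Composite structure: a job nests tasks, other jobs, and control exertions.
abstract class ExertionNode {
    final String name;
    ExertionNode(String name) { this.name = name; }
}

class TaskNode extends ExertionNode {          // elementary step (statement analog)
    TaskNode(String name) { super(name); }
}

class JobNode extends ExertionNode {           // composite block (procedure analog)
    final List<ExertionNode> components = new ArrayList<>();
    JobNode(String name) { super(name); }
    JobNode add(ExertionNode e) { components.add(e); return this; }
}

class WhileExertion extends ExertionNode {     // looping control exertion
    final ExertionNode body;                   // condition handling omitted here
    WhileExertion(String name, ExertionNode body) { super(name); this.body = body; }
}

public class JobSketch {
    public static void main(String[] args) {
        JobNode job = new JobNode("jobSort")
                .add(new TaskNode("read"))
                .add(new WhileExertion("refine", new TaskNode("sort")))
                .add(new TaskNode("filter"));
        System.out.println(job.name + " nests " + job.components.size() + " exertions");
    }
}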
Figure 5. Exertion types including control exertions that allow for algorithmic logic in EO programming.
4.5. Exertion Control Strategies

In Secs. 4.1 and 4.2, top-level exertion messaging and service signatures were described. This section presents how they are used, at the task level and the job level, to manage the flow of control in EO programs. Before we delve into task and job execution strategies, let us look at three related infrastructure providers, identified by the following interfaces: Jobber, Spacer, and Cataloger.

To begin processing a job, a service requestor must exert the job, which finds its way dynamically to either a Jobber or a Spacer service via Exertion.exert(Transaction):Exertion. The Jobber is responsible for coordinating the execution of the job, much like a command shell coordinates the execution of a batch script (see the programming examples in Sec. 4). The Jobber acts as a service broker by calling upon the proper service providers to execute the component exertions within the given job. The Jobber can dispatch nested service requests either directly, when it finds a proper provider by way of a Cataloger service (falling back to the Jini lookup service), or indirectly, via a shared exertion space through the use of a Spacer service.

SORCER extends the discovery and registration capabilities of the SOA through the use of a Cataloger service. A Cataloger looks through all the Jini lookup services that it is aware of and requests all the SORCER service registrations it can get.
The Cataloger organizes these registrations, which include service proxies, into groups of the same type. Whenever a service requestor needs a certain service, it can go to a Cataloger instead of a lookup service to find what it needs. The Cataloger distributes registrations for the same service in a round-robin fashion to help balance the load between service providers of the same service type.

SORCER also extends task/job execution abilities through the use of a Spacer service. The Spacer can drop an exertion into a shared object space, provided by the Jini JavaSpaces service (Freeman et al., 1999), from which several providers can retrieve relevant exertions, execute them, and return the results back to the object space.

As described before, an exertion is associated with a collection of signatures. There is only one PROCESS signature in this collection, and there can be multiple instances of APPEND, PREPROCESS, and POSTPROCESS signatures. The PROCESS signature is responsible for binding to the service provider that executes the exertion. An exertion activated by a service requestor can be submitted directly or indirectly to the matching service provider. In the direct approach, when the signature's access type is PUSH, the exertion's ServicerAccessor (see Fig. 6) finds the matching service provider against the service type and attributes of the PROCESS signature and submits the exertion to the found provider. Alternatively, when the signature's access type is PULL, the ServicerAccessor can use a Spacer provider that simply drops the exertion into the shared exertion space, to be pulled by matching providers. Each SORCER service provider looks continuously into the space for exertions that match the provider's interfaces and attributes. Each service provider that picks up a matching exertion returns the executed exertion back into the space; the requestor (Tasker, Jobber, or Spacer) then picks up the executed exertion from the space. The exertion space provides a kind of automatic load balancing: the fastest available service provider gets an exertion from the space and joins the exertion's federation.

When a receiving service provider gets a task (directly or indirectly), the task signatures are executed as follows:

• First, all APPEND signatures are processed by the receiving provider in the order specified in the task. The order of signatures is defined by signature priorities if the task's flow type is SEQUENTIAL; otherwise they are dispatched in parallel. As a result, the task's service context is appended with dynamic data delivered from the context providers specified by the APPEND signatures. Appended complementary shared contexts are managed by the receiving provider according to the remote Observer/Observable design pattern (Grand, 1999).
• Second, all PREPROCESS signatures are executed in the order specified in the task. The order is defined as in the first step above. As a result, the task context is ready for applying its PROCESS method.
Figure 6. The primary types of SORCER providers: Tasker, Jobber, and Spacer with supporting ServicerAccessor.
• Third, the PROCESS signature is executed, and the results, including any exceptions and errors, are captured in the task context.
• Fourth, all POSTPROCESS signatures are executed in the order specified in the task. The order is defined as in the first step above. Finally, the resulting task with the processed context is returned to the requestor.

A domain-specific provider calls by reflection the method specified in the exertion's PROCESS signature (interface and selector). All application domain methods used in exertion signatures have the same signature: a single Context-type parameter and a Context-type return value. Thus, a domain-specific interface looks like a common Java RMI interface, with the above simplification of a common signature for all application-specific operations defined in the provider's remote interfaces.
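Written down, such an interface is short; Context below is only a stand-in declaration, while the Remote shape and the single Context-in/Context-out convention are exactly what the text describes.

import java.rmi.Remote;
import java.rmi.RemoteException;

interface Context { }  // stand-in for the SORCER Context type

// A domain-specific provider interface: a plain Java RMI-style interface in
// which every application operation takes a Context and returns a Context.
interface Sorter extends Remote {
    Context sort(Context input) throws RemoteException;
}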
The default PROCESS signature of a job defines a runtime binding to a Jobber; alternatively, the Spacer interface can be used in a job's PROCESS signature. Two major parameters, the job PROCESS signature's access type and its flow type, determine the top-level control strategy. Additionally, the job's service context, called a control context, defines the job's execution preferences. When a Jobber or Spacer gets an exertion job, a relevant dispatcher is assigned by a dispatcher factory that takes into account the job's access type, flow type, and control context configuration. In the SORCER environment there are 12 types of dispatchers that implement different types of control strategies. The assigned dispatcher manages the execution of the job's component exertions either sequentially or in parallel (depending on the value of the flow type), accessing collaborating providers either directly or indirectly (depending on the value of the access type). The default top-level control strategy implements a master/slave computing model with sequential or parallel execution of slave exertions, with the master exertion, if any, executed last. In general, the full set of algorithmic logic operations (concatenation, branching, and looping) is supported. A job's workflow can be defined in terms of the control exertions defined in Sec. 4.4. The access types of job signatures specify the way a Jobber or Spacer accesses collaborating service providers: directly or indirectly. Thus the Spacer provider is usually used for asynchronous access, and the Jobber service is usually used to access needed service providers synchronously.

4.6. S2S Infrastructure

Exertion tasks are usually executed by service providers of the Tasker type, and exertion jobs by rendezvous providers of the Jobber or Spacer type. While a Tasker manages a single service context for the received task, a rendezvous provider manages a shared context (shared execution state) for the job federation and provides substitutions for input parameters that are mapped to output parameters (see the first programming example in Sec. 4) in the service contexts of component exertions. Either one, a Tasker or a rendezvous provider, creates a federation of the required service providers at runtime, but federations managed by rendezvous providers are usually larger than those managed by Taskers.

All SORCER service providers implement the top-level Servicer interface. A peer of the Servicer type that is unable to execute an Exertion for any reason forwards the Exertion to any available Servicer matching the exertion's PROCESS signature and returns the resulting exertion back to its requestor. Thus, each Servicer can initiate a federation created in response to Servicer.service(Exertion, Transaction). Servicers come together to form a federation participating in a collaboration for the activated exertion. When the exertion is complete, Servicers leave the federation and seek a new exertion to join. Note that the same exertion can form a different federation for each execution, due to the dynamic nature of looking up Servicers by their required interfaces.
As every Servicer can accept any exertion, many specialized Servicers have well-defined roles in FSOOA, as described in Sec. 3.

5. The Triple Command Pattern

Polymorphism lets us encapsulate a request, establish the signature of the operation to call, and vary the effect of calling the underlying operation by varying its implementation. The Command design pattern (Grand, 1999) establishes an operation signature in a generic interface and defines various implementations of that interface. In FMI, three interfaces are defined with the following three commands:

1. Exertion.exert(Transaction):Exertion — join the federation;
2. Servicer.service(Exertion, Transaction):Exertion — request a service in the federation from the top-level Servicer obtained for the activated exertion;
3. Exerter.exert(Exertion, Transaction):Exertion — execute the argument Exertion by the target provider in the federation.

These three commands define the Triple Command pattern, which makes EO programming possible via various implementations of the three interfaces: Exertion, Servicer, and Exerter. The FMI approach allows for:

• The P2P environment via the Servicer interface
• Extensive modularization of programming P2P collaborations by the Exertion type
• The customized execution of exertions by providers of the Exerter type
• Common synergistic extensibility (exertions, servicers, exerters) from the triple design pattern

Thus, requestors can exert simple metaprograms (tasks) and structured metaprograms (jobs with control exertions), with or without transactional semantics, as specified in (1) above.
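The three commands translate directly into Java interface declarations; the operation names and parameter lists are those given above, Transaction is the Jini net.jini.core.transaction type, and the use of RemoteException as the checked exception is an assumption.

import java.rmi.Remote;
import java.rmi.RemoteException;
import net.jini.core.transaction.Transaction;

// The Triple Command pattern as three generic interfaces.
interface Exertion {
    Exertion exert(Transaction txn) throws RemoteException;     // (1) join the federation
}

interface Servicer extends Remote {
    Exertion service(Exertion exertion, Transaction txn)
            throws RemoteException;                              // (2) top-level request
}

interface Exerter {
    Exertion exert(Exertion exertion, Transaction txn)
            throws RemoteException;                              // (3) execute in the federation
}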
The Triple Command pattern in SORCER works as follows:

1. An exertion is invoked by calling Exertion.exert(Transaction). The exert operation, implemented in ServiceExertion, uses a ServicerAccessor to locate at runtime the provider matching the exertion's PROCESS signature. If a Subject in the exertion is not set, the requestor has to authenticate itself with the Authenticator service. A Subject represents a grouping of related security information (public and private credentials) for the requestor. After successful authentication, the Subject instance is created and the exertion can be passed onto the network.
2. If the matching provider is found, the Servicer.service(Exertion, Transaction) method is invoked on its access proxy (which can also be a smart proxy). The matching provider first verifies with the Authorizer service that the exertion's Subject is authorized to execute the operation defined by the exertion's PROCESS signature.
3. When the requestor is authenticated and authorized by the Servicer to invoke the method defined by the exertion's PROCESS signature, the Servicer calls its Exerter operation: Exerter.exert(Exertion, Transaction).
4. The Exerter.exert method calls exert on a Tasker, Jobber, or Spacer, depending on the type of the exertion (Task or Job) and its control strategy. Permissions to execute the remaining signatures, if any, of the APPEND, PREPROCESS, and POSTPROCESS types are checked with the Authorizer service. If all of them are authorized, the provider calls the APPEND, PREPROCESS, PROCESS, and POSTPROCESS methods as described in Sec. 4.5.

In the FMI approach, a requestor can create an exertion composed of any hierarchically nested exertions, with the required service contexts. The provider's object proxy, service context template, and registration attributes are network-centric; all of them are part of the provider's registration, so they can be accessed via Cataloger or lookup services by requestors on the network, for example by service browsers (Inca X, 2008) or custom service UI agents (The Service UI Project, 2008). In SORCER, using these zero-install service UIs, the user can define data nodes in service context templates downloaded directly from providers and create the related tasks/jobs interactively, to be executed and monitored on the virtual metacomputer.

Individual service providers, either Taskers or rendezvous peers, implement their own exert(Exertion, Transaction) method according to their service semantics and related control strategy. SORCER taskers, jobbers, and spacers are implemented by the ServiceTasker, ServiceJobber, and ServiceSpacer classes, respectively (see Fig. 6). A SORCER domain-specific provider can be a subclass of ServiceTasker, ServiceJobber, or ServiceSpacer. Alternatively, any of these three providers can be set up as an application provider by dependency injection, using the Jini configuration methodology. Twelve proxying methods have been developed in SORCER to configure an off-the-shelf ServiceTasker, ServiceJobber, or ServiceSpacer. In general, many different implementations of taskers, jobbers, and spacers can be used in the SORCER environment with different implementations of exertions. A service requestor, via related attributes in its signatures, makes the appropriate runtime choices as to which implementations to run in an exertion collaboration.

Invoking an exertion, let us say program, is similar to invoking an executable program program.exe at the command prompt. If we use the Tenex C shell (tcsh), invoking the program is equivalent to "tcsh program.exe," i.e., passing the executable program.exe to tcsh. Similarly, to invoke a metaprogram using FMI, in this case the exertion program, we call
Figure 7. A job federation. The solid line (the first from the left) indicates the originating FMI invocation: Exertion.exert(Transaction). The top-level job with component exertions is depicted below the service grid (a cloud). Late bindings of all signatures are indicated by dashed lines that define the job’s federation (metacomputer).
"program.exert(null)" if no transactional semantics is required. Thus, the exertion is both the metaprogram and the network shell of the SORCER metaoperating system. Here, the virtual metacomputer is a federation that does not exist when the exertion is created; the notion of the virtual metacomputer is encapsulated in the exertion, which is managed by the FMI framework. In Fig. 7 a cloud represents a service grid, while the metacomputer is the subset of providers that federate for the job shown below the cloud.

The fact that the exertion is the metaprogram and the network shell at the same time brings us back to the distribution transparency issue discussed in Sec. 2. It might appear that Exertion objects are network wrappers, as they hide the network's intrinsically unpredictable behavior. However, exertions are not distributed objects: they do not implement any remote interfaces; they are local objects representing network requests only. Servicers are distributed objects, and they collaborate dynamically with other FMI infrastructure providers addressing the real aspects of networking. The network's intrinsically unpredictable behavior is addressed by the SORCER distributed objects: Taskers, Jobbers, Spacers, Catalogers, FileStorers, Authenticators, Authorizers, KeyStorers, Policers, etc. (see Fig. 3), which define metacomputing operating system services.
The FMI infrastructure facilitates EO programming and concurrent metaprogram execution using the presented framework, and allows for constructing large-scale, reliable, object-oriented distributed systems from unreliable distributed components: Servicers.

6. Conclusions

A distributed system is not just a collection of independent distributed objects; it is a network of dynamic objects that come and go. From the object-oriented point of view, the network of dynamic objects is the problem domain of an object-oriented distributed system, which requires relevant abstractions in the solution space — for example, the presented FMI framework. Exertion-based programming introduces a new abstraction in the solution space, with service providers and exertions instead of conventional objects and messages. Exertions not only encapsulate operations, data, and control strategies; they encapsulate the relevant federations of dynamic service providers as well. Service providers can be easily deployed in SORCER by injecting implementations of domain-specific interfaces into the FMI framework. These providers register proxies, including smart proxies, via dependency injection, using the 12 methods already investigated in the SORCER laboratory. Executing a top-level exertion means sending it onto the network and forming, at runtime, a federation of the currently available infrastructure (FMI) and domain-specific providers. The federation works on the service contexts of all nested exertions collaboratively, as specified by the control strategies of the top-level and component exertions. The fact that the control strategy is exposed directly to the user in a modular way allows him/her to create new distributed applications on-the-fly. For the updated exertion and its refined control strategy, the created federation becomes the new implementation of the applied exertion — a truly adaptable exertion-oriented application. When the federation is formed, each exertion operation has its corresponding method (code) available on the network. Services, as specified by exertion signatures, are invoked only indirectly, by passing exertions on to providers via service object proxies that are in fact access proxies, allowing service providers to enforce security policies on access to the required services. If access to the operation is granted, the operation defined by an exertion's PROCESS signature is invoked by reflection. The FMI framework allows for P2P computing via the Servicer interface, extensive modularization of Exertions and Exerters, and extensibility from the Triple Command design pattern. The presented EO programming methodology has been successfully deployed and tested in multiple concurrent engineering and large-scale distributed applications (Burton et al., 2002; Kolonay et al., 2002, 2007; Sampath et al., 2002; Kao et al., 2003; Lapinski and Sobolewski, 2003; Khurana et al., 2005; Sobolewski and Kolonay, 2006; Berger and Sobolewski, 2007a; Turner and Sobolewski, 2007; Goel et al., 2005, 2007, 2008).
Acknowledgments

This work was partially supported by the Air Force Research Lab, Air Vehicles Directorate, Multidisciplinary Technology Center, contract number F33615-03-D3307, Algorithms for Federated High Fidelity Engineering Design Optimization. I would like to express my gratitude to all those who helped me in my SORCER research. I would like to thank all my colleagues at AFRL/RBSD and the SORCER Lab, TTU; they shared their views, ideas, and experience with me, and I am very thankful to them for that. In particular, I would like to express my gratitude to Dr. Ray Kolonay, my technical advisor at AFRL/RBSD, for his support, encouragement, and advice.

References

Berger, M and M Sobolewski (2005). SILENUS — A federated service-oriented approach to distributed file systems. In Next Generation Concurrent Engineering, Sobolewski, M and P Ghodous (eds.), 89–96. ISPE, Inc./Omnipress, ISBN 0-9768246-0-4.

Berger, M and M Sobolewski (2007a). Lessons learned from the SILENUS federated file system. In Complex Systems Concurrent Engineering, Loureiro, G and R Curran (eds.), 431–440. Springer Verlag, ISBN 978-1-84628-975-0.

Berger, M and M Sobolewski (2007b). A dual-time vector clock based synchronization mechanism for key-value data in the SILENUS file system. In IEEE Third International Workshop on Scheduling and Resource Management for Parallel and Distributed Systems (SRMPDS'07), Hsinchu, Taiwan.

Birrell, AD and BJ Nelson (1983). Implementing Remote Procedure Calls. XEROX CSL-83-7.

Burton, SA, R Tappeta, RM Kolonay and D Padmanabhan (2002). Turbine blade reliability-based optimization using variable-complexity method. In 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Denver, Colorado, AIAA 2002-1710.

Edwards, WK (2000). Core Jini, 2nd edn. Prentice Hall.

Fallacies of distributed computing. http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing [15 January 2008].

FIPER: Federated intelligent product environment. http://sorcer.cs.ttu.edu/fiper/fiper.html [15 January 2008].

Foster, I, C Kesselman and S Tuecke (2001). The anatomy of the grid: Enabling scalable virtual organizations. International Journal of High Performance Computing Applications, 15(3).

Freeman, E, S Hupfer and K Arnold (1999). JavaSpaces Principles, Patterns, and Practice. Addison-Wesley, ISBN 0-201-30955-6.

Goel, S, S Talya and M Sobolewski (2005). Preliminary design using distributed service-based computing. In Proceedings of the 12th Conference on Concurrent Engineering: Research and Applications, 113–120. ISPE, Inc.

Goel, S, SS Talya and M Sobolewski (2007). Service-based P2P overlay network for collaborative problem solving. Decision Support Systems, 43(2), 547–568.
Goel, S, SS Talya and M Sobolewski (2008). Mapping engineering design processes onto a service-grid: Turbine design optimization. International Journal of Concurrent Engineering: Research & Applications, 16, 139–147.

Grand, M (1999). Patterns in Java, Vol. 1. Wiley, ISBN 0-471-25841-5.

Inca X Service Browser for Jini Technology (2008). http://www.incax.com/service-browser.htm [15 January 2008].

Jini Architecture Specification, Version 2.1 (2001). Sun Microsystems Inc., http://www.sun.com/software/jini/specs/jini1.2html/jini-title.html [15 January 2008].

JXTA (2008). Sun Microsystems Inc., https://jxta.dev.java.net/ [15 January 2008].

Kao, KJ, CE Seeley, S Yin, RM Kolonay, T Rus and MJ Paradis (2003). Business-to-business virtual collaboration of aircraft engine combustor design. In Proceedings of DETC'03 ASME 2003 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Chicago, IL.

Khurana, V, M Berger and M Sobolewski (2005). A federated grid environment with replication services. In Next Generation Concurrent Engineering, Sobolewski, M and P Ghodous (eds.), 97–103. ISPE Inc./Omnipress, ISBN 0-9768246-0-4.

Kolonay, RM, M Sobolewski, R Tappeta, M Paradis and S Burton (2002). Network-centric MAO environment. In The Society for Modeling and Simulation International, Western Multiconference, San Antonio, TX.

Kolonay, RM, ED Thompson, JA Camberos and F Eastep (2007). Active control of transpiration boundary conditions for drag minimization with an Euler CFD solver, AIAA-2007-1891. In 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Honolulu, Hawaii.

Lapinski, M and M Sobolewski (2003). Managing notifications in a federated S2S environment. International Journal of Concurrent Engineering: Research & Applications, 11, 17–25.

McGovern, J, S Tyagi, ME Stevens and S Mathew (2003). Java Web Services Architecture. Morgan Kaufmann.

Nimrod: Tools for distributed parametric modelling. http://www.csse.monash.edu.au/~davida/nimrod/nimrodg.htm [5 March 2008].

Package net.jini.jeri. Sun Microsystems Inc., http://java.sun.com/products/jini/2.1/doc/api/net/jini/jeri/package-summary.html [15 January 2008].

Pitt, E and K McNiff (2001). java.rmi: The Remote Method Invocation Guide. Addison-Wesley Professional.

Project Rio, A dynamic service architecture for distributed applications. https://rio.dev.java.net/ [15 January 2008].

Röhl, PJ, RM Kolonay, RK Irani, M Sobolewski and K Kao (2000). A federated intelligent product environment, AIAA-2000-4902. In 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, September 6–8.

Ruh, WA, T Herron and P Klinker (1999). IIOP Complete: Understanding CORBA and Middleware Interoperability. Addison-Wesley.

Sampath, R, RM Kolonay and CM Kuhne (2002). 2D/3D CFD design optimization using the federated intelligent product environment (FIPER) technology, AIAA-2002-5479. In 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA.
Sobolewski, M (1991). Percept conceptualizations and their knowledge representation schemes. In Methodologies for Intelligent Systems, Ras, ZW and M Zemankova (eds.), Lecture Notes in AI 542, 236–245. Berlin: Springer-Verlag.

Sobolewski, M (2002). Federated P2P services in CE environments. In Advances in Concurrent Engineering, Jardim-Gonçalves, R, R Roy and A Steiger-Garção (eds.), 13–22. A.A. Balkema Publishers.

Sobolewski, M (2007). Federated method invocation with exertions. In Proceedings of the 2007 IMCSIT Conference, 765–778. PTI Press, ISSN 1896-7094.

Sobolewski, M (2008). Federated collaborations with exertions. In 17th IEEE International Workshop on Enabling Technologies: Infrastructures for Collaborative Enterprises, 127–137. Rome, Italy.

Sobolewski, M (2008a). SORCER: Computing and metacomputing intergrid. In 10th International Conference on Enterprise Information Systems, Barcelona, Spain.

Sobolewski, M (2008b). Exertion oriented programming. IADIS, 3(1), 86–109, ISSN 1646-3692.

Sobolewski, M and R Kolonay (2006). Federated grid computing with interactive service-oriented programming. International Journal of Concurrent Engineering: Research & Applications, 14(1), 55–66.

Sobolewski, M, S Soorianarayanan and R-K Malladi-Venkata (2003). Service-oriented file sharing. In Proceedings of the IASTED International Conference on Communications, Internet, and Information Technology, 633–639. ACTA Press.

Soorianarayanan, S and M Sobolewski (2004). Monitoring federated services in CE. In Concurrent Engineering: The Worldwide Engineering Grid, 89–95. Tsinghua Press and Springer Verlag.

SORCER Research Group. Texas Tech University. http://sorcer.cs.ttu.edu/ [15 January 2008].

SORCER Research Topics. Texas Tech University. http://sorcer.cs.ttu.edu/theses/ [15 January 2008].

Sotomayor, B and L Childers (2005). Globus Toolkit 4: Programming Java Services. Morgan Kaufmann.

Thain, D, T Tannenbaum and M Livny (2003). Condor and the grid. In Grid Computing: Making the Global Infrastructure a Reality, Berman, F, AJG Hey and G Fox (eds.). John Wiley.

Turner, A and M Sobolewski (2007). FICUS — A federated service-oriented file transfer framework. In Complex Systems Concurrent Engineering, Loureiro, G and R Curran (eds.), 421–430. Springer Verlag, ISBN 978-1-84628-975-0.

The Service UI Project. http://www.artima.com/jini/serviceui/index.html [15 January 2008].

Waldo, J (2008). The End of Protocols. http://java.sun.com/developer/technicalArticles/jini/protocols.html [15 January 2008].

Zhao, S and M Sobolewski (2001). Context model sharing in the FIPER environment. In Proceedings of the 8th International Conference on Concurrent Engineering: Research and Applications, Anaheim, CA.
Biographical Note

Dr. M. Sobolewski joined the Computer Science Department at Texas Tech University as a Professor in September 2002. He is the Principal Investigator and Director of the SORCER laboratory, focused on research in network, service, and distributed object-centric programming, and metaprogramming. While at the GE Global Research Center he was the Chief Architect of the Federated Intelligent Product EnviRonment (FIPER) project and developed 17 other successful distributed systems for various GE business components. Prior to coming to the US, during an 18-year career with the Polish Academy of Sciences, Warsaw, Poland, he was the head of the Picture Recognition and Processing Department and the head of the Expert Systems Laboratory, doing research in the areas of knowledge representation, knowledge-based systems, pattern recognition, image processing, neural networks, and graphical interfaces. He has served as a visiting professor, lecturer, and consultant in Sweden, Finland, Italy, Switzerland, Germany, Hungary, Czechoslovakia, Poland, Russia, and the USA. He has over 30 years of experience in the development of large-scale computing systems.
Chapter 36
A New B2B Architecture Using Ontology and Web Services Technology

YOUCEF AKLOUF
University of Science and Technology, USTHB, Algeria
BP 32 El Alia 16112 Bab Ezzouar, Alger
[email protected]
[email protected]
This chapter proposes an approach that integrates both ontologies and web services technologies in a business-to-business product exchange model. It defines a platform based on the three levels required to build a generic, new way for inter-enterprise exchange. The first level of this architecture is the discovering part, used for localizing partners of the exchange. The second level describes the business process part, which provides a choreography of exchanges, and the third level presents the content part, which details the product characterization data. To support our findings, an architecture is developed based on web services technology, which is currently evolving rapidly. Web services allow systems to communicate with each other using standard internet technologies: systems that have to communicate use communication protocols and data formats that both sides understand. Interest in web services has coincided with the proliferation of XML, Java technology, and business-to-business commerce.

Keywords: B2B; business processes; Java; ontology; PLIB; web services.
1. Introduction

E-business consortia are developing business process (BP) standards and specifications to allow partners involved in the exchange relationship to interact in a reliable and interoperable manner. Over the last 20 years, e-commerce has been recognized as an efficient tool for handling complex exchanges and transactions between companies. E-commerce is becoming even more important in developing new activities and new models, especially for business-to-business (B2B) interactions. B2B is concerned with all activities involved in obtaining information about products and services and managing these information flows between organizations. B2B architectures are difficult to conceptualize because they handle several scenarios as a BP and several kinds of content with different formats, defined separately by different organizations.
So, to ensure that e-commerce systems are well-equipped to address these different issues, it is necessary to have a new view of how we describe all the concepts involved in such interactions. The main concepts in an e-commerce model are the BP and the content (payload, or useful data). These two pieces of information are defined by each standard in a specific format. This chapter proposes an adaptive architecture that gathers product catalogue standards together with horizontal B2B standards in order to automate exchange and purchasing operations.

A number of standards try to define a global and generic architecture covering large industry sectors and areas. The oldest is the UN initiative: Electronic Data Interchange for Administration, Commerce, and Transport (EDIFACT) (United Nations, 1999). The shortcoming of this generation of approaches is that they require a significant programming effort from organizations to meet the standards, so the cost of adopting and implementing such a solution is high. EDIFACT has been used by only a small number of companies and was out of reach for small organizations. To overcome this limit and close this gap, thereby reducing costs and improving the quality of interactions and communications between partners, a new generation of standards appeared together with the development of the internet infrastructure. For example, e-commerce portals for online purchasing and ordering of products from online catalogues (harbinger.net, mysap.com, verticalnet.com) render transaction content specifications easily in standard browsers. In addition to the XML and EDI standards, a de facto proprietary standard exists: RosettaNet, a B2B architecture used for IT and semiconductor content. The BP model of RosettaNet, named PIP, can be used with other catalogues. ebXML is a horizontal model defined without any relation to particular product catalogues; it describes specifications for BPs, core components, the registry, and all the pieces required to implement a B2B system.

Therefore, on top of the XML structure, two kinds of knowledge need to be modeled and shared across all the partners involved in B2B transactions: business process knowledge and product knowledge. Such shared conceptualizations are usually called ontologies. Thus, two kinds of standard ontologies have to be developed: ontologies of products and services on the one hand, and ontologies of business processes on the other. Building a B2B system should draw on the existing standards, overcoming their limits and combining their advantages, while ensuring interoperability, a key issue that allows several product catalogues to be shared within the same architecture. This is the main reason why we propose in this chapter to use a product ontology for characterizing product catalogues and to define a BP ontology for managing any kind of transaction. Shared ontologies play a crucial role in supporting an adaptive architecture, saving time and money and increasing efficiency in B2B transactions.
In the first part of this chapter, we argue that ontologies for both business processes and products are needed to automate B2B exchange. In the second part, we give an overview of the best-known B2B architecture standards, such as EDIFACT, RosettaNet, and ebXML; we then briefly describe web services technology and its impact in the context of B2B exchange. The third part describes the new architecture that we propose and its three parts. The next section describes separately the different components of the system, with an overview of the objectives and functionalities of each part. Finally, the last part shows a case study of the developed platform using the PLIB ontology in the context of a RosettaNet business protocol, the latter developed using the web services paradigm.

2. Ontologies Requirements for E-Commerce Standards

In spite of the promise that e-commerce holds, most implementing organizations have not received the full benefits from their B2B investments. This is because they use proprietary solutions (backend systems) for modeling BPs and for describing product catalogues. In addition, this information does not reflect an agreement among all participants (Rousset et al., 2005). For this reason, B2B e-commerce exchange requires consensual decisions about all data shared and all services used in transactions. As suggested by several authors (Annie Becker, 2008; BIC, 2001; Trastour et al., 2003), it is necessary to have a meta-model that states the high-level architectural elements without getting into the details of their implementations. Such a model will remain relatively stable over time, even if technologies, standards, and implementation details evolve (Souris, 2006). It also allows splitting the work and selecting a best-in-class solution for each facet of the global system, in order to be adaptive, to ensure scalability, and to provide a highly integrated organization.

Ontology is used to define this global view as an abstraction level for sharing product catalogues and BP descriptions. Thus, product data and BPs are shared and have the same description for all participants, in order to promote exchanges between them. Ontology deals with the interoperability of business contents and of message exchanges among the business systems of different enterprises. While it is clear that ontologies need to be integrated into e-commerce architectures (Jasper and Uschold, 1999; McGuinness, 1999), many issues and challenges have to be addressed very carefully. The implementation of such a system technology is also a key management issue: it is a costly and complex process. Accordingly, different organizations and industry leaders must work jointly in order to drive B2B standards and to define both the content ontology and the collaboration through the different implementation steps, so as to reach an agreement about the shared ontologies. Therefore, on top of the structure of the exchanged message, two kinds of knowledge need to be modeled and shared across all the partners involved in B2B transactions: business process knowledge and product and service knowledge (Sounderpandian et al., 2007). Such shared conceptualizations are usually called ontologies. Thus, two kinds of standard ontologies have to be developed: ontologies of products and services on the one hand, and ontologies of business processes on the other.
With this motivation in mind, our objective is to define a generic architecture allowing the use of any BP, interfacing it with any product catalogue content. This model can be applied in several areas (Aklouf and Drias, 2008).

3. Basic Concepts

Several kinds of electronic commerce architectures are available; among them we find B2B standards. B2B refers to commerce established between two or more businesses (in most cases between a consumer and its providers). Electronic marketplaces for B2B bring together many online suppliers and buyers, and have to integrate many product catalogues provided by the partners. Each partner or company can use its own format to represent products in its product catalogue. Section 4 gives an overview of topics related to modern B2B standards in the form of a stack, with content, BP, and registry standards as the most important layers. These elements are gathered in the new model proposed in Sec. 5.

If we intend to do business between partners in a B2B environment, it is necessary to have an architecture that defines and specifies each part or standard used in the system. Since the system is layered, and for development purposes, we must ensure that the defined layers have minimal inter-dependencies. It is therefore recommended to use a defined standard for each layer, promoting the orthogonality principle presented in Aklouf et al. (2005). Several standards for defining a B2B architecture, in both horizontal and vertical ways, shall be used. A vertical standard is a system that is specific to certain activities or particular products, e.g., the RosettaNet standard, specific to semiconductor and electronic information commerce. A horizontal standard is a general system that defines exchange protocols and information formats without referencing any product or service, e.g., the ebXML standard. The approach presented in this chapter is based on the application of the orthogonality principle described earlier to the RosettaNet convergence model explained in the next section.

4. The Layered Architecture of B2B Electronic Commerce

This part focuses on the definition of a high-level architecture for B2B systems. It serves as a framework for both consumers and suppliers. In the framework described herein, the different elements of the defined B2B architecture can be represented as layers, each one built on top of the other, i.e., each layer supports the functions provided by the layer below it.

4.1. The RosettaNet Convergence Standard

As shown previously, B2B electronic commerce has a layered architecture. RosettaNet defines such a convergence model, showing how different organizations and standards can be used efficiently in a common model. The convergence model of RosettaNet (2001a) is a document defining how the multiple initiatives (SOAP, ebXML, RosettaNet, etc.) are complementary, with the aim of setting up a solution for B2B integration in a "supply chain" problem.
Figure 1. The layered architecture of B2B electronic commerce: universal business processes; universal technical dictionary structure; universal business dictionary structure and content; universal registry and repository structure; universal messaging service.
This initiative is an important contribution. Indeed, the RosettaNet standard focuses on the articulation of the defined XML-based standards. For RosettaNet developers, a B2B solution contains the layers presented in Fig. 1, in addition to layers specific to the trade associated with the companies. A description of each element of the architecture is given below.

4.1.1. Universal business processes

This layer specifies BPs that are applicable to a broad range of businesses, regardless of the industry in which the business operates or of the specific characteristics of the business. These processes cover several domains of activity that businesses engage in, such as collaborative product development, request for quote, supply chain execution, purchasing, and manufacturing. A BP is a set of business rules, definitions of the roles of the parties involved, and trigger events that provide the context of the information exchange (Aberdeen Groups, 2001; BIC, 2001), e.g., invoicing processes and purchasing processes. In our work, we recommend the use of RosettaNet PIPs (2001c) and ebXML BPSS (2001) for this layer.

4.1.2. Universal technical dictionary structure (content-oriented ontology)

This component manages the structure for defining the form, fit, and function of any product or service, regardless of the industry that manufactures it. Specifically, the universal technical dictionary structure specifies the structure of the specialized technical dictionary or dictionaries used to obtain commonly used domain-specific terminology and accepted values for any product or service. The pre-defined structure is used as the basis for defining supply chain-specific technical dictionary content (i.e., form, fit, and function attributes). The RosettaNet Technical Dictionary (RNTD) is an example of this kind of dictionary. The PLIB ontology model (Pierra et al., 2005) can be used to define technical product information.
4.1.3. Universal business dictionary structure and content

The universal business dictionary structure and content specify the structure of the payload (business information). It is an aggregation of all content fields, elements, constraints, code lists, and objects used within electronic BPs. It describes all of the business content, attributes, and relationships between elements that exist in business documents. For example, the field descriptors needed for a price-and-availability query would be: product quantity, product identification, global currency code, and monetary amount.

4.1.4. Universal registry and repository structure

This layer defines both the registry and the repository.

1. The repository structure represents the standardized repository services that specify the structure, access protocol, and schemas for business content storage and retrieval. It includes terms, constraints, representations, etc. Examples of such repository models are the RosettaNet dictionary repository and the ebXML Reg/Rep.
2. The registry services specify the structure and the access protocol of registries and repositories. A registry is accessed by trading entities to discover each other's capabilities and services. It covers naming, directory, registry, privacy, authorization, and identification services. The registry in this layer is used to publish and register BPs and services. Examples of such registries are the ebXML registry and the Universal Description, Discovery and Integration (UDDI) registry.

4.1.5. Universal messaging service

This layer defines a standardized message and envelope structure, with layout definitions for their specific technical purposes. It addresses the need to record session and communication settings for message transport in order to enable coordination between parties in a business transaction, including parameters that control reliable messaging, secured messaging, etc. In this layer, the most used specifications are RosettaNet RNIF 1.1 (2001b), SOAP, and ebXML MS (2001).

Adopting this architecture eases the use, in any layer, of the most widely used standard and the best technologies. Proceeding this way preserves the orthogonality principle defined earlier and put into practice in the next sections.

5. Overview of the Proposed B2B Model

This part of the chapter presents our proposal for a B2B model. An organization of a new B2B system is suggested (Aklouf and Drias, 2008). It represents our point of view through a stack of three layers or levels (see Fig. 2). The role of each layer is to deliver the necessary information needed by the other layers.
Figure 2. The proposed B2B model.
The first task is the discovery and localization of the partner of exchange; this activity is devoted to the discovering layer. Once the partner is localized and the various software actions are processed, the next step deals with the exchange, performed by the two remaining layers. One layer, the business process layer, defines the scenario of the exchange, i.e., the order of the operations triggered by the partners. The second layer defines the payload, or content, that actually represents the needed information. The integrated framework for the B2B business model, with its major structural components and their relationships, is illustrated in Fig. 2.

These three levels or layers represent three concepts and can be viewed as open issues for which answers must be provided. A proposition for each level is given by the proposed architecture. To communicate and exchange information and services between partners using this new B2B model, it is necessary to describe precisely the discovering layer, the business process layer, and the content (product ontology) layer, with details about their implementation. Doing commerce using this architecture thus proceeds as follows:

1. The customer uses the discovering layer (the universal registry and repository structure layer in Fig. 1) to retrieve a partner (supplier).
2. Once this partner (supplier) is found, a collaboration agreement is established using the BP layer (the universal business process layer in Fig. 1).
3. The exchange starts, using a content format accepted by both actors of the agreed exchange (the technical and business dictionary layers in Fig. 1).

5.1. Objectives of the Proposed Architecture

The objective of the suggested model is to set up an adaptive architecture which can be used as a horizontal or a vertical system. A vertical standard is a system which is specific to certain kinds of activities or certain particular products; for instance, RosettaNet is a standard specific to semiconductor and electronic information commerce. A horizontal standard is a general system which defines exchange protocols and information formats without referencing any product or service.
standard ebXML is an example of a model which proposes generic, standardized services for most industry branches and which can be adapted to particular fields and contexts. This model also provides a collaborative environment allowing industrial managers, consortia, and standards developers to work jointly, in order to obtain an effective and reliable exchange system in which the integration of companies is achieved at lower cost. The main objectives of this architecture are:
• The description of an infrastructure that proposes an intelligent module for discovering and localizing partners who propose services and e-business catalogues; the best-known standards here are UDDI (Dogac et al., 2002) and the ebXML registry.
• The proposition of a business process ontology model based on existing standards such as the PIPs of RosettaNet (RosettaNet, 2001c), ebXML BPSS (ebXML, 2005), BPMN (Business Process Modeling Notation), BPEL (Business Process Execution Language), and so on.
• Finally, the integration of existing industrial catalogues of components describing objects or services from several industrial sectors; among them we find the RNTD and RNBD of RosettaNet (RosettaNet, 2001b), PLIB (ISO 13584-42, 2004), and so on.
Our principal focus is on the definition of an open architecture allowing the integration of different technologies and knowledge coming from heterogeneous communities. This architecture requires the modeling of business processes adaptable to the needs and requirements dictated by the specificity of these exchanges.
5.2. Architecture Functionalities
This model presents a set of useful functionalities:
1. The possibility of adding new functionalities to the system, such as a product catalogue model or a separately developed dictionary
2. The possibility of managing applications and data locally or remotely
3. The factorization of a body of knowledge, such as standards and business rules, useful to the various parts of the system
4. The possibility of modifying the topology of the model following the adoption of new standards or the implementation of a new tool (ensuring the evolution, or scalability, of the system)
5. The flexibility of the model, which accepts the adhesion of new partners without modifying the architecture
6. The loose coupling of the model, which makes it possible to take into account competences and tools proposed by partners for use within the defined architecture (Aklouf et al., 2003)
5.3. Discovering Layer
The first question is how to find the partners with whom information is to be exchanged, i.e., which company provides exactly what a consumer needs. A mechanism must be available by which business documents, information, and relevant metadata can be registered and retrieved as the result of a query, so that businesses can discover, interact, and share information with each partner involved in the exchange. Before making exchanges, it is necessary to find, in a more or less specific way, the partners involved in the exchange process. It is necessary to use or build a registry that includes detailed BPs and specific information about services. Using such a registry, companies have a standard method to exchange business messages, conduct trading relationships, communicate data in a common way, and define and register BPs. Directories such as yellow pages, UDDI, and, more recently, the ebXML registry are provided for this purpose.

5.3.1. Objectives of the discovering component
The initial goal of this module is to provide a shared and universal place where each company, whatever its size, its sphere of activity, and its localization, can publish its services and find and localize the required services in a dynamic way. This vision has not yet materialized; there are, however, some concrete uses of standards such as UDDI and ebXML, currently limited to restricted and controlled frameworks: intranets. Indexing the various services offered by an organization matters chiefly when that organization wishes its services to be used and shared. The importance of having a central directory is to guarantee the discovery of the services provided by an organization once they are published (it acts like a specialized search engine for organization profiles and services); in exactly the same way, a web page will not be visited if it is not referenced in a search engine. In other words, this component determines how company information and services must be organized in order to give access to the community which shares this directory. This directory or repository must provide a general schema, and each company must publish its profile as well as its services according to this schema. The repository must support the following principles:
• Access to the directory and the suggested schema is free, without control or limits.
• Access to the directory data is through contextual search or through a hierarchical key organization.
• The management of the repository is ensured by one or more organizations.
• A broker space is offered for receiving submissions before the data are published and validated; this space makes it possible to correct and modify information containing errors.
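To make the discovery step concrete, the following minimal sketch queries a registry for organization profiles using the JAXR API that the case study adopts for its registry client (see Sec. 6.1). It is an illustration only: the registry URL and the name pattern are invented, and a real client would add authentication and error handling.

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import javax.xml.registry.*;
import javax.xml.registry.infomodel.Organization;

public class PartnerDiscovery {
    public static void main(String[] args) throws JAXRException {
        // Point the JAXR provider at the registry's query endpoint (hypothetical URL).
        Properties props = new Properties();
        props.setProperty("javax.xml.registry.queryManagerURL",
                "http://registry.example.com/ebxml/registry/soap");
        ConnectionFactory factory = ConnectionFactory.newInstance();
        factory.setProperties(props);
        Connection connection = factory.createConnection();

        RegistryService service = connection.getRegistryService();
        BusinessQueryManager queries = service.getBusinessQueryManager();

        // Contextual search: find organizations whose name matches a pattern.
        Collection<String> qualifiers =
                Collections.singleton(FindQualifier.SORT_BY_NAME_DESC);
        Collection<String> namePatterns = Collections.singleton("%Semiconductor%");
        BulkResponse response = queries.findOrganizations(
                qualifiers, namePatterns, null, null, null, null);

        for (Object found : response.getCollection()) {
            Organization org = (Organization) found;
            System.out.println("Candidate partner: " + org.getName().getValue());
        }
        connection.close();
    }
}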
5.4. Business Process Layer
Once the exchange partner is located, it is necessary to agree on the various exchange steps or scenarios, named BPs. A BP represents an ordered set of interactions, and an interaction is a basic exchange of messages between partners. Interaction types are specified in a declarative way by means of pre-defined constructs supporting common interaction patterns. For example, a typical interaction pattern is the request/response pattern: one partner sends a purchase order request and the other responds with an acknowledgement, meaning that the order is effective. Most BPs are represented by UML models using state, collaboration, and/or sequence diagrams, or by XML schemas specifying binary and multiple collaborations between partners. Currently, developers aim at creating a generic meta-model for BPs allowing the modeling of any computer-interpretable BP.

5.4.1. Objectives of the BP layer
The role and all the tasks of the company are represented by a set of processes. These processes are in close connection with the components of the information system, and for this purpose the integration of the various applications must be implemented correctly. The objective, however, is the piloting of the exchange model's architecture by business processes. This part of the model is fundamental because it is the basis of the autonomy and agility given to the company in the evolution of its information system. The assembly and dismantling of business functions are made available to users of the system without calling upon the usual supervision of the technical managers of the information system. The goal of including various tools and business process models in this layer is to provide a generic and flexible architecture. The characteristic of this multilevel model lies in the completeness and homogeneity of the tools and methods provided to create the necessary business processes. These tools are discussed in the case study section.

5.5. The Content Layer
The content layer specifies the structure and semantics of information items, including their possible refinement or composition and various constraints such as cardinality. Ontologies and PLM systems characterize this level. Two types of ontology information may be found in this layer: the business-oriented ontology, which identifies business properties that are independent of any particular product and needed to define transactions between trading partners, and the product-oriented ontology, which represents products by means of a set of classes, properties, domains of values, and instances of objects. It provides a shared common meaning of the products that can be used by two or more partners during exchange
(Tellmann and Maedche, 2003; Trastour et al., 2003). Several dictionaries can be used in this layer. For example, RosettaNet uses two separate dictionaries: one to represent the business ontology and the other to represent the product ontology (Jasper and Uschold, 1999; McGuinness, 1999). PLIB standard ontologies can be used to define both business and product ontologies, either in a single dictionary or in separate ones.

5.5.1. Objectives of the content ontology
Content management in companies and organizations has become a major requirement in recent years. From an architectural point of view, a content management system intended to be used by several heterogeneous infrastructures requires two sub-systems: the first is a backend internal to the company, and the second is shared by all the actors; the latter is called the ontology in our case. Indeed, the essential objective of integrating ontologies in the exchange model is to ensure reliable and consensual sharing and management of data during exchanges. Ontologies provide this because they allow a mature product characterization by ensuring a unique identification of concepts and their attributes; this identification guarantees that a concept referenced by the applications of various partners is unambiguous. Thus, a model integrating ontologies is characterized by the possibility of separating the contents from the container (the process), allowing separate development at each level in a flexible way. Once the tools and the environment to be used in each layer are defined, the integrated, complete system constitutes the operational B2B model in which transactions between partners can be started and performed automatically.

6. Case Study
Our work proposes the implementation and realization of a B2B platform based on the three layers described in Sec. 5. The next step is to give propositions for the three previously outlined levels and a practical solution implementing them:
1. In the discovering layer of our system, an ebXML registry is developed.
2. In the business process layer, a web service based on the PIP2A9 RosettaNet BP for product information query is implemented.
3. The content layer uses a PLIB (Pierra, 2000) model with its various tools (PLIBEditor and PLIBBrowser) to search and retrieve the product information content.
The following sections describe the several parts of the architecture in more detail.
6.1. ebXML Registry
The ebXML registry is central to the ebXML architecture (Kappel and Kramler, 2003). The registry manages and maintains the shared information as objects in a repository. Repositories provide trading partners with shared business semantics, such as BP models, core components, messages, trading partner agreements, schemas, and other objects that enable data interchange between companies. The ebXML registry is an interface for accessing and discovering these shared business semantics. In this section, we explain the registry usage, the business semantic model, and the registry functionality, including registry classification, registry client/server communications, searching for registry objects, and managing registry objects. Our registry implementation is based on the information in the primary ebXML registry reference documents, namely the ebXML Registry Services Specification (2002) and the ebXML Registry Information Model (2002).

In a marketplace populated by computer companies with proprietary hardware, operating systems, databases, and applications, ebXML gives business users and IT groups back control: the ebXML registry is not bound to a database product or a single hardware vendor, and it is designed to operate on all kinds of computers. An ebXML registry serves as the index and application gateway for a repository to the outside world. It contains the API that governs how parties interact with the repository, and it can also be viewed as an API to the database of items that support e-business with ebXML (Gerstbach, 2006). Items in the repository are created, updated, or deleted through requests made to the registry.

6.1.1. How the registry works
The complete registry system consists of both registry clients and registry services. All ebXML registry implementations have to support a minimal set of interfaces and corresponding methods as a standard interface. The server-side interfaces are the registry service, object manager, and object query manager interfaces; the client-side interface is the registry client interface. Registry applications can be written in a variety of programming languages, such as Java, Visual Basic, or C++. In our case, the Java language (Hunter and Crawford, 2002) is used with some additional tools: the JAXR and JAXM APIs, the Java XML APIs for registries and for messaging, respectively (Brydon et al., 2004).

There are a few generic objects in the registry design: registry entries, packages, classification nodes, external links, organizations, users, postal addresses, slots (for annotation purposes), and events (for auditing purposes). The registry entry is the basic registry unit for submitting and querying registry objects, containing a crucial reference to the actual document or data. Every repository item is described by a registry entry. For example, a Collaboration Protocol Profile (CPP), a document defining an enterprise profile, is stored in the repository; it has a registry entry that allows finding the actual CPP stored in the repository.
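To complement the discovery sketch in Sec. 5.3.1, the following fragment shows how a registry entry for an organization profile might be created and submitted through JAXR's life-cycle manager. Again this is a hedged sketch: the endpoint URLs are invented, and a real registry would require authenticated credentials before accepting submissions.

import java.util.Collections;
import java.util.Properties;
import javax.xml.registry.*;
import javax.xml.registry.infomodel.Organization;

public class RegistryPublishSketch {
    public static void main(String[] args) throws JAXRException {
        // Point the JAXR provider at the registry's query and publish endpoints
        // (both URLs are hypothetical).
        Properties props = new Properties();
        props.setProperty("javax.xml.registry.queryManagerURL",
                "http://registry.example.com/ebxml/registry/soap");
        props.setProperty("javax.xml.registry.lifeCycleManagerURL",
                "http://registry.example.com/ebxml/registry/soap");
        ConnectionFactory factory = ConnectionFactory.newInstance();
        factory.setProperties(props);
        Connection connection = factory.createConnection();

        // Credentials would be set on the connection here in a real client.
        RegistryService service = connection.getRegistryService();
        BusinessLifeCycleManager lifeCycle = service.getBusinessLifeCycleManager();

        // Create an organization profile and submit it as a registry object.
        Organization org = lifeCycle.createOrganization("Example Components Ltd.");
        BulkResponse response = lifeCycle.saveOrganizations(Collections.singleton(org));
        System.out.println("Submission status: " + response.getStatus());
        connection.close();
    }
}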
6.2. Web Service for the BP Layer
Web services are modular, self-describing applications that can be published and located anywhere on the web or on any local network. The provider and the consumer of an XML web service do not have to worry about the operating system, the language environment, or the component model used to create or to access the web service, as web services are based on ubiquitous and open internet standards such as XML, HTTP, and SMTP. WSDL (Web Services Description Language), an initiative from Microsoft and IBM to describe the messages between clients and the web server (Brydon et al., 2004), describes and defines web services. It helps the user to set up a system using a service, covering everything from connection details to message specification. A WSDL document defines services as a set of network endpoints (ports), each associated with a specific binding. A binding maps a specific protocol to a port type composed of one or more operations. In turn, these operations are composed of a set of abstract messages representing the data, and the pieces of data in a message are defined by types.

6.3. Designing our PIP2A9 Web Service
This section presents the mechanisms and the approach used to design the web service using PIP2A9 as the BP: first, the PIP2A9 role and tasks are introduced; then the matching between PIP2A9 and the web service is outlined. RosettaNet aims to align the BPs of supply chain partners. This goal is achieved by the creation of PIPs. Each PIP defines how two specific processes, running in two different partners' organizations, are standardized and interfaced across the entire supply chain. A PIP includes all the business logic, message flow, and message contents needed to align the two processes. RosettaNet defines more than 100 PIPs; the purpose of each is to provide common business/data models and documents enabling system developers to implement RosettaNet e-business interfaces. The PIP studied in this chapter is PIP2A9, Query Technical Product Information (RosettaNet, 2001c). Technical product information is the category of information that describes the behavioral, electrical, physical, and other characteristics of products. Numerous classes of customers within the supply chain need to be able to access technical product information, including distributors, information providers (such as web portal companies, other commercial information aggregators, and end-customer information system owners), engineering, design engineering, manufacturing, and test engineering.
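As a concrete illustration of how such a query BP surfaces as a web service, here is a minimal JAX-RPC style service endpoint interface. The interface and method names are invented for this sketch, and the message payloads are simplified to strings where a real service would use types generated from the PIP's XML schemas.

import java.rmi.Remote;
import java.rmi.RemoteException;

/**
 * Sketch of a JAX-RPC service endpoint interface for a technical product
 * information query. The interface plays the role of a WSDL port type,
 * the method is its single request/response operation, and the parameter
 * and return value stand in for the request and response messages.
 */
public interface ProductInformationPort extends Remote {

    /** Returns the technical product information matching the given identifier. */
    String queryTechnicalProductInformation(String productIdentifier)
            throws RemoteException;
}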
6.3.1. Matching between business process and web services
Owing to the similarity between BPs and web services, and to the small gap between them, a mapping between the two technologies can be defined; the result of this matching is shown in Fig. 3. Both BPs and web services make use of messages containing documents, and these perform the same function in both worlds. An atomic activity from a BP workflow corresponds to a single web service operation. An activity is wrapped into a process and is therefore more complex than an atomic activity. With web services, exchange operations take place between entities: processes and the system (e.g., connection details). In ebXML these entities are companies defined by CPPs, whereas in RosettaNet they are covered by internal processes that communicate with other internal processes defined elsewhere; on the web service side, ports and bindings are used to match this technical information. Finally, the concept of a service is defined as a collection of operations (port types) which are logically related by relationships other than a sequence as in a workflow; no corresponding concept exists in BPs.
Figure 3. Matching between business process and web service (business process concepts and their web service counterparts: internal process maps to port/binding; process to port type; activity to operation; message to message; document to document; business rules and business choreographies to the workflow language; the web service concept of a service has no business process counterpart).
6.4. RosettaNet Business Process
Building on the PIP2A9 BP introduced in Sec. 6.3, our work proposes a definition of a B2B architecture followed by an implementation of this platform based on the three technologies presented in the sections above. The next step uses these standards jointly to provide a secure, reliable, and interoperable architecture. Each partner providing a service must assign a URL to its BP. This URL is stored in a registry (a UDDI or ebXML registry); in our case, an ebXML registry is developed for this purpose (Glushko et al., 1999). The BP will be discovered and retrieved as a single web service or a set of web services. The activities undertaken during this step are:
1. The use of a RosettaNet BP (in our example, PIP2A9, a BP for technical product information queries)
2. The integration of PIP2A9 in the ebXML BP model
3. The development of a web service based on the resulting BP

6.5. Integrating PIP2A9 in the ebXML BPSS Model
The next task is the integration of the resulting PIP2A9 in the ebXML BP. This task can be achieved without error if the mapping between a RosettaNet PIP and the corresponding part of the ebXML BP is given correctly. A PIP corresponds to a binary collaboration or, more precisely, to a business transaction in the BPSS specification. A business transaction in ebXML is defined by a business transaction activity with a document flow exchange based, in general, on a request document and a response document.
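To illustrate this correspondence, here is a small sketch, with invented names rather than the BPSS schema itself, of a PIP registered as a business transaction activity inside a binary collaboration:

import java.util.ArrayList;
import java.util.List;

/** A BPSS-style business transaction activity: one request/response document flow. */
class BusinessTransactionActivity {
    final String name;              // e.g., the PIP it was derived from
    final String requestDocument;   // document type sent by the requester
    final String responseDocument;  // document type returned by the responder

    BusinessTransactionActivity(String name, String requestDocument,
                                String responseDocument) {
        this.name = name;
        this.requestDocument = requestDocument;
        this.responseDocument = responseDocument;
    }
}

/** A binary collaboration aggregating the transaction activities of two partners. */
class BinaryCollaboration {
    final List<BusinessTransactionActivity> activities = new ArrayList<>();

    public static void main(String[] args) {
        BinaryCollaboration collaboration = new BinaryCollaboration();
        // PIP2A9 integrated as a single business transaction activity.
        collaboration.activities.add(new BusinessTransactionActivity(
                "PIP2A9 Query Technical Product Information",
                "TechnicalProductInformationQuery",
                "TechnicalProductInformationResponse"));
        System.out.println(collaboration.activities.size() + " transaction(s) registered");
    }
}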
Figure 4. An integrated ebXML and PIP business process (the ebXML business process comprises descriptions of the exchanged business documents, the partners' roles, the transactions, and the collaborations; each collaboration corresponds to a RosettaNet PIP, and the business documents carry the content instances).
Therefore, each PIP in the RosettaNet model is integrated in the BPSS model as a business transaction activity. Figure 4 shows how the integration is realized using the ebXML and RosettaNet standards in the same process model: an ebXML business process is defined by a set of documents exchanged within collaborations, the collaborations represent PIPs, and the business documents define the contents. This model is implemented using web services technology (see Sec. 6.2) according to the transformation of Fig. 3, which maps the description of a BP onto a web service via a WSDL document. A PIP represents a document exchange as a request/response message pair, and each communication is described by a document which defines the content of both the request and the response.

6.6. The PLIB Ontology in the Content Layer
The third and last level of our architecture is the content layer. As shown previously, this layer defines the data and product information, for which the use of the PLIB ontology model is proposed. PLIB was launched at the ISO level in 1990. Its goal is to develop a computer-interpretable representation of parts library data to enable fully digital information exchange between component suppliers and
users. A PLIB data dictionary (ISO 13584-42, 2004; Schenck and Wilson, 1994) is based on object-oriented concepts (Coad and Yourdon, 1992): components are gathered into part families that are represented by classes. This set of classes is organized in a simple class hierarchy, on which factorization/inheritance applies. Such classes are then precisely described (textually, with technical drawings, etc.). Finally, each class is associated with a set of technical properties, also precisely described (domain of values, possible measurement unit, etc.). A basic idea in the definition of a PLIB dictionary is that properties and classes shall be defined simultaneously: the applicable properties allow a part family to be defined precisely and, conversely, a part family determines the meaning of a property in its particular context. The modeling formalism used in PLIB is the EXPRESS language (ISO 10303-11, 2004).

The ontology model of PLIB is formally defined, and a number of tools have been developed to create, validate, manage, or exchange ontologies; they are available on the PLIB server (http://www.plib.ensma.fr) at LISI/ENSMA. The basis of all these tools is a representation of the ontology in a processable exchange format, which can be generated automatically in various forms: an XML document (SimPLIB DTD), possibly associated with an XSL page (SimPLIBVIEWER), an EXPRESS physical file, or a DHTML document (PLIBBrowser). Finally, PLIBEditor makes it possible to create and publish an ontology (Schenck and Wilson, 1994; Westarp et al., 1999). PLIBEditor allows users to define and represent, graphically and simultaneously, an ontology and its instances. Its user interface contains two frames: the left frame allows the definition of classes, properties, and the relations between them, while the right frame defines the ontology population (instances). In other words, PLIBEditor handles both the data and the metadata of an ontology in the same application. PLIBBrowser has the same role as PLIBEditor, but with a web-oriented presentation.
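To make the class/property duality concrete, the following fragment sketches, in Java and with invented names, how a PLIB-style part family could be represented in memory: a simple class hierarchy in which each family carries precisely described technical properties (value domain and measurement unit). It is an illustration of the idea, not of the actual EXPRESS-based PLIB model.

import java.util.ArrayList;
import java.util.List;

/** A technical property, precisely described by its value domain and unit. */
class TechnicalProperty {
    final String code;        // unique identifier of the property
    final String definition;  // textual definition in its class context
    final String unit;        // possible measurement unit, e.g. "ohm"
    final double minValue;    // domain of values: lower bound
    final double maxValue;    // domain of values: upper bound

    TechnicalProperty(String code, String definition, String unit,
                      double minValue, double maxValue) {
        this.code = code;
        this.definition = definition;
        this.unit = unit;
        this.minValue = minValue;
        this.maxValue = maxValue;
    }
}

/** A part family: a class in a simple hierarchy, carrying its applicable properties. */
class PartFamily {
    final String code;                      // unique identifier of the class
    final PartFamily superFamily;           // simple hierarchy with inheritance
    final List<TechnicalProperty> properties = new ArrayList<>();

    PartFamily(String code, PartFamily superFamily) {
        this.code = code;
        this.superFamily = superFamily;
    }

    /** Properties applicable to this family, including inherited ones. */
    List<TechnicalProperty> applicableProperties() {
        List<TechnicalProperty> all = new ArrayList<>(properties);
        if (superFamily != null) {
            all.addAll(superFamily.applicableProperties());
        }
        return all;
    }
}

A "resistor" family defined under a "passive component" family would, for instance, add a "resistance" property measured in ohms while inheriting the properties already defined at the parent level, mirroring the factorization/inheritance described above.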
7. The Developed Platform
As set out in Sec. 5.1, the suggested model aims at an adaptive architecture which can be used as a horizontal or a vertical system. Figure 5 shows the resulting infrastructure, in which each of the main objectives of the architecture is realized by a concrete component.
Figure 5. The resulting infrastructure.
The main objectives of this architecture, and the components realizing them, are:
— The description of an infrastructure that proposes an intelligent module for discovering and localizing partners who propose services and e-business catalogues; the best-known standards here are UDDI (Dogac et al., 2002) and the ebXML registry (1 in Fig. 5).
— The proposition of a business process ontology model based on existing standards such as the PIPs of RosettaNet (RosettaNet, 2001c), ebXML BPSS (ebXML, 2005), BPMN (Business Process Modeling Notation), and BPEL (Business Process Execution Language) (2 in Fig. 5).
— Finally, the integration of existing industrial catalogues of components describing objects or services from several industrial sectors; among them we find the RNTD and RNBD of RosettaNet (RosettaNet, 2001b) and PLIB (ISO 13584-42, 2004) (3 in Fig. 5).
These components together realize the open architecture, introduced in Sec. 5.1, that allows the integration of different technologies and knowledge coming from heterogeneous communities.
8. Conclusion
This chapter has shown that in most exchange architectures two categories of ontologies are used: ontologies of products and services, and ontologies of business processes. Involving these two kinds of ontologies simultaneously in a business transaction enables secure, automatic, and reliable B2B exchange. Based on these two kinds of ontologies and on the standards mentioned above, a new architecture has been proposed. The use of ontologies to describe content provides a uniform representation of data in a common and shared format. Moreover, we have shown how a B2B architecture can be abstracted and represented by an adaptive infrastructure based mainly on three layers; these three parts define the major concepts required for exchange between companies. Furthermore, the chapter has shown the need for a common shared ontology for the business processes used between partners. We have developed a web service platform as a kernel for BPs, integrated into the proposed architecture in order to allow independence and scalability. Our case study demonstrates the possibility of integrating several parts of several standards in the same architecture simultaneously: we used ebXML and RosettaNet to design the BP ontology, a PLIB dictionary to describe the content ontology, and the ebXML registry as a directory to localize and retrieve the services and data of partners. For the future, we suggest developing a new collaboration model (Khelifa et al., 2008) based on Grid Services technology to be added to the architecture, and studying the possibility of integrating other ontology formalisms, such as OWL-S and DAML-S, into our platform.

References
Aberdeen Group (2001). The fast track to positive e-business ROI. Trading Community Management, June.
Aklouf, Y and H Drias (2007). Business process and Web services for a B2B exchange platform. International Review on Computers and Software (IRECOS), Praise Worthy Prize.
Aklouf, Y, G Pierra, Y Ait Ameur and H Drias (2003). PLIB ontology for B2B electronic commerce. 10th ISPE International Conference on Concurrent Engineering: Research and Applications, Carlton Madeira Hotel, Madeira Island, Portugal, July 26–30.
Aklouf, Y, G Pierra, Y Ait Ameur and H Drias (2005). PLIB ontology: A mature solution for products characterization in B2B electronic commerce. Special Issue: E-Business Standards, International Journal of IT Standards and Standardization Research, 3(2).
Aklouf, Y and H Drias (2008). An adaptive e-commerce architecture for enterprise information exchange. International Journal of Enterprise Information Systems, 4(4), 15–33.
Annie Becker, S (ed.) (2008). Electronic Commerce: Concepts, Methodologies, Tools and Applications, 4 vols. Idea Group Publishing, February.
BIC (2001). XML convergence workgroup: High-level conceptual model for B2B integration. Business Internet Consortium, Version 1.0, October 5.
Brydon, S, G Murray, V Ramachandran, I Singh, B Stearns and T Violleau (2004). Designing Web Services with the J2EE 1.4 Platform: JAX-RPC, SOAP, and XML Technologies. Sun Microsystems, January.
Coad, P and E Yourdon (1992). Object-oriented Analysis. Englewood Cliffs, NJ: Prentice Hall.
Dogac, A, I Cingil, GB Laleci and I Kabak (2002). Improving the functionality of UDDI registries through Web service semantics. Proceedings of the 3rd VLDB Workshop on Technologies for E-Services (TES-02), Hong Kong, China, August.
ebXML Registry Information Model (2002). Retrieved from http://www.ebxml.org/specs/ebRIM.pdf.
ebXML Registry Services Specification Version 2.1 (2002). Retrieved from http://www.ebxml.org/specs/ebRS.pdf.
ebXML (2001). ebXML technical architecture specification, May 11. Retrieved from http://www.ebxml.org.
ebXML (2005). Business process team. Business Process Specification Schema v2.0, February.
Gerstbach, P (2006). ebXML vs. Web Services: Comparison of ebXML and the combination of SOAP/WSDL/UDDI/BPEL. http://www.gerstbach.at/2006/ebxml-ws/ebxmlws.pdf.
Glushko, RJ, JM Tenenbaum and B Meltzer (1999). An XML framework for agent-based e-commerce. Communications of the ACM, 42(3).
Hunter, J and W Crawford (2002). Servlet Java: Guide du programmeur. Editions O'Reilly.
ISO 10303-11 (2004). Industrial automation systems and integration — Product data representation and exchange — Part 11: Description methods: The EXPRESS language reference manual.
ISO 13584-42 (2004). Industrial automation systems and integration. Parts library — Methodology for structuring parts families. ISO, Geneva.
Jasper, R and M Uschold (1999). A framework for understanding and classifying ontology applications. In B Gaines, R Cremer and M Musen (eds.), Proceedings of the 12th International Workshop on Knowledge Acquisition, Modeling, and Management (KAW'99), 16–21 October 1999, Banff, Alberta, Canada, Volume I, pp. 4-9-1–4-9-20. Calgary: SRDG Publications, University of Calgary.
Kappel, G and G Kramler (2003). Comparing WSDL-based and ebXML-based approaches for B2B protocol specification. With M Bernauer, Business Informatics Group, Vienna University of Technology, Austria.
Khelifa, L, Y Aklouf and H Drias (2008). Business process collaboration using Web services resource framework. In Proceedings of the International Conference on Global Business Innovation and Development (GBID), Rio de Janeiro, Brazil, 16–19 January, pp. 184–192.
McGuinness, DL (1999). Ontologies for electronic commerce. Proceedings of the AAAI '99 Artificial Intelligence for Electronic Commerce Workshop, Orlando, Florida, July.
Peltz, C and J Murray (2004). Using XML schemas effectively in WSDL design. Proceedings of the Software Development Conference and Expo, March.
Pierra, G (2000). Représentation et échange de données techniques. Méc. Ind., 1, 397–400.
Pierra, G, H Dehainsala, Y Ait-Ameur and L Bellatreche (2005). Base de données à base ontologique: Principes et mise en oeuvre. Ingénierie des Systèmes d'Information, Hermès, 91–115.
Randy, EH (2001). W3C Web service position paper from Intel. W3C Workshop, April 12.
RosettaNet (2001a). RosettaNet architecture conceptual model, July.
RosettaNet (2001b). RosettaNet implementation framework: Core specification. Version Validated 02.00.00, July 13.
RosettaNet (2001c). Specification: PIP specification cluster 2: Product information, segment A: Preparation for distribution, PIP2A9: Query technical product information. Validated 01.01.01, November 1.
Rousset, MC, A Froidevaux, H Gagliardi, F Goasdoué, C Reynaud and B Sfar (2005). Construction de médiateurs pour intégrer des sources d'information multiples et hétérogènes. Picsel Revue, 2(1), 9–59.
Schenck, D and P Wilson (1994). Information Modelling the EXPRESS Way. Oxford University Press.
Spek, AWA (2002). Designing Web services in a business context. Master's Thesis, Tilburg University, Center of Applied Research, September.
Souris, P (2006). L'échange de données informatisées. LanDesk Solution. http://www.declic.net.
Sounderpandian, J and T Sinha (2007). E-business Process Management: Technologies and Solutions. Idea Group Publishing, 322 pages. ISBN 1599042061.
Tellmann, R and A Maedche (2003). Analysis of B2B standards and systems. SWWS, Semantic Web Enabled Web Services.
Trastour, D, C Preist and D Colemann (2003). Using semantic Web technology to enhance current business-to-business integration approaches. Proceedings of the Seventh IEEE International Enterprise Distributed Object Computing Conference (EDOC'03).
United Nations (1999). UN/EDIFACT directory. Retrieved from http://www.unece.org/trade/untdid.
Westarp, FV, T Weitzel, P Buxmann and W König (1999). The status quo and the future of EDI — Results of an empirical study. Proceedings of the European Conference on Information Systems (ECIS'99).
Biographical Note
Youcef Aklouf (PhD) is an Associate Professor in the Computer Science department of the University of Science and Technology Houari Boumediene (USTHB) and a member of the Artificial Intelligence Laboratory. He received his PhD from USTHB in April 2007 and from the University of Poitiers, France, in June 2007, where he was a member of the data engineering team of LISI/ENSMA. Aklouf received his engineering and MS degrees in computer science from the University of Science and Technology of Algiers (USTHB) in 1998 and 2002, respectively. He also teaches several courses at USTHB: compilers, databases, algorithmics, web programming tools, operating systems, etc. His research areas include e-commerce, business-to-business exchange, web services, ontologies, multi-agent systems, and Grid services.
Chapter 37
The Roles of Computer Simulation in Supply Chain Management HONGYU JIA and PENG ZUO Economics and Management College, Dalian Maritime University, P.C. 116026, Linghai Road 1, Dalian, Liaoning Province, P.R. China E-mail: [email protected]; zuopeng [email protected]
Supply chain information is crucial in supporting supply chain management (SCM) and decision making. Computer simulation is a suitable approach for modeling supply chain systems, which contain a great many complicated and uncertain elements. This chapter focuses on introducing a simulation approach to model supply chain business processes, especially information flow, material flow, and capital flow, and on explaining how to use simulation results to support supply chain decisions.
Keywords: Computer simulation; supply chain management; simulation modeling.
1. Introduction
Over the last few decades, business environments have been changing from mass production to customization, and from technology- and product-driven to market- and customer-driven. Leading industries worldwide are focusing on integrating, optimizing, and managing their entire supply chains, from component sourcing through production, inventory management, and distribution to final customer delivery. Hence the term supply chain management (SCM) gained popularity, and increased competition and a trend toward global sourcing have forced companies to improve the coordination of business functions across partners within the supply chain. Since problems in SCM encompass a large variety of decisions regarding logistics network configuration, inventory planning and control, transportation analysis, etc., information technology is a vital component and an important enabler for making material, information, and financial flows run more efficiently. The abundance of data, and the savings that arise from sophisticated analysis of these data, are factors that have initiated the current interest in SCM.

Many problems in SCM are too complex to be handled by humans, but not so well-defined that they could be performed entirely by computers; decision support
systems are designed to assist the human decision maker. These systems implement, for example, analytical tools using methods of operations research, simulation, and flow analysis. In this chapter, computer simulation technology is introduced as a decision support tool to help supply chain managers analyze the performance of supply chain business processes and to support supply chain decisions.

The organization of this chapter is as follows. The second section discusses where computer simulation technology can be used in SCM. Fundamental simulation concepts and methodology are introduced in Sec. 3. The following section presents the detailed requirements for supply chain simulation modeling. Section 5 surveys the simulation tools referred to in research papers and gives a brief introduction to simulation software. A supply chain simulation case is presented in Sec. 6, and Sec. 7 presents some recent research and new technologies for simulation modeling.
2. Simulation Opportunities in SCM
There are many definitions of SCM. For example, Christopher (1998) defined SCM as "the management of upstream and downstream relationships with suppliers and customers to deliver superior customer value at less cost to the supply chain as a whole." By this definition, SCM should integrate all the activities within the supply chain into a seamless process. Flow in a supply chain typically includes aspects of purchasing, manufacturing, capacity planning, operations management, production scheduling, manufacturing requirements planning, distribution system planning, transportation systems, warehousing and inventory systems, and demand input from sales and marketing activities. A supply chain can range from a single product line or production plant to a complex chain that includes multiple demand sources, suppliers, distribution means, or factories. On the other hand, supply chain enterprises all face constantly changing internal and external business conditions, such as changing operations (production lines, factories, warehouses, and distribution systems opening, closing, expanding, or contracting), complex systems integration, changing inventory policies, increasing customer expectations, demands for higher profits, and many more. This brings considerable complexity to SCM, due to the interdependencies in a supply chain and the competitive environment (Chatfield et al., 2008).

Simulation modeling is a suitable tool for analyzing supply chains. Its capability to capture uncertainty and complexity makes it attractive for this purpose: simulation can model the corporate dynamics of the complete supply chain, from source to user, throughout the plan, source, make, and deliver processes. This is mainly due to the statistical nature of simulation tools. The output is both statistical data and a performance animation with which to view the system dynamics. Simulation can reliably capture and predict the effect of multiple changes in corporate systems.
From a supply chain lifecycle perspective, this section describes opportunities for simulation to play a role in SCM.

2.1. The Use of Simulation in Supply Chain Reengineering and Designing
Supply chain business process reengineering and cooperative integration have been hotspots in the SCM field for many years. Many large enterprises, such as Wal-Mart and IBM, have reengineered their global supply chains to achieve quick responsiveness to their customers with minimal inventory. To support this effort, IBM (Lin et al., 2000) developed a series of supply chain analysis tools, which were later made into an operational supply chain simulator. In general, during the supply chain design phase, simulation can be used to evaluate different configurations of a supply chain: configurations formed with different manufacturing and logistics companies as supply chain members, and configurations with different locations of manufacturing and distribution facilities. The models can also support the evaluation of different product configuration postponement decisions. More often than any of the preceding decisions, the supply chain partners may use simulation for establishing desired inventory levels at different stages of the supply chain, such that performance can be maintained within an established range in the face of inherent uncertainties. The partners can set service level goals and determine the inventory levels at successive stages that will allow them to achieve those goals given the uncertainties at each stage. Some opportunities for simulation in the supply chain design or business integration phase are:
• Perform simulation and scenario analysis to validate an existing supply chain and identify shortcomings and opportunities for redesign.
• Investigate the impact of major demand changes on supply chain components.
• Investigate the impact of new and innovative ways of setting up and operating a large supply chain.
• Investigate the impact of eliminating an existing infrastructure component, or adding a new one, to an existing supply chain.

2.2. The Use of Simulation in Supply Chain Operation Controlling
Supply chain simulation models can provide valuable support during the operational phase of the supply chain. The use of simulation during this phase can support supply chain planning and scheduling as well as supply chain execution. Simulation can be used in a generation and evaluation role for establishing the production and logistics plans and schedules to meet long- and short-term demands. For supply chain execution, simulation can be used for event management by evaluating the alternative courses of action available on the occurrence of an interruption. Distributed simulation (DS) is well suited for this role, as it allows speedy execution and allows supply
chain partners to maintain their proprietary data (Gan et al., 2000). The following list provides some key opportunities for supply chain operation management:
• Investigate the impact of changing operational strategies in a supply chain, due to major shifts in products, processes, location, use of facilities, etc.
• Investigate the impact of production or outsourcing strategies, based on the trade-offs between inventory impact and other supply chain metrics.
• Investigate the relationships between suppliers and other critical components of a supply chain by rationalizing the number and size of supply points based on total costs, quality, flexibility, responsiveness, etc.
• Investigate inventory replenishment policies.

2.3. The Use of Simulation in Product Service
During the supply chain termination phase, simulation models can be used for evaluating product phase-out and phase-in plans. Alternative plans for "emptying the pipeline" can be evaluated. The plan for the phased shutdown of manufacturing and distribution facilities can be evaluated for its volume and cost impact. The ability of the supply chain to provide service parts for maintenance and repair operations can be verified and the cost impact quantified. Each of these stages is individually an area of opportunity where a simulation model can be used to test different scenarios and consider multiple solutions quickly, for purposes of resource capacity planning, bottleneck analysis, cycle-time reduction (cost cutting), etc.

SCM is an important competitive advantage for companies these days, and improving supply chain efficiency has become a matter of survival for many of them. This presents an opportunity for simulation to play a decision support role for management by performing "what-if" analyses. Simulation for supply chains is currently being used by companies all over the world with much success, and the opportunities to cut costs are tremendous; such simulation success stories can be found on many simulation software vendors' Web sites.

3. Introduction of Computer Simulation
This section explains the concept of computer simulation, the process of modeling, and some simulation modeling methods, including discrete event system simulation, continuous system simulation, and hybrid simulation.

3.1. Computer Simulation
Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation
of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or a substitute for, modeling systems for which simple closed-form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible. Computer models were initially used as a supplement to other arguments, but their use later became widespread. A computer simulation, computer model, or computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of the mathematical modeling of many natural systems in physics (computational physics), chemistry, and biology; of human systems in economics, psychology, and social science; and of the process of engineering new technology, in order to gain insight into the operation of those systems or to observe their behavior (Strogatz, 2007).

Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for days. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks, and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program (Jet Propulsion Laboratory, 1997); other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the ribosome, the complex that makes protein in all organisms, in 2005 (Nancy, 2005); and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level (EPFL, 2005).

3.2. Process of Modeling
Any real-life system studied by simulation techniques (or, for that matter, by any other OR model) is viewed as a system. A system, in general, is a collection of entities which are logically related and which are of interest to a particular application. The following features of a system are of interest (Strogatz, 2007):
• Environment: Each system can be seen as a subsystem of a broader system.
• Interdependency: No activity takes place in total isolation.
• Subsystems: Each system can be broken down into subsystems.
• Organization: Virtually all systems consist of highly organized elements or components, which interact to carry out the function of the system.
• Change: The present condition or state of the system usually varies over a long period.
When building a simulation model of a real-life system under investigation, one does not simulate the whole system. Rather, one simulates those subsystems which are related to the problems at hand. This involves modeling parts of the system at various levels of detail, which can be depicted graphically using Beard's managerial pyramid, as shown in Fig. 1. The collection of blackened areas forms those parts of the system that are incorporated in the model.

Figure 1. Beard's managerial pyramid. (This figure is cited from Harry Perros's textbook, available for free download from his Web site: http://www.csc.ncsu.edu/faculty/perros//simulation.pdf.)

A simulation model is, in general, used to study real-life systems which do not currently exist. In particular, one is interested in quantifying the performance of a system under study for various values of its input parameters. Such quantified measures of performance can be very useful in the managerial decision process. The basic steps involved in carrying out a simulation exercise are depicted in Fig. 2.

Figure 2. Basic steps involved in carrying out a simulation study (define the problem; design the simulation experiments, considering alternatives; formulate and combine the submodels; collect the data; write and debug the simulation program; validate the model, returning to earlier steps if necessary; run the simulator; analyze the data and the results; implement the results).

All the relevant variables of a system under study are organized into two groups: those which are considered as given and are not to be manipulated (uncontrollable variables), and those which are to be manipulated so as to arrive at a solution (controllable variables). The distinction between controllable and uncontrollable variables mainly depends upon the scope of the study. Another characterization of the relevant variables is whether or not they are affected during a simulation run. A variable whose value is not affected is called exogenous. A variable having a value determined by other variables during the course of the simulation is called endogenous. For instance, when simulating a single-server queue, the following variables may be identified and characterized accordingly.

Exogenous variables:
• The time interval between two successive arrivals
• The service time of a customer
• The number of servers
• The priority discipline

Endogenous variables:
• The mean waiting time in the queue
• The mean number of customers in the queue

The above variables may be controllable or uncontrollable depending upon the experiments we want to carry out. For instance, if we wish to find the impact of the number of servers on the mean waiting time in the queue, then the number of servers becomes a controllable variable; the remaining variables, the time interval between two arrivals and the service time, remain fixed (uncontrollable variables). Some of the variables of the system that are of paramount importance are those used to define the status of the system. Such variables are known as status variables and form the backbone of any simulation model. At any instant during a simulation run, one should be able to determine how things stand in the system using these variables. Obviously, the selection of these variables is affected by what kind of information regarding the system one wants to maintain. We now proceed to identify the basic simulation methodology by means of a few simulation examples.
3.3. Simulation Modeling Methodology
Simulation models developed for studying supply chains generally have two main characteristics by which they can be classified. First, they simulate supply chain operations as either discrete or continuous systems (Jet Propulsion Laboratory, 1997).
• A discrete system is one in which the state variables change instantaneously at separated points in time. Examples of discrete simulation systems in supply chain studies include Wei and Krajewski (2000) and Sivakumar (2001).
• A continuous system is one in which the state variables change continuously with respect to time (Harry, 2008). Examples of continuous simulation systems include Homem-de-Mello et al. and Tagaras and Vlachos.
As few cases in the real world are solely discrete or continuous systems, the classification largely depends on the researcher's analytical perspective. Secondly, the models follow either an event orientation, an activity scanning orientation, or a process orientation. These characteristics are helpful for researchers in simulating and monitoring supply chain activities.

4. Requirements for Supply Chain Simulation
This section lists the modeling requirements necessary for accurately analyzing supply chain issues using simulation technology.

4.1. Data
General-purpose business process simulation tools typically do not support detailed supply chain data structures. Table 1 gives an example of data structures that are often critical to supply chain modeling.
Table 1. Example of Data Requirements for a Supply Chain Model.

Manufacturing process information: manufacturing process data (number of machines in each process, alternate routes); calendar data (shift, holiday, and preventive maintenance information); product definitions; machine data (number of machines, alternate resources); bill-of-material structure.
Manufacturing time information: manufacturing time data (process time, queue time, set-up time); machine time data (mean time to failure, mean time to repair, preventive maintenance time).
Inventory control policies information: storage space (definitions including storage size and associated costs); safety stock level and reorder point; inventory levels of finished products, raw material, and intermediate parts; stock locations on the shop floor; lot sizes for inventory replenishment.
Procurement and logistics information: supplier lead time; supply constraints (supply lot size, external supply over a period, supplier capacity, procurement horizon, procurement time); locations of customers, distribution centers, and manufacturing sites; routes between locations and the transport times between them.
Demand information: customer classes (describing customer service requirements for different types of users); due date; priority; start and end dates; demand pattern.
Policies/strategies information: order control policies; dispatch policies.
4.2. Processes
The Supply Chain Operations Reference (SCOR) model (http://www.supplychain.org) provides a starting point for building a simulation model of a supply chain. The SCOR model identifies five fundamental SCM processes: plan, source, make, deliver, and return. It is extremely useful to model these fundamental processes within the context of well-known supply chain business functions, and the functional processes of the SCOR model are sufficient to model a variety of supply chain issues across many industries.
It is important to understand the scope of each fundamental process with respect to the business functions. The plan process can apply to a single business function
or to a set of business functions. For example, a manufacturing function may plan only its own activities based on inputs it receives from other business functions in its supply chain. In other cases, planning may be performed across business functions in an attempt to maximize overall supply chain value. For this reason, three pure planning functions have been included in our list of business functions. The other fundamental processes (source, make, deliver, and return) normally apply to only a single business function.
For modeling purposes, one can parameterize each business function in terms of the fundamental processes it executes (a code sketch following the list restates this parameterization). The following descriptions provide a high-level overview:
• Customer: This business function represents end customers that issue orders to other business functions. Customer functions execute the fundamental processes plan, source, and return. Orders are generated on the basis of customer demand, which may be modeled as a sequence of specific customer orders (possibly obtained from historical records) or as aggregated demand over a period (which must be randomly disaggregated during a simulation run). The customer function may also specify the desired due date, service level, and priority for orders. Customer functions may send forecasts of future demand to other business functions.
• Manufacturing: This business function models assembly and maintains raw material and finished goods inventory. Manufacturing executes the fundamental processes plan, source, make, deliver, and return. Note that one manufacturing function can supply another manufacturing function, so there is no need for a distinct function to model suppliers. A manufacturing function makes use of modeled information such as the types of manufactured products, their manufacturing cycle times, bills of material, manufacturing and replenishment policies for components and finished goods, reorder points, storage capacity, manufacturing resources, material handling resources, and order queuing policies.
• Distribution: This business function models distribution centers and warehouses, including finished goods inventory and material handling. Distribution functions execute the fundamental processes plan, source, deliver, and return. A distribution model typically includes inventory replenishment policies, reorder points, storage capacity, material handling resources, and order queuing policies.
• Retail: This business function models retail stores, including finished goods inventory and material handling. Retail stores execute the fundamental processes plan, source, deliver, and return. A retail model typically includes inventory replenishment policies, safety stock policies, reorder points, material handling resources, backroom storage capacity, and shelf space.
• Transportation: This business function models transportation types (e.g., trucks, planes, trains, and boats), cycle time between shipping locations, vehicle loading, and transportation costs. Transportation executes the fundamental processes plan, deliver, and return. A transportation model typically includes order batching policies (by weight or volume), material handling resources, and transportation resources.
• Inventory Planning: This business function models the periodic setting of inventory target levels. Inventory planning executes the fundamental process plan. This business function may link to an optimization program that computes recommended inventory levels based on desired customer serviceability, product lead times, and other considerations.
• Forecasting: This business function models product sales forecasts for future periods. Forecasting executes the fundamental process plan. This business function may link to an optimization program.
• Supply Planning: This business function models bill-of-material explosion and the allocation of production and distribution resources to forecasted demand under capacity and supply constraints. Supply planning executes the fundamental process plan. This business function may link to an optimization program.
In a supply chain, it is important to distinguish between execution and planning processes. Execution processes are driven by plans and policies generated by planning processes. Both information and physical goods enter and leave execution processes; planning processes deal only with information, not physical goods. Three of the business functions listed above are pure planning functions: forecasting, inventory planning, and supply planning. The other business functions can have a mixture of execution and planning processes.
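As a sketch of this parameterization (Python; the representation is ours, not taken from any particular simulation tool), each business function can simply carry the set of fundamental processes it executes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessFunction:
    name: str
    processes: frozenset  # subset of the five fundamental SCOR processes

P, S, M, D, R = "plan", "source", "make", "deliver", "return"

FUNCTIONS = [
    BusinessFunction("customer",           frozenset({P, S, R})),
    BusinessFunction("manufacturing",      frozenset({P, S, M, D, R})),
    BusinessFunction("distribution",       frozenset({P, S, D, R})),
    BusinessFunction("retail",             frozenset({P, S, D, R})),
    BusinessFunction("transportation",     frozenset({P, D, R})),
    BusinessFunction("inventory planning", frozenset({P})),
    BusinessFunction("forecasting",        frozenset({P})),
    BusinessFunction("supply planning",    frozenset({P})),
]

# The pure planning functions are exactly those executing only "plan":
pure_planning = [f.name for f in FUNCTIONS if f.processes == {P}]
print(pure_planning)  # ['inventory planning', 'forecasting', 'supply planning']
```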
4.3. Entities
In a simulation model, the items that enter and leave business processes are often referred to as entities or artifacts. The following entities are specific to supply chain processes:
• Request orders represent customer or replenishment orders for physical goods. These entities carry order information from customers to manufacturing and distribution functions, and from manufacturing and distribution functions to other manufacturing and distribution functions.
• Filled orders represent customer or replenishment orders for which physical goods have been provided. These entities carry the ordered physical goods from manufacturing and distribution functions to customer, manufacturing, and distribution functions. Filled orders may pass through transportation functions, where aggregation and transport occur.
• Shipments represent a group of filled orders in transport. These entities carry filled orders from transportation functions to customer, manufacturing, and distribution functions.
• Forecasts represent demand forecasts for customer and replenishment orders. These entities often carry demand forecast information from forecasting functions to supply planning, manufacturing, and distribution functions. It is also possible for a customer, manufacturing, or distribution function to have its own local forecasting process; if such a function shares its forecasts with other supply chain functions, it does so by sending forecast entities.
• Supply plans represent production and procurement plans generated by a supply planning function, often based on forecast information. These entities usually carry information from supply planning functions to distribution and manufacturing functions.

4.4. Resources
The resource models provided by general-purpose business process simulators are often useful for supply chain simulation. As cycle time and resource cost are key metrics in both business process and supply chain simulations, business process resource definitions can sometimes be reused for supply chain simulation. However, additional parameters and constructs are often needed to model the following supply chain resources:
• Storage resources model the cost and capacity of the space where manufacturing, distribution, and transportation functions store physical goods.
• Material handling resources model the cost and capacity of personnel and equipment used to move physical goods within manufacturing, distribution, and transportation functions.
• Manufacturing resources model the cost and capacity of personnel and equipment used to manufacture physical goods in manufacturing functions.
• Transportation resources model the cost and capacity of vehicles such as trucks, trains, and ships in transportation functions.
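A sketch of how the entities of Sec. 4.3 and the resources of Sec. 4.4 might be represented as plain data carriers (Python; the field names are illustrative assumptions, not from the source):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RequestOrder:      # order information only; no physical goods yet
    order_id: int
    sku: str
    quantity: int
    due_date: float
    priority: int = 0

@dataclass
class FilledOrder:       # an order for which physical goods have been provided
    request: RequestOrder
    filled_quantity: int

@dataclass
class Shipment:          # a group of filled orders in transport
    orders: List[FilledOrder] = field(default_factory=list)

@dataclass
class Forecast:          # demand forecast information for one period
    sku: str
    period: int
    quantity: float

@dataclass
class Resource:          # storage, material handling, manufacturing, or transport
    kind: str
    capacity: float
    cost_per_unit_time: float
```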
4.5. Variables
A variable (or global variable) is a piece of information that reflects some characteristic of the system as a whole, regardless of how many or what kinds of entities are around. A model can contain many different variables, but each one is unique. There are two types: simulator built-in variables (number in queue, number of busy resources, simulation time, etc.) and user-defined variables (number in system, current shift, etc.). In contrast to attributes, variables are not tied to any specific entity; they pertain to the system at large. They are accessible by all entities, and many of them can be changed by any entity. If attributes are tags attached to the entities currently floating around in the room, variables are (erasable) writing on the wall.
Variables serve many purposes. For instance, the time to move between any two stations might be the same throughout the model; a variable called transfer time could be defined, set to the appropriate value, and then used wherever this constant is needed. In a modified model where the constant differs, only the definition of transfer time needs to change for the new value to take effect throughout the model. Variables can also represent something that changes during the simulation, such as the number of parts in a certain subassembly area of a large model, incremented by a part entity when it enters the area and decremented when it leaves. Simulator variables can be vectors or matrices if it is convenient to organize the information as lists or two-dimensional tables of individual values.
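The two uses of variables just described, restated as a sketch in plain Python rather than simulator syntax (the names transfer_time and the parts counter mirror the text's examples):

```python
# A constant-style global variable: change it in one place,
# and every station-to-station move in the model uses the new value.
transfer_time = 2.5  # e.g., minutes between any two stations

# A state-style global variable: incremented and decremented by entities.
parts_in_subassembly_area = 0

def part_enters_area():
    global parts_in_subassembly_area
    parts_in_subassembly_area += 1

def part_leaves_area():
    global parts_in_subassembly_area
    parts_in_subassembly_area -= 1

# A vector variable, organizing related values as a list:
inventory = [120, 80, 140, 65, 85]  # e.g., one slot per retailer
```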
5. Supply Chain Simulation Tools: Introduction and Selection
A growing number of simulation tools have appeared in recent years; those suited to supply chains include Arena, Simul8, eMPlant, and Extend. This section describes and compares these tools and reports a simulation software evaluation exercise conducted to suggest possible software platforms for the case study in Sec. 6.

5.1. Literature Survey on Discrete-Event Simulation Software
Several researchers and organizations run recurring surveys or comparisons of simulation software, among them the Institute for Operations Research and the Management Sciences (OR/MS Today; Varma et al., 2007) and ARGE Simulation News (http://www.argesim.org/comparison/index.html). OR/MS Today surveys simulation software biennially. These surveys summarize important product features of the software packages, including price; the information is provided by the vendors in response to a questionnaire developed by James Swain. Simulation News Europe (Sivakumar, 2001) has also published a number of case studies that provide solutions based upon two or more simulation products.

Figure 3. Papers referring to simulation software.
Two of the case studies are of particular relevance: one, on a "Flexible Assembly System," compares features for submodel structures, control strategies, and optimization of process parameters; another, on "SCM," addresses discrete modeling and simulation.
A selection methodology developed for the Accenture worldwide simulation team provides a framework in which users can evaluate products in seven areas (e.g., model development, animation) as the basis for deciding upon a suitable tool (Tewoldeberhan et al., 2002). The user scores competing products in the seven areas and uses those scores as the basis for an informed decision about product choice. In addition, Schriber and Brunner (2003) give simulation practitioners and consumers a grounding in how discrete-event simulation software works, and April et al. (2003) provide a practical introduction to simulation optimization, explaining how different optimization methods work and are implemented. More detailed information on individual simulation software products can be found in software overviews, introductions, invited papers, presentations, and product Web sites.
The 2003 OR/MS Today survey (the sixth biennial survey) lists 48 products (some are different versions from the same vendor), and the Simulation News Europe Web site lists 32 products related to supply chain/logistics (some overlapping with the 48); other papers and Web searches produced additional candidates. To reduce this initial list to a short list of the most suitable packages, a three-stage process was applied:
• initial cut-off;
• selection of the packages best suited to supply chains;
• detailed evaluation of the short-listed software.
The result is shown in Table 2.

5.2. Simulation Software Selection Conclusion
From Table 2, the five packages with the highest scores were retained. Taking into consideration other factors such as familiarity and user base, the five products Arena, Quest, Witness, Extend, and ProModel all meet the main requirements for supply chain simulation, with Arena slightly ahead of the rest. The authors thus suggest that one of these five be selected.

5.3. An Introduction to Arena
Arena is an object-based, hierarchical modeling tool that addresses a wide range of applications. It gives the simulation analyst a decision-support tool that combines the capabilities and power of a simulation language with an easy user interface for model building.
Table 2. Comparison of Simulation Software. (Ten packages are scored against weighted evaluation criteria: vendor; model development and input; simulation and optimization engine; execution; animation; testing and efficiency; output; and experimental design. The packages compared are AnyLogic, Arena, AutoMod, Enterprise Dynamics, Extend, Flexsim, ProModel, Quest, Simul8, and Witness. Note: weighting is on a 0–10 scale and individual scores are on a 0–3 scale.)
Arena is a comprehensive toolset that spans the scope of a simulation project from input data analysis to the analysis and presentation of model results. Through the use of Application Solution Templates, the analyst works in a simulation environment that defines the personality of Arena for him or her. Templates provide an environment that can be customized to represent the analyst's industry or application; their use also facilitates enterprise-wide simulation by making the power of simulation available to a wider audience of potential users.
A search of the Internet will show a range of products all claiming to do the same thing: make your work easier, improve your productivity, and save you money. But what really sets one product apart from the rest? Rockwell conducted a search to assess how often each package is cited by academic papers in the proceedings of the world's leading conference on discrete-event simulation, the Winter Simulation Conference (www.wintersim.org), counting the number of papers that mention the various simulation packages (Fig. 3). It is hard to find reliable, feature-by-feature comparisons of all the simulation packages on the market; yet this empirical evidence makes it easy to see what the world's body of simulation users thinks of the available tools. On this evidence, Arena is the tool of choice among serious users of business process simulation: it is neither the least expensive package on the market nor the most expensive; in the authors' view, it is simply the best.

6. A Supply Chain Simulation Case Study
To illustrate how to build a supply chain simulation model to support analysis and decision activities, a supply chain case (Jun and Sun, 2007) is studied here. The case shows how hierarchical simulation models are implemented using submodels, together with the specific managerial functions of suppliers, manufacturers, retailers, and the consumer market. Arena is selected as the simulation tool for building the models and as the execution platform.

6.1. Problem Description
In this section, a hierarchical supply chain inventory model is built; its network structure is shown in Fig. 4. The first stage consists of one manufacturer; the second stage is composed of two distribution centers, which serve different market districts; and the third stage consists of five retailers. Among the retailers, Retailer 1 and Retailer 2 serve Market 1, and the rest serve Market 2. When a market demand arises, a retailer is selected first. If the selected retailer can meet the demand, the market's order is filled; otherwise a stockout occurs and the retailer sends a replenishment order to its upstream distribution center, which behaves in the same way toward the manufacturer.
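Expressed as plain data (a sketch; the topology follows Fig. 4 and the description above), the network structure is:

```python
# Three-stage network of the case study (Fig. 4):
# one manufacturer -> two distribution centers -> five retailers -> two markets.
supply_chain = {
    "Manufacturer": ["DC 1", "DC 2"],
    "DC 1": ["Retailer 1", "Retailer 2"],
    "DC 2": ["Retailer 3", "Retailer 4", "Retailer 5"],
}
market_of = {
    "Retailer 1": "Market 1", "Retailer 2": "Market 1",
    "Retailer 3": "Market 2", "Retailer 4": "Market 2",
    "Retailer 5": "Market 2",
}
```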
Figure 4. Network structure of the hierarchical supply chain inventory model. (Manufacturer supplies DC 1 and DC 2; DC 1 supplies Retailers 1 and 2, which serve Market 1; DC 2 supplies Retailers 3, 4, and 5, which serve Market 2; inventory is held at each node.)
The simulation target of this case is to analyze the inventory level and service level of this simplified supply chain.

6.2. Arena Model Overall View
The overall view of the Arena model is shown in Fig. 5; the top-level model is composed of four parts:
• the process of retailers selling and making inventory decisions;
• the process of order fulfillment at the DCs;
• the process of order fulfillment by the manufacturer;
• the manufacturer's production process.
Figure 5. Overall view of the Arena model.
6.3. System Preferences
6.3.1. Variables
In this case, 13 variables are defined, as shown in Table 3.

Table 3. Inventory System Variables.

Inventory (5 rows): inventory of each retailer.
Order size (5 rows): order quantity of each retailer.
On order (5 rows): quantity ordered by a retailer but not yet received.
Reorder point (5 rows): reorder point of each retailer.
DC order size (2 rows): order quantity of each distribution center.
DC reorder point (2 rows): reorder point of each distribution center.
DC inventory (2 rows): inventory of each distribution center.
DC on order (2 rows): quantity ordered by a distribution center but not yet received.
DC backorder (2 rows): backordered quantity at each distribution center.
Production lotsize (1 row): the quantity produced as the result of one operation; initial value 1000.
Security level (1 row): the safety inventory of the finished product; initial value 100.
DC to retailer (5 rows): in-transit inventory from the DCs to the retailers.
Factory to DC (2 rows): in-transit inventory from the manufacturer to the DCs.

6.3.2. Expressions
The Expression module is used to import more complex mathematical expressions. In this system, four basic formulas are used, as shown in Table 4 and Figs. 6 and 7.

Table 4. Basic Formulas.

Delivery Time: lead time from a distribution center to its retailers.
Leadtime: lead time from the manufacturer to a distribution center.
Total retailer Inv: the total inventory of all retailers.
Total DC Inv: the total inventory of all distribution centers.

6.3.3. Sets
In this case, a counter is set for each member. Sale Count registers the selling records; Lost Sales Count registers the unmet demand caused by retailer stockouts; Order Count registers the number of orders sent by the retailers; DC Stockout registers the number of DC stockouts; and DC Order Count registers the number of orders sent by the DCs. These sets on the Arena platform are shown in Figs. 8 and 9.
Figure 6. Formulas definition.
Figure 7. Expression importing.
Figure 8. Sets definition.
Figure 9. Sets importing.
6.3.4. Advanced sets
Advanced sets serve the same function as sets; only the type of member differs, as Queues, Storages, and other objects can be set as members. Here, two Queue members are defined, representing the retailer orders awaiting shipment at the two DCs. Part of the snapshot is shown in Fig. 10.

6.4. Modeling and Simulation
The basic goal of the inventory simulation model is to capture the main interaction processes among the enterprises. The detailed processes include the retailer selling and inventory decision-making process, the order fulfillment processes of the DCs, and the order fulfillment and production processes of the manufacturer. Because the model contains many simple modules, we take the retailer selling and inventory decision-making process as an example of an Arena simulation submodel. The snapshot of this process model is shown in Fig. 11; it includes three parts:
• the markets and their correlative attributes;
• retailer selection;
• the order decision.
Figure 10. Advanced set importing.
Figure 11. Retailer selling and inventory decision-making process.
Figure 12. Market 1 demand module setting.
Figure 13. Demand 1 information module setting.
6.4.1. Markets and their features
There are two markets, from which demand originates. Demand is created by the Create modules Market 1 Demand and Market 2 Demand (Fig. 12). The demand interarrival times of the two markets differ: one demand is generated on average every 0.25 h in Market 1 and every 0.2 h in Market 2. The correlative attributes of the two markets are set by the Assign modules named Demand 1 Information and Demand 2 Information (Fig. 13).

6.4.2. Retailer selection
After selecting a retailer, the customer sends the demand order, and the retailer checks its stock. This check is implemented by the Decide module named Check Retailer Inventory (Fig. 14).
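A sketch of these two steps, demand generation (Sec. 6.4.1) and the retailer stock check (Sec. 6.4.2), in Python: exponential interarrival times are assumed here, since the source states only the average times between demands, and the starting stock levels and demand size are illustrative.

```python
import random

rng = random.Random(1)
MEAN_INTERARRIVAL = {"Market 1": 0.25, "Market 2": 0.20}  # hours
RETAILERS = {"Market 1": [0, 1], "Market 2": [2, 3, 4]}   # retailer indices
inventory = [120, 80, 140, 65, 85]                        # illustrative stock

def next_demand(market, now):
    """Time of the next demand in a market (exponential interarrivals)."""
    return now + rng.expovariate(1.0 / MEAN_INTERARRIVAL[market])

def serve_demand(market, quantity=1):
    """Pick a retailer for the market and check its stock (Decide module)."""
    idx = rng.choice(RETAILERS[market])
    if inventory[idx] >= quantity:
        inventory[idx] -= quantity
        return ("sale", idx)
    return ("lost sale", idx)  # the customer does not wait or switch retailers

t = 0.0
while (t := next_demand("Market 1", t)) < 24.0:  # one simulated day
    serve_demand("Market 1")
print("closing inventory:", inventory)
```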
Figure 14. Check retailer inventory module setting.
Figure 15. Record sales module setting.
If the retailer finds that its stock is insufficient, the branch below the Decide module is executed. It is assumed that the customer will not wait or select another retailer, so in this model a lost sale occurs whenever demand is not met. The losses are counted by the Record Lost Sales module, which also stores the index of the retailer concerned; the module setting is shown in Fig. 15.
If the retailer finds that its stock is adequate, the branch above the Decide module is executed: the sale succeeds and the order fulfillment process finishes. On the Arena platform, the Assign module Demand Fulfillment (Fig. 16) first updates the parameters, and the sale is then registered by the record counters (Fig. 17).

6.4.3. Order decision
Whether or not the sale has succeeded, the need for a replenishment order is checked. The decision rule compares the inventory position with the reorder point: an order is placed unless the sum of On Order(retailer index) and Inventory(retailer index) exceeds Reorder Point(retailer index). The decision is implemented by the Decide module named Retailer Order Decision (see Fig. 18).
Figure 16. Demand fulfillment module setting.
Figure 17. Record sales module setting.
Figure 18. Retailer order decision module setting.
Two branches follow from the Decide module (Fig. 18). If no order needs to be sent, the selling process finishes; otherwise an order is placed: the entity type is changed to Retailer Order (Fig. 19) in the Retailer Place Order module (Fig. 20), and the value of On Order is increased accordingly.
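Restated as code, the decision rule is a continuous-review reorder-point check (a Python sketch of the logic described above, not Arena syntax; the numeric values are illustrative):

```python
def retailer_order_decision(idx, inventory, on_order, reorder_point, order_size):
    """After each demand, place a replenishment order if the inventory
    position (on hand + on order) has fallen to the reorder point or below."""
    if inventory[idx] + on_order[idx] > reorder_point[idx]:
        return None                       # no order needed; selling finishes
    on_order[idx] += order_size[idx]      # the value of On Order is increased
    return ("Retailer Order", idx, order_size[idx])

inventory, on_order = [3, 10], [0, 0]
reorder_point, order_size = [5, 5], [20, 20]
print(retailer_order_decision(0, inventory, on_order, reorder_point, order_size))
# -> ('Retailer Order', 0, 20): retailer 1's inventory position was too low
```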
Figure 19. Record order module setting.
Figure 20. Retailer place order module setting.
6.5. Setting of Simulation Indicators and Operation Parameters
6.5.1. Indicator setting
The indicators in this case are shown in Table 5. They include the average inventory level of each retailer and DC, the service level of each retailer, DC, and the manufacturer, the total inventory of the retailers, and the standard deviations of the total inventories of the retailers, the DCs, and the manufacturer.

6.5.2. Operation parameter setting
The running time is assumed to be 100 days, with a warm-up period of 2 weeks. One day is set to 24 h, and the simulation is replicated 10 times. The setting snapshot is shown in Fig. 21.
Table 5. Simulation Indicators.

Retailer 1/2/3 inventory: Inventory(i) + DC Retailer(i); the retailers' inventories.
DC 1/2 inventory: DC Inventory(i) + factory DC(i); the distribution centers' inventories.
Manufacturer inventory: (MR(product) - NR(product)) * 100; the finished product inventory.
Retailer 1/2/3 inventory level: DAVG(Retailer i inventory); average retailer inventory level over the replications.
DC 1/2 inventory level: DAVG(DC i inventory); average DC inventory level over the replications.
Manufacturer inventory level: DAVG(Manufacturer inventory); average manufacturer inventory level over the replications.
Retailer 1 fill rate: NC(SalesCount1)/(NC(SalesCount) + NC(LostSales 1)); the retailers' service level.
Retailer inventory: Total retailer Inv; the total inventory of the retailers.
DC inventory: Total DC Inv; the total inventory of the DCs.
Retail inventory variation: DSTD(Retail inventory); standard deviation of the retailer inventory.
DC inventory variation: DSTD(DC inventory); standard deviation of the DC inventory.
6.6. Analysis of the Simulation Results
Studying the simulation results (Table 6), a stepwise increase can be observed from Retail Inventory Variation through DC Inventory Variation to FGI Variation, demonstrating the bullwhip effect. The current plan can then be adjusted and the resulting plans compared in order to find the one that meets expectations.
Figure 21. Operation parameter settings.
Table 6. Simulation Results.

Identifier                    Average   Half-width  Minimum   Maximum
Retailer 1 inventory level    127.46    0.56441     125.85    128.71
Retailer 2 inventory level    77.559    0.78338     75.224    79.037
Retailer 3 inventory level    141.81    1.0346      139.11    144.05
Retailer 4 inventory level    66.555    0.78844     64.359    67.759
Retailer 5 inventory level    85.263    0.69944     83.043    86.395
Retailer 1 fill rate          0.99409   0.00255     0.98726   0.99976
Retailer 2 fill rate          0.98348   0.00883     0.95622   0.99804
Retailer 3 fill rate          0.99018   0.00520     0.97354   0.99628
Retailer 4 fill rate          0.98560   0.00943     0.96058   0.99736
Retailer 5 fill rate          0.99457   0.00304     0.98513   0.99900
DC 1 inventory level          261.50    4.1768      250.32    271.00
DC 1 service level            0.86820   0.02685     0.79592   0.94000
DC 2 inventory level          366.69    6.1660      354.73    378.89
DC 2 service level            0.89198   0.1765      0.84211   0.92105
Manufacturer inventory level  350.48    14.142      320.55    377.96
Retail inventory variation    72.667    2.2813      68.256    77.700
DC inventory variation        183.18    5.9324      171.95    194.36
FGI variation                 240.47    8.1682      215.50    253.02
7. New Methods for Integrating Supply Chain Simulation Systems
The dynamic, nonlinear, and complex nature of a supply chain, with numerous interactions among its entities, is best evaluated using simulation models. Since Forrester discovered the fluctuation and amplification of demand from the downstream to the upstream of the supply chain, a large literature has analyzed this phenomenon. The fluctuation and amplification of orders and inventory stem mainly from the lack of shared production information between the enterprises in the supply chain. With shared information, each supply chain entity can make better decisions on ordering, capacity allocation, and production/material planning, so that the supply chain dynamics can be optimized.
When dealing with the dynamics and the collaboration of supply chains, analytical models alone may not provide satisfactory benchmarks, and simulation is potentially a more feasible tool (Swaminathan et al., 1998). At the same time, the uncertainty involved in all the transactions and demand patterns is the main obstacle to creating a seamless supply chain, and simulation is one of the most effective means of analyzing variability in supply chains (Schunk and Plott, 2000). This section introduces two methods for simulating complex supply chains: multiagent systems (MAS) and distributed simulation (DS) techniques.

7.1. MAS for Integrating the Supply Chain Simulation System
7.1.1. Structure and function of the MAS in a supply chain simulation system
As shown in Fig. 22, the MAS is based on a mediator structure. The system is composed of diverse agents: a registry server and agents for order management, supplier management, outsourcing company management, inventory analysis, process, planning, and scheduling. Each agent in the MAS plays one role of the supply chain.
The ability to model complex systems such as supply chain simulation systems is the significant improvement offered by multiagent simulation. Multiagent simulation is a more natural way to capture behavior in a system, particularly emergent behavior, and it can handle the microlevel and dispersive aspects of the supply chain simulation system as well as the macrolevel. MASs are becoming increasingly popular due to their inherent ability to model the distributed and autonomous features (including information asymmetry) of the various entities constituting a supply chain (SC) in a natural and realistic way.

7.1.2. MAS as a suitable solution for SCM
MAS is the most widely known solution for distributed problem solving. As each supply chain process is independent and the whole process is complex, an MAS is a suitable solution: it is emerging as a new paradigm for solving complicated problems under diverse environmental changes through communication between agents.
Figure 22. The structure of the MAS.
Cooperation and coordination between agents are probably the most important features of an MAS, and collaboration is likewise a central feature of the supply chain. Recently, many enterprises have been moving toward open architectures for integrating their activities with those of their suppliers, customers, and partners within the supply chain. Agent-based technology provides a natural way to design and implement such environments, and a number of researchers have applied agent technology to manufacturing enterprise integration, SCM, manufacturing planning, scheduling, and control.
The supply chain is a process that spans the entire enterprise, integrating processes based on flawless delivery of basic and customized services. SCM is an approach to satisfying customers' demands for products and services via integrated management of the whole business process, from raw material procurement to the delivery of the product or service to customers. In an integrated SCM system, each agent performs one or more SCM functions and coordinates its decisions with the other relevant agents.
Multiagent SCM (MA-SCM) is a multiagent paradigm for the computerization of SCM. A supply chain can naturally be viewed as a society of autonomous agents, and an SCM system can be viewed as an MAS in which the supply chain partners are represented by different agents, each providing certain functions of the supply chain system. Each agent performs one or more supply chain functions independently and coordinates its actions with the other agents. MA-SCM has been applied in various industries; the multiagent model consists of a set of agents and a framework for simulating their decisions and interactions, with the agents modeling the supply chain partners as the basic building blocks.
Multiagent modeling and simulation have many advantages in SCM. An MAS solves problems through cooperation, which naturally reflects the behavior of supply chain partners, and agents can be deployed as real-time decision support software for the supply chain partners they model. Applications of MAS appear throughout the research literature, under titles such as A Distributed Multi-Agent Model for Value Nets, Multi-Agent Based Integration Scheduling System, Multi-Agent Modeling and Simulation for Petroleum Supply Chain, and Study on Multi-Agent-Based Agile SCM.
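A minimal sketch of the mediator-based MAS idea of Fig. 22 in Python (the role names and message are illustrative; no agent framework is implied): independent agents, each owning one supply chain role, interact only through messages routed by a mediator.

```python
class Mediator:                       # plays the registry-server role of Fig. 22
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def send(self, sender, recipient, message):
        self.agents[recipient].receive(sender, message)

class Agent:
    def __init__(self, name, mediator):
        self.name = name
        self.mediator = mediator
        mediator.register(self)

    def receive(self, sender, message):
        # a real agent would decide and act here; this sketch just reports
        print(f"{self.name} received {message!r} from {sender}")

mediator = Mediator()
for role in ("order management", "inventory analysis", "scheduling"):
    Agent(role, mediator)
mediator.send("order management", "inventory analysis", "check stock: SKU-1")
```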
7.2. Distributed and Parallel Simulation Techniques for Integrating the SCS System
7.2.1. Application of DS to the supply chain
The goal of supply chain business integration is to generate an agile, multifirm cooperative system, capable of exploiting profitable opportunities in a volatile market and promoting cooperation between component firms while enhancing their autonomy and availability. Distributed and parallel supply chain simulation has been identified as the best means of performing what-if analysis on supply chains. It offers analysts and decision makers a means to replicate the behavior of complex systems, and it reduces the cost and time of building a new model through the reuse of existing models.
The simulation can be realized with a single model reproducing all the nodes (local simulation) or through several models (typically one per node) running in parallel in a single coordinated environment (DS). The general differences and application fields of the two approaches have been discussed at length in the literature. Parallel simulation (PS) is concerned with the execution of simulation programs on multiprocessor computing platforms, while DS is concerned with the execution of simulations on geographically distributed computers interconnected via a local or wide area network. Both cases imply the execution of a single main simulation model, made up of several subsimulation models executed in a distributed manner over multiple computing stations.
Hence, a single expression, parallel and distributed simulation (PDS), can be used to refer to both situations. The PDS paradigm is based on a concept of cooperation and collaboration in which each model co-participates in a single simulation execution, as a single decision maker in a "federated" environment.
In a supply chain simulation, each company can be seen as an independent model, with the modeling details of each company encapsulated within the company itself. To simulate the supply chain, the participating companies need to define the data that are going to be shared; the shared data are then exchanged as messages during the simulation run, transmitted through a network connecting the participating companies.
In the specific field of supply chain simulation, the parallel and distributed approach has progressively come to be considered the most viable, owing to its undeniable advantages (Fujimoto, 1999):
• the possibility of developing complex models while preserving the proprietary information associated with the individual systems;
• the correspondence between model and node, guaranteeing a real and updated representation of the individual industrial units;
• a reduction of the simulation time, taking advantage of the additional computing power of the distributed parallel processors;
• the integration of existing simulation models, tools, and languages: simulation models of individual local subsystems may already exist before a PDS is designed, and may be written in different simulation languages and executed on different platforms;
• increased tolerance to simulation failures: within a PDS composed of several simulation processors, if one processor fails, it may be possible for the others to continue the simulation runs without the failed processor.
Nevertheless, this solution requires complex IT, which often limits its use: a platform is needed for synchronization and data sharing, interfaces must be developed to integrate the models into a shared architecture, and the data transferred over the network must be reliable. The practical execution of DS needs a framework for time management and information sharing. The High Level Architecture (HLA) (DMSO, 1999) is the best-known PDS framework; it is a standard PDS architecture developed by the US DoD for military purposes and is becoming an IEEE standard. Owing to the potential advantages mentioned above, and to the IT issues still open, the DS approach in the supply chain context is a topic of great interest for the scientific and industrial communities, as recent research projects confirm.

7.2.2. The framework for distributed and parallel supply chain simulation
HLA is a general-purpose architecture for distributed computer simulation systems.
Using HLA, computer simulations can communicate with other computer simulations regardless of the computing platforms. Communication between simulations is managed by a run-time infrastructure (RTI). HLA supports the reuse of capabilities available in different simulations and allows the distributed, collaborative development of a complex simulation application. Thus a set of simulations, possibly developed independently, can be put together to form a large federation of simulations. Using the HLA, each participating simulation in the federation can define the information that it wants to share with others, while its internal behavior (and data) remains completely invisible to the outside world. With these capabilities, the HLA is a natural candidate for developing a cross-enterprise supply chain simulation. HLA consists of the following components:
• Interface specification. The interface specification document defines how HLA-compliant simulators interact with the RTI. The RTI provides a programming library and an application programming interface compliant with the interface specification.
• Object Model Template (OMT). The OMT specifies what information is communicated between simulations and how it is documented.
• HLA rules. Rules that simulations must obey in order to be compliant with the standard.
In HLA, each individual model is a federate, and a collection of federates forming a whole simulation system is a federation. Applied to a supply chain simulation, the federate is the basic simulation model of each individual company. Each company defines only the data it is willing to share in its simulation object model (SOM), using the OMT of the HLA; details of the OMT can be found in many research papers. Sensitive information in the model stays hidden within the company, and the simulation time synchronization of the federates is achieved automatically through the time management services of HLA.
In recent years, a number of new modeling and simulation frameworks using distributed techniques have been developed, including the Extensible Modeling and Simulation Framework (XMSF) (XMSF SAIC Web-Enabled RTI, 2003) and the simulation grid system Cosim-Grid (Li et al., 2005). Service-oriented architecture (SOA) has emerged as a new software development paradigm in which the major activity in application building is integrating existing services to deliver the required functionality. New simulation environments must provide facilities to simulate the activities of such SOA-based applications; pioneering work in this direction has been done in XMSF and Cosim-Grid.
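A conceptual sketch of the federate/federation idea in plain Python (this deliberately does not use the real HLA/RTI API, and all names and values are illustrative): each company model declares the data it shares, keeps the rest private, and the federation exchanges only the declared data.

```python
class Federate:
    """One company's simulation model; only `shared` attributes are visible."""
    def __init__(self, name, shared):
        self.name = name
        self._state = {}           # internal data: invisible to the federation
        self.shared = set(shared)  # declared, SOM-style, via the object model

    def publish(self):
        # expose only the attributes this company agreed to share
        return {k: v for k, v in self._state.items() if k in self.shared}

class Federation:
    def __init__(self, federates):
        self.federates = federates

    def exchange(self):
        # each federate sees only what the others chose to share
        return {f.name: f.publish() for f in self.federates}

supplier = Federate("supplier", shared={"promised_ship_date"})
supplier._state = {"promised_ship_date": 12.5, "unit_cost": 3.10}
oem = Federate("oem", shared={"order_forecast"})
oem._state = {"order_forecast": [100, 120, 90]}
print(Federation([supplier, oem]).exchange())
# unit_cost never leaves the supplier model
```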
8. Conclusions
Beginning with an overview of the opportunities for using simulation in SCM, this chapter has discussed the means by which a competitive supply chain might be modeled, measured, and managed. The basic concepts of computer simulation, its modeling processes, and its modeling methodologies were briefly introduced, and commercial simulation software was reviewed. Taking a hierarchical supply chain inventory control analysis problem as a case study, the chapter described the simulation modeling approach. Some new techniques and research trends, such as MAS and distributed and parallel simulation, were presented at the end of the chapter; with them, more complicated and dynamic supply chain systems can be rapidly modeled and reconfigured to support a variety of SCM analyses.

References
April, J, F Glover, JP Kelly and M Laguna (2003). Practical introduction to simulation optimization. In Proceedings of the 2003 Winter Simulation Conference, Vol. 1, 71–78.
Chatfield, DC, TP Harrison and JC Hayya (2008). European Journal of Operational Research, doi:10.1016/j.ejor.2008.03.027.
Christopher, M (1998). Logistics and Supply Chain Management, 1–20. Financial Times Press.
Defense Modeling and Simulation Office (DMSO) (1999). RTI Programmer's Guide, Version NG. hla.dmso.mil.
Fujimoto, R (1999). Parallel and distributed simulation. In Proceedings of the 1999 Winter Simulation Conference, 122–131. Piscataway, NJ: IEEE.
Gan, BP, L Liu, S Jain, SJ Turner, W Cai and WJ Hsu (2000). Distributed supply chain simulation across enterprise boundaries. In Proceedings of the 2000 Winter Simulation Conference, 1245–1251. Orlando, FL, USA.
Harry, P (2008). Computer simulation techniques: The definitive introduction. http://www.csc.ncsu.edu/faculty/perros//simulation.pdf.
Homem-de-Mello, T, A Shapiro and ML Spearman (1999). Finding optimal material release times using simulation-based optimization. Management Science, 45(1), 86–102.
Jet Propulsion Laboratory, Cal Tech (1997). Researchers stage largest military simulation ever. Webpage: JPL.
Jun, Z and B Sun (2007). Logistics System Simulation. Publishing House of Electronics Industry of China (in Chinese).
Li, BH, X Chai, Y Di, H Yu, Z Du and X Peng (2005). Research on service oriented simulation grid. In Autonomous Decentralized Systems, ISADS 2005 Proceedings, 7–14.
Lin, G, M Ettl, S Buckley, S Bagchi, D Yao, B Naccarato, R Allan, K Kim and L Koening (2000). Extended-enterprise supply-chain management at IBM personal systems group and other divisions. Interfaces, 30(1), 7–21.
Nancy, A (2005). Largest computational biology simulation mimics life's most essential nanomachine. News Release, Los Alamos National Laboratory. Webpage: LANL-Fuse-story7428.
OR/MS Today (2007). Simulation software survey. http://www.lionhrtpub.com/orms/surveys/Simulation/Simulation.html.
Project of Institute at the École Polytechnique Fédérale de Lausanne (EPFL) (2005). Mission to build a simulated brain begins. NewScientist, Switzerland. Webpage: NewSci7470.
Schriber, TJ and DT Brunner (2003). Inside discrete-event simulation software: How it works and why it matters. Tecnomatix Technology.
Schunk, DW and BM Plott (2000). Using simulation to analyze supply chains. In Proceedings of the 2000 Winter Simulation Conference, 1095–1100.
Sivakumar, AI (2001). Multi-objective dynamic scheduling using discrete event simulation. International Journal of Computer Integrated Manufacturing, 14(2), 154–167.
Strogatz, S (2007). The end of insight. In Brockman, J (ed.), What Is Your Dangerous Idea?, 130–131. New York: HarperCollins.
Swaminathan, JM, SF Smith and NM Sadeh (1998). Modeling supply chain dynamics: A multi-agent approach. Decision Sciences, 29(3), 607–632.
Tagaras, G and D Vlachos (2001). A periodic review inventory system with emergency replenishment. Management Science, 47(3), 415–429.
Tewoldeberhan, TW, A Verbraeck and E Valentin (2002). An evaluation and selection methodology for discrete-event simulation software. In Proceedings of the 34th Conference on Winter Simulation: Exploring New Frontiers, 67–75. ACM.
Varma, VA, GV Reklaitis, GE Blau and JF Pekny (2007). Computers and Chemical Engineering, 31, 692–711.
Wei, J and L Krajewski (2000). A model for comparing supply chain schedule integration approaches. International Journal of Production Research, 38(9), 2099–2123.
XMSF SAIC Web-Enabled RTI (2003). http://www.movesinstitute.org/xmsf/projects/WebRTI/XmsfSaicWebEnabledRtiDecember.
Biographical Notes
Jia Hongyu is an associate professor at Dalian Maritime University, P.R. China. She received a bachelor's degree in Computer Science from Beijing Jiaotong University, and master's and PhD degrees in Transportation Management and Engineering from Dalian Maritime University. Her research interests include system modeling, system simulation, supply chain management, and logistics information systems.
Zuo Peng is a master's student majoring in Management Science and Engineering at Dalian Maritime University. His major courses include Data Warehouse Technology, Modern Management Science, System Engineering, Operational Research, and Decision Support Systems. He currently serves as Lab Manager in the Liaoning Province Leading Laboratory of Logistics and Shipping Management Systems Engineering.
INDEX
accounting numbers, 639–642, 644, 648 advanced planning, 147, 148, 152 agrobusiness, 201, 202 analytic hierarchy process, 323 analytic network process, 515 anti-collision algorithms to electronic tags, 418 assembly lines, 384–386, 388, 391–394, 401, 402, 404, 407, 408, 411 asset integrity management, 323, 340
earnings management, 640–642, 646–648 effect size, 833–840 efficiency, 199–201, 204–212, 215, 217 enterprise resource planning, 297, 783 estimation theory, 791, 792, 799, 807 evolutionary computation, 383 framework, 297, 300, 302, 303, 311 game theory, 791, 792, 813, 815 grid computing, 853, 854, 856, 862
B2B, 889–895, 899, 903, 907 balanced scorecard (BSC), 700 bullwhip effect, 680–684 business information systems, 245 business models, 95, 100, 102, 103, 113 business organizations, 245, 247, 250, 253, 255, 264 business process management, 93, 99, 100 business process management system, 93, 100 business process reengineering, 117, 119 business processes, 890–893, 896, 898, 906, 907
healthcare, 4–6, 8, 21, 25–27, 29–43 healthcare interoperability framework, 25, 26, 33, 40–42 herd behavior, 817–823, 827, 829–832, 836, 839, 841–845 hospital, 47, 51, 55–57, 63–65, 69, 71, 73–84 information and communication technologies, 765, 766, 773, 775, 787 information sharing, 717–724, 726–731 information systems, 189, 297, 299, 537, 542, 547, 763, 765, 766, 768–770, 772, 773, 775, 776, 778–780, 782–785 information technology, 539 information systems, 189, 190 innovation, 359–362, 366, 367, 369–375, 377 intangible assets, 457–459, 461, 464 interoperability, 25–35, 38–43 Islam and neo-liberalism, 224 ISO 18000-6 standard, 418
case example, 533 chief information officer, 25 citation analysis, 487, 491, 495, 501, 506, 507 collaborative benchmarking, 280, 293 complexity, 25, 26, 29, 33, 35, 37, 38, 42, 43 computer, 911, 912, 914, 915, 940, 941 cultural auditing, 189–195 culture, 359, 360, 362, 366–371, 377 customer relationship management, 769, 785
Java, 889, 900
data mining, 607, 632, 638 decision-making, 765–767, 769, 773, 775, 778, 779, 782, 784, 785, 787 digital business eco systems, 607, 608, 638 digital world theory, 607, 616, 621 disclosure environment, 640, 643
knowledge, 655–659, 661–668, 670
Latin America, 539
logistics management, 189, 191, 192, 196
maintenance operations, 717, 722, 725, 729, 730 management decision making, 221, 228, 231, 234 manufacturing systems design, 383, 411 market share, 158 marketing, 565, 566, 568, 569, 572–579 meta-analysis, 817, 819, 831–835, 837–842, 845, 846 metacomputing, 853, 855, 856, 861, 862, 883 metaprogramming, 855, 856, 887 methodology, 297, 298, 300, 302, 303, 314, 315 model driven architecture (MDA), 25–27 multicriteria decision method, 522 multicultural organizations, 191 ombudsman, 47–57, 62–65 ontology, 360, 366–369, 371, 374, 377, 889–893, 895, 896, 898, 899, 904–907 operational efficiency, 457–459, 466, 467, 478, 480 optimization theory, 791, 792, 799, 813 oral history, 189, 191, 193 patents, 487, 490–509 patient representative, 55, 56 patients’ complaints, 47 performance indicators, 457–459, 469, 470, 480, 515–521, 527–533 performance measurement (PM), 117, 120, 357, 699–701 PLIB, 891, 893, 896, 899, 904–907 process management, 119 process modeling, 93–95 production line, 383, 384, 386, 388, 389, 407–409, 411, 412 publication bias, 839–842 quality assurance, 48, 50, 53 quality management system, 69, 70 Radio Frequency Identification (RFID), 415, 416, 431, 432, 441, 442, 444, 446, 448
relationship, 565–570, 572–575 research directions, 487, 490, 508 resources, 655, 658, 665–667, 670 RFID applications, 737–741, 744, 745 risk management, 297, 300, 310, 315 S˜ao Paulo-Brazil, 539 scheduling, 147, 148, 150–152, 154, 156, 157, 160, 161, 173, 179–184 SCOR model, 741, 744–748, 751, 754 selection, 515–519, 521, 526–528, 532, 533 Service Oriented Architecture (SOA), 9 service supply chain, 717, 718, 720, 724, 729–731 service-oriented architecture (SOA), 25, 26, 28, 856 simulation of rfid environments, 415, 417 small and medium-sized enterprise (SME), 544, 699, 700 strategy, 655, 656, 660, 662, 663, 665, 666, 668, 670 supply chain, 653, 655–665, 667–670 supply chain information, 911 supply chain management (SCM), 3, 675, 676, 679, 699, 700, 702, 737 survey research, 125, 136, 139 system theory, 222, 249, 256, 266, 267 technology, 655–665, 667, 669–671 technology landscape, 490, 493 technology trajectory, 493, 496 transfer of best practices, 587 transfer of management systems, 586 value chain, 147–149, 151, 153, 182, 184, 185 vendor managed inventory (VMI), 6 visualization, 490, 493, 496, 501, 502, 504–506 web services, 889, 891, 901, 902, 904