Rapid Modelling and Quick Response
Gerald Reiner Editor
Rapid Modelling and Quick Response - Intersection of Theory and Practice
Editor: Prof. Dr. Gerald Reiner, Université de Neuchâtel, Faculté des Sciences Économiques, Institut de l'Entreprise (IENE), rue A.-L. Breguet 1, 2000 Neuchâtel, Switzerland
ISBN 978-1-84996-524-8
e-ISBN 978-1-84996-525-5
DOI 10.1007/978-1-84996-525-5
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2010932606

© Springer-Verlag London Limited 2010

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licenses issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: eStudio Calamar S.L.
Printed on acid-free paper.
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Rapid Modelling and Quick Response - Intersection of Theory and Practice

This volume is a sequel to the proceedings volume of the 1st Rapid Modelling Conference, which focused on Rapid Modelling for increasing competitiveness. The main focus of the 2nd Rapid Modelling Conference proceedings volume, "Rapid Modelling and Quick Response - Intersection of Theory and Practice", is the transfer of knowledge from theory to practice, providing the theoretical foundations for successful performance improvement (based on lead time reduction, etc., as well as financial performance measures). Furthermore, illustrations are given through teaching/business cases, success stories on new software tools in this field, and new approaches. In general, Rapid Modelling is based on queueing theory, but other mathematical modelling techniques, as well as simulation models which facilitate the transfer of knowledge from theory to application, are of interest as well. Together with the proceedings volume of selected papers presented at the 1st Rapid Modelling Conference, "Increasing Competitiveness - Tools and Mindset", the interested reader should get a good overview of what is going on in this field.

The objective of this conference series is to provide an international, multidisciplinary platform for researchers and practitioners to create and exchange knowledge on increasing competitiveness through Rapid Modelling. In this volume, we demonstrate that lead time reduction (through techniques ranging from quick response manufacturing to lean production) is very important but not enough. Additional factors such as risk, costs, revenues, environment, etc. have to be considered as well. We accepted papers that contribute to these themes in the form of:

• Rapid Modelling
• Case study research, survey research, action research, longitudinal research
• Theoretical papers
• Teaching/business case studies
Relevant topics are:

• Queueing Theory
• Rapid Modelling in Manufacturing and Logistics
• Rapid Modelling in Services
• Rapid Modelling and Financial Performance Measurement
• Product and Process Development
• Supply Chain Management
Based on these categories, the proceedings volume has been divided into six parts and brings together selected papers which present different aspects of the 2nd Rapid Modelling Conference. The papers are allocated according to their main contribution. All papers passed through a double-blind referee process to ensure their quality. The RMC10 (2nd Rapid Modelling Conference "Rapid Modelling and Quick Response - Intersection of Theory and Practice") takes place at the University of Neuchâtel, located in the heart of the city of Neuchâtel, Switzerland, and is based on a collaboration with the project partners within our IAPP Project (No. 217891, see also http://www.unine.ch/iene-kje). We are happy to have brought together authors from Algeria, Austria, Belgium, the United Kingdom, Finland, Germany, Hungary, Italy, Sweden, Switzerland, Turkey and the United States of America.
Acknowledgement

We would like to thank all those who contributed to the conference and this proceedings volume. First, we wish to thank all authors and presenters for their contributions. Furthermore, we appreciate the valuable help from the members of the international scientific board, the referees and our sponsors (see the Appendix for the respective lists). In particular, our gratitude goes to our team at the Enterprise Institute at the University of Neuchâtel, Gina Fiore Walder, Reinhold Schodl, Boualem Rabta, Arda Alp, Gil Gomes dos Santos and Yvan Nieto, who supported this conference project and handled the majority of the text reviews as well as the formatting work with LaTeX. Ronald Kurz created the logo of our conference and took over the development of the conference homepage http://www.unine.ch/rmc10. Finally, it has to be mentioned that the conference as well as this book are supported by the EU SEVENTH FRAMEWORK PROGRAMME - THE PEOPLE PROGRAMME - Industry-Academia Partnerships and Pathways Project (No. 217891) "How revolutionary queuing based modelling software helps keeping jobs in Europe. The creation of a lead time reduction software that increases industry competitiveness and supports academic research."

Neuchâtel, June 2010
Gerald Reiner
Contents
Part I Queueing Theory

Perturbation Analysis of M/M/1 Queue
Karim Abbas and Djamil Aïssani

Series Expansions in Queues with Server Vacation
Fazia Rahmoune and Djamil Aïssani

Part II Rapid Modelling in Manufacturing and Logistics

Optimal Management of Equipments of the BMT Containers Terminal (Bejaia's Harbor)
Djamil Aïssani, Mouloud Cherfaoui, Smaïl Adjabi, S. Hocine and N. Zareb

Production Inventory Models for a Multi-product Batch Production System
Ananth Krishnamurthy and Divya Seethapathy

Dependency Between Performance of Production Processes and Variability – an Analysis Based on Empirical Data
Martin Poiger, Gerald Reiner and Werner Jammernegg

Improving Business Processes with Rapid Modeling: the Case of Digger
Reinhold Schodl, Nathan Kunz, Gerald Reiner and Gil Gomes dos Santos

Part III Rapid Modelling in Services

Quick Response Service: The Case of a Non-Profit Humanitarian Service Organization
Arda Alp, Gerald Reiner and Jeffrey S. Petty

Applying Operations Management Principles on Optimisation of Scientific Computing Clusters
Ari-Pekka Hameri and Tapio Niemi

Increasing Customer Satisfaction in Queuing Systems with Rapid Modelling
Noémi Kalló and Tamás Koltai

Rapid Modelling of Patient Flow in a Health Care Setting: Integrating Simulation with Lean
Claire Worthington, Stewart Robinson, Nicola Burgess and Zoe Radnor

Part IV Rapid Modelling and Financial Performance Measurement

Evaluation of the Dynamic Impacts of Lead Time Reduction on Finance Based on Open Queueing Networks
Dominik Gläßer, Boualem Rabta, Gerald Reiner and Arda Alp

The Financial Impact of a Rapid Modeling Issue: the Case of Lot Sizing
Lien G. Perdu and Nico J. Vandaele

Part V Product and Process Development

A Flexibility Based Rapid Response Model in Ready to Wear Sector, in Turkey
Müjde Erol Genevois and Deniz Yensarfati

Modular Product Architecture: The Role of Information Exchange for Customization
AHM Shamsuzzoha and Petri T. Helo

Part VI Supply Chain Management

The Impact of Technological Change and OIPs on Lead Time Reduction
Krisztina Demeter and Zsolt Matyusz

Global Supply Chain Management and Delivery Performance: a Contingent Perspective
Ruggero Golini and Matteo Kalchschmidt

In-Transit Distribution Strategy: Hope for European Factories?
Per Hilletofth, Frida Claesson and Olli-Pekka Hilmola

Effect of component interdependency on inventory allocation
Yohanes Kristianto Nugroho, AHM Shamsuzzoha and Petri T. Helo

Dynamic Nature and Long-Term Effect of Events on Supply Chain Confidence
Harri Lorentz and Olli-Pekka Hilmola

Evaluation of Supply Process Improvements Illustrated by Means of a JIS Supply Process from the Automotive Industry
Gerald Reiner and Martin Poiger

Information Needs for Decisions on Supply Chain Design
Stefan Seuring and Tino Bauer

A Conceptual Framework for the Integration of Transportation Management Systems and Carbon Calculators
Stefan Treitl, Heidrun Rosič and Werner Jammernegg

A Conceptual Framework for the Analysis of Supply Chain Risk
Monika Weishäupl and Werner Jammernegg

A International Scientific Board
B Sponsors
List of Contributors
Karim Abbas, Laboratory LAMOS, University of Béjaia, Campus of Targa Ouzemour, 06000 Béjaia, Algeria
Smaïl Adjabi, Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Djamil Aïssani, Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Arda Alp, Enterprise Institute, University of Neuchâtel, Rue A.-L. Breguet 1, CH-2000 Neuchâtel, Switzerland
Tino Bauer, FTI Consulting Deutschland GmbH, Maximilianstrasse 54, 80538 Muenchen, Germany
Nicola Burgess, Warwick Business School, University of Warwick, Coventry, CV4 7AL, UK
Mouloud Cherfaoui, Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Frida Claesson, School of Technology and Society, University of Skövde, 541 28 Skövde, Sweden
Suzanne de Treville, Faculty of Business and Economics, University of Lausanne, Internef 315, CH-1015 Lausanne, Switzerland
Krisztina Demeter, Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary
Müjde Erol Genevois, Industrial Engineering Department, Galatasaray University, Ciragan Cad. No: 36 Ortakoy, Istanbul, Turkey
Dominik Gläßer, Institut de l'entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000 Neuchâtel, Switzerland
Ruggero Golini, Department of Economics and Technology Management, Università degli Studi di Bergamo, Viale Marconi 5, 24044 Dalmine (BG), Italy
Gil Gomes dos Santos, Enterprise Institute, Faculty of Economics, University of Neuchâtel, Avenue A.-L. Breguet 1, 2000 Neuchâtel, Switzerland
Ari-Pekka Hameri, Ecole des HEC, University of Lausanne, Internef, 1015 Lausanne, Switzerland
Petri T. Helo, Department of Production, University of Vaasa, Finland
Per Hilletofth, Logistic Research Group, University of Skövde, 541 28 Skövde, Sweden
Olli-Pekka Hilmola, Lappeenranta University of Technology, Kouvola Unit, Prikaatintie 9, 45100 Kouvola, Finland
Safia Hocine, Department of Operational Research, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Werner Jammernegg, Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria
Matteo Kalchschmidt, Department of Economics and Technology Management, Università di Bergamo, Viale Marconi 5, 24044 Dalmine, Italy
Noémi Kalló, Department of Management and Corporate Economics, Budapest University of Technology and Economics, Műegyetem rkp. 9. T. ép. IV. em., 1111 Budapest, Hungary
Tamás Koltai, Department of Management and Corporate Economics, Budapest University of Technology and Economics, Műegyetem rkp. 9. T. ép. IV. em., 1111 Budapest, Hungary
Ananth Krishnamurthy, Department of Industrial and Systems Engineering, University of Wisconsin-Madison, 1513 University Avenue, Madison, WI 53706, USA
Yohanes Kristianto Nugroho, Department of Production, University of Vaasa, Finland
Nathan Kunz, Enterprise Institute, Faculty of Economics, University of Neuchâtel, Avenue A.-L. Breguet 1, 2000 Neuchâtel, Switzerland
Harri Lorentz, Turku School of Economics, Finland
Dávid Losonci, Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary
Zsolt Matyusz, Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary
Tapio Niemi, Helsinki Institute of Physics, CERN, CH-1211 Geneva, Switzerland
Lien G. Perdu, Dept of Business and Economics, K.U. Leuven, Naamsestraat 69, BE-3000 Leuven, Belgium
Jeffrey S. Petty, Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London, United Kingdom
Martin Poiger, University of Applied Sciences BFI Vienna, Wohlmutstrasse 22, A-1020 Wien, Austria
Boualem Rabta, Enterprise Institute, University of Neuchâtel, Rue A.-L. Breguet 1, CH-2000 Neuchâtel, Switzerland
Zoe Radnor, Warwick Business School, University of Warwick, Coventry, CV4 7AL, UK
Fazia Rahmoune, LAMOS Laboratory of Modelling and Optimization of Systems, University of Bejaia, 06000 Bejaia, Algeria
Gerald Reiner, Institut de l'entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000 Neuchâtel, Switzerland
Stewart Robinson, Warwick Business School, University of Warwick, Coventry, CV4 7AL, UK
Heidrun Rosič, Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria
Reinhold Schodl, Institut de l'entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000 Neuchâtel, Switzerland
Divya Seethapathy, Department of Industrial and Systems Engineering, University of Wisconsin, Madison, WI 53706, USA
Stefan Seuring, Department of International Management, University of Kassel, Steinstr. 19, 37213 Witzenhausen, Germany
AHM Shamsuzzoha, Department of Production, University of Vaasa, Finland
Stefan Treitl, WU Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria
Nico J. Vandaele, Research Center for Operations Management, Department of Decision Sciences and Information Management, K.U. Leuven, 3000 Leuven, Belgium
Monika Weishäupl, WU Vienna University of Economics and Business, Nordbergstr. 15, 1090 Wien, Austria
Claire Worthington, Warwick Business School, University of Warwick, Coventry, CV4 7AL, UK
Deniz Yensarfati, Industrial Engineering Department, Galatasaray University, Ciragan Cad. No: 36 Ortakoy, Istanbul, Turkey
Nadira Zareb, Department of Operational Research, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Part I
Queueing Theory
Perturbation Analysis of M/M/1 Queue
Karim Abbas and Djamil Aïssani
Abstract This paper treats the problem of evaluating the sensitivity of performance measures to changes in system parameters for a specific class of stochastic models. Motivated by the coexistence of elastic and unresponsive traffic on transmission links of telecommunication networks, we study the impact of a small perturbation of the service rate on the stationary characteristics of an M/M/1 queue. For this model we obtain a new perturbation bound by using the Strong Stability Approach. Our analysis is based on bounding the distance between stationary distributions in a suitable functional space.
1 Introduction

A manufacturing process or a telecommunication network is a dynamical system which can, in principle, be described by using a rapid modelling technique such as queueing theory. In particular, the usefulness of the M/M/1 queueing model is multiplied many times if the model approximates the behaviour of queues that deviate slightly from the assumptions in the model. An analysis of the sensitivity of performance measures to changes in model parameters is an issue of practical importance. The study of this queueing model is motivated by the following engineering problem: Consider a transmission link of a telecommunication network carrying elastic traffic, able to adapt to the congestion level of the network, and a small proportion of traffic, which is unresponsive to congestion. The problem addressed in this paper
is to derive quantitative results for estimating the influence of unresponsive traffic on elastic traffic. Suppose, for example, we wish to model the behaviour of a single server queue with an infinite waiting room and a first-in first-out discipline. Let P denote the transition kernel of the imbedded jump chain of the M/M/1 queue with arrival rate λ and service rate μ. This Markov chain, defined on some (denumerable) state space S, has unique stationary distribution π. What would be the effect on the stationary performance of the queue if we increased the service rate of the server by ε? Let P̃ denote the Markov transition kernel of the Markov chain modelling the alternative system, in our example the M/M/1 queue with service rate μ + ε, and assume that P̃ has unique stationary distribution π̃. The question about the effect of switching from P to P̃ on the stationary behaviour is expressed by π − π̃, the difference between the stationary distributions. Obviously, a bound on the effect of the perturbation is of great interest. More specifically, let ‖·‖_tv denote the total variation norm; then the above problem can be phrased as follows: Can ‖π − π̃‖_tv be approximated or bounded in terms of ‖P − P̃‖_tv? This is known as the "perturbation analysis" or "stability problem" of Markov chains in the literature. However, convergence w.r.t. the total variation norm allows for bounding the effect of switching from P to P̃ for bounded performance measures only.

There exist numerous results on perturbation bounds for Markov chains. General results are summarized by Heidergott and Hordijk (2003). One group of results concerns the sensitivity of the stationary distribution of a finite, homogeneous Markov chain (see Heidergott et al, 2007), where the bounds are derived using methods of matrix analysis; see the review of Cho and Meyer (2001) and recent papers of Kirkland (2002) and Neumann and Xu (2004). Another group includes perturbation bounds for finite-time and invariant distributions of Markov chains with general state space; see Anisimov (1988), Rachev (1989), Aïssani and Kartashov (1983), Kartashov (1996) and Mitrophanov (2005). In these works, the bounds for general Markov chains are expressed in terms of ergodicity coefficients of the iterated transition kernel, which are difficult to compute for infinite state spaces. These results were obtained using operator-theoretic and probabilistic methods. Recent work examines the robustness of queueing models in a general framework. The sensitivity of the queue to a perturbation is measured by various functions (metrics) of the probability distributions associated with the perturbed and nominal queueing processes. These analyses can be found in Kotzurek and Stoyan (1976), Whitt (1981), Zolotarev (1977), Fricker et al (2009) and Altman et al (2004). Related work on the robustness of statistical models can be found in Albin (1982). In Albin (1984), the author examined the robustness of the M/M/1 queueing model to several specific perturbations in the arrival process, using a Taylor series expansion to accurately predict the operating characteristics of queues with arrival processes that are slightly different from the Poisson process. More recently, the M/M/1 queueing model with perturbations in the service process has been studied via a perturbation analysis of a Markov chain by Heidergott (2008). In this paper, we consider the same perturbation introduced in Heidergott (2008) and obtain new results by applying another approach.

Therefore, we set out to explain our approach with the M/M/1 queue with service rate μ as P and the M/M/1 queue with service rate μ + ε as the P̃ system, for ε sufficiently small. For these systems π and π̃ are known and everything can be computed explicitly. This allows for evaluating the potential of our approach. The paper is organized as follows. Section 2 presents the Strong Stability Approach. Section 3 is devoted to establishing the bound on the perturbation. Numerical examples are presented in Section 4. Finally, we point out directions of further research.
2 Strong Stability Approach

The main tool for our analysis is the weighted supremum norm, also called the υ-norm and denoted by ‖·‖_υ, where υ is some vector with elements υ(s) > 1 for all s ∈ S. For any w ∈ R^S,

    ‖w‖_υ = sup_{i∈S} |w(i)| / υ(i).

Let μ be a probability measure on S; then the υ-norm of μ is defined as

    ‖μ‖_υ = Σ_{s∈S} υ(s) μ(ds).

The υ-norm is extended to operators on S in the following way: for A ∈ R^{S×S},

    ‖A‖_υ = sup_{i, ‖w‖_υ ≤ 1} ( Σ_{j∈S} |A(i, j) w(j)| ) / υ(i).

Note that υ-norm convergence to 0 implies elementwise convergence to 0. Suppose that π and π̃ have finite υ-norm; then

    |π f − π̃ f| ≤ ‖π − π̃‖_υ ‖f‖_υ inf_{s∈S} υ(s),

for all f with ‖f‖_υ finite. For our analysis we will have S ⊂ N and we will assume that υ(s) is of the particular form υ_β(s) = β^s, for β > 1. Hence, the bound becomes

    |π f − π̃ f| ≤ ‖π − π̃‖_{υ_β} ‖f‖_{υ_β},        (1)

for all f such that |f(s)| ≤ c β^s for some finite number c. Denote the stationary distribution of P by π (and the stationary projector of P by Π), and denote the stationary distribution of P̃ by π̃ (and the stationary projector of P̃ by Π̃). Let

    D = Σ_{n≥0} (P^n − Π).

Elementary calculation shows

    (I − P) D = I − Π.

Multiplying this equation by Π̃ and using the fact that Π̃ Π = Π yields

    Π̃ (I − P) D = Π̃ − Π.

Using that Π̃ = Π̃ P̃, we obtain

    Π̃ = Π + Π̃ (P̃ − P) D.        (2)

Inserting the right-hand side of the above expression repeatedly for Π̃ yields

    Π̃ = Π Σ_{n≥0} ((P̃ − P) D)^n.        (3)

Switching from matrix to vector notation, we arrive at the following basic series expansion:

    π̃ = π Σ_{n≥0} ((P̃ − P) D)^n.        (4)

We say that the Markov chain X with transition kernel P satisfying ‖P‖_υ < ∞ and invariant measure π is strongly υ-stable if every stochastic transition kernel P̃ in some neighbourhood {P̃ : ‖P̃ − P‖_υ < ε} admits a unique invariant measure π̃ such that ‖π̃ − π‖_υ tends to zero as ‖P̃ − P‖_υ tends to zero. The key criterion for strong stability of a Markov chain X is the existence of a deficient version of P, defined in the following. Let X be a Markov chain with transition kernel P and invariant measure π. We call a deficient Markov kernel T a residual for P with respect to ‖·‖_υ if there exist a measure σ and a nonnegative measurable function h on N satisfying the following conditions:

a. π h > 0, σ 1 = 1, σ h > 0,
b. the operator T = P − h ∘ σ is nonnegative,
c. the norm of the operator T is strictly less than one, i.e., ‖T‖_υ < 1,
d. ‖P‖_υ < ∞,

where ∘ denotes the convolution between a measure and a function and 1 is the vector having all components equal to 1. It has been shown in Aïssani and Kartashov (1983) that a Markov chain X with transition kernel P is strongly stable with respect to υ if and only if a residual for P with respect to υ exists. Although the strong stability approach originates from the stability theory of Markov chains, the techniques developed for it allow one to establish numerical algorithms for bounding ‖π − π̃‖_υ (Abbas and Aïssani, 2010a,b; Bouallouche-Medjkoune and Aïssani, 2006; Rabta and Aïssani, 2005). To see this, revisit the series expansion in (4). The following equality has been established independently by Kartashov (1986) and Hordijk and Spieksma (1994):

    D = (I − Π) Σ_{n≥0} T^n (I − Π).        (5)

It is worth noting that for the above relation to hold it is necessary that T is a residual with respect to P. Note that (P̃ − P)(I − Π) = (P̃ − P), and multiplying (5) by (P̃ − P) yields

    (P̃ − P) D = (P̃ − P) Σ_{n≥0} T^n (I − Π).

Inserting the above into (3) yields

    π̃ − π = π Σ_{n≥1} ( (P̃ − P) Σ_{m≥0} T^m (I − Π) )^n.        (6)

Based on this series expansion, a bound on ‖π̃ − π‖_υ is established in the following theorem.

Theorem 0.1. (Kartashov, 1986) Let P be strongly stable. If

    ‖P̃ − P‖_υ < (1 − ‖T‖_υ) / ‖I − Π‖_υ,

then the following bound holds:

    ‖π̃ − π‖_υ ≤ ‖π‖_υ ‖I − Π‖_υ ‖P̃ − P‖_υ / (1 − ‖T‖_υ − ‖I − Π‖_υ ‖P̃ − P‖_υ).

Proof. Under the assumptions of the theorem, it holds that

    ‖ (P̃ − P) Σ_{m≥0} T^m (I − Π) ‖_υ ≤ ‖P̃ − P‖_υ Σ_{m≥0} ‖T‖_υ^m ‖I − Π‖_υ ≤ ‖P̃ − P‖_υ ‖I − Π‖_υ / (1 − ‖T‖_υ),

where we use the fact that strong stability implies ‖T‖_υ < 1. Provided that

    ‖P̃ − P‖_υ < (1 − ‖T‖_υ) / ‖I − Π‖_υ,

the series in (6) converges in the υ-norm sense, and with the help of the above inequality we obtain

    ‖ Σ_{n≥0} ( (P̃ − P) Σ_{m≥0} T^m (I − Π) )^n ‖_υ ≤ (1 − ‖T‖_υ) / (1 − ‖T‖_υ − ‖P̃ − P‖_υ ‖I − Π‖_υ),

which proves the claim.
Note that the term ‖I − Π‖_υ in the bound provided in Theorem 0.1 can be bounded by

    ‖I − Π‖_υ ≤ 1 + ‖1‖_υ ‖π‖_υ.
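Although the M/M/1 queue has an infinite state space, the algebra behind the expansion (4) is easy to check numerically on a small finite-state chain. The following sketch is not part of the original paper; it uses a hypothetical 3-state kernel and the standard fundamental-matrix formula for D to verify that π̃ obtained from the series (summed in closed form) coincides with the stationary distribution of P̃.

```python
import numpy as np

def stationary(P):
    """Stationary row vector of an ergodic stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def deviation_matrix(P):
    """D = sum_{n>=0} (P^n - Pi), evaluated via the fundamental matrix (I - P + Pi)^{-1} - Pi."""
    n = P.shape[0]
    Pi = np.tile(stationary(P), (n, 1))
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

# Illustrative kernels only (not the M/M/1 kernels of the paper).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
P_tilde = np.array([[0.45, 0.35, 0.20],
                    [0.20, 0.60, 0.20],
                    [0.10, 0.30, 0.60]])

pi, D = stationary(P), deviation_matrix(P)
M = (P_tilde - P) @ D
# The series (4) sums to pi (I - M)^{-1}; the closed form only requires I - M to be invertible.
pi_tilde_series = pi @ np.linalg.inv(np.eye(3) - M)
print(np.max(np.abs(pi_tilde_series - stationary(P_tilde))))  # close to machine precision
```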
3 Analysis of the Model

We first consider the M/M/1 queue with arrival rate λ and service rate μ. Let ρ = λ/μ. The Markov kernel P is then given by

    P_{ij} = λ/(λ+μ)   if j = i + 1,
    P_{ij} = μ/(λ+μ)   if j = i − 1, i > 0,
    P_{ij} = 0         otherwise,

and P_{00} = 1 − λ/(λ+μ). The kernel P̃ of the M/M/1 queue with service rate μ + ε, such that μ + ε > λ, is given by

    P̃_{ij} = λ/(λ+μ+ε)       if j = i + 1,
    P̃_{ij} = (μ+ε)/(λ+μ+ε)   if j = i − 1, i > 0,
    P̃_{ij} = 0               otherwise,

and P̃_{00} = 1 − λ/(λ+μ+ε). In the following we derive bounds for the effect on the stationary distribution of the queue length in the M/M/1 queue when we increase the service rate. For our bounds, we require bounds on the basic input entities such as π and T. In order to establish those bounds, we have to specify υ. Specifically, for β > 1, we choose υ_β(s) = β^s, s ∈ S, as our norm-defining mapping. For ease of reference, we introduce the following condition:

    (C)    ρ < 1   and   1 < β < 1/ρ.

Essential for our numerical bound on the deviation between the stationary distributions π and π̃ is a bound on the deviation of the transition kernel P̃ from P. This bound is provided in the following lemma.

Lemma 0.1. If condition (C) is satisfied, then

    ‖P − P̃‖_{υ_β} ≤ λ(1+β)ε / ((λ+μ)(λ+μ+ε)) = Δ(β).

Proof. By definition, we have

    ‖P − P̃‖_{υ_β} = sup_{k≥0} (1/υ(k)) Σ_{j≥0} υ(j) |P_{kj} − P̃_{kj}|.

For k = 0:

    Δ_0 = Σ_{j≥0} υ(j) |P_{0j} − P̃_{0j}| = υ(0)|P_{00} − P̃_{00}| + υ(1)|P_{01} − P̃_{01}| = λ(1+β)ε / ((λ+μ)(λ+μ+ε)).

For k ≥ 1:

    Δ_1 = sup_{k≥1} (1/β^k) Σ_{j≥0} β^j |P_{kj} − P̃_{kj}|
        = sup_{k≥1} (1/β^k) ( β^{k+1} |λ/(λ+μ+ε) − λ/(λ+μ)| + β^{k−1} |(μ+ε)/(λ+μ+ε) − μ/(λ+μ)| )
        = (ε / ((λ+μ)(λ+μ+ε))) (λβ + λ/β).

Note that ‖P − P̃‖_{υ_β} = max{Δ_0, Δ_1}, and from Δ_1 < Δ_0 it follows that ‖P − P̃‖_{υ_β} ≤ Δ_0 = Δ(β).

In the following lemma we identify the range of β that leads to a finite υ_β-norm of π.

Lemma 0.2. Provided that (C) holds, the υ_β-norm of π is bounded by

    ‖π‖_{υ_β} = (1 − ρ)/(1 − ρβ) = c_0(β) < ∞.

Proof. The stationary distribution of P is known to be equal to π(i) = ρ^i (1 − ρ). Hence,

    ‖π‖_{υ_β} = (1 − ρ) Σ_{i≥0} β^i ρ^i,

which is finite if βρ < 1. Note that ρ < 1 is assumed for stability and β > 1 is assumed in the definition of υ_β.

Lemma 0.3. If condition (C) holds, then

    ‖I − Π‖_υ ≤ (2 − ρ(β+1))/(1 − ρβ) = c(β).

Proof. We have

    ‖I − Π‖_υ ≤ 1 + ‖1‖_υ ‖π‖_υ.

By definition, we obtain ‖1‖_υ = 1 and ‖1‖_υ ‖π‖_υ = c_0(β). Therefore, ‖I − Π‖_υ ≤ 1 + c_0(β) = c(β).

Let T denote the taboo Markov kernel for taboo state zero; that is, T is a deficient Markov kernel that avoids jumps to state zero. More specifically, for i, j let

    T_{ij} = 0        if i = 0,
    T_{ij} = P_{ij}   otherwise.        (7)

Lemma 0.4. Provided that condition (C) holds, it holds that ‖T‖_{υ_β} = τ(β) < 1.

Proof. For k = 0:

    Tυ(0) = Σ_{j≥0} υ(j) T_{0j} = Σ_{j≥0} β^j × 0 = 0.

For k ≥ 1:

    Tυ(k) = Σ_{j≥0} υ(j) P_{kj}
          = υ(k+1) λ/(λ+μ) + υ(k−1) μ/(λ+μ)        (8)
          = β^k ( βλ/(λ+μ) + μ/(β(λ+μ)) ) = τ(β) υ(k),        (9)

with τ(β) = λβ/(λ+μ) + μ/(β(λ+μ)). Thus ‖T‖_υ = τ(β), i.e., the υ-norm of T is equal to τ(β). Provided that ρβ < 1, it holds that

    τ(β) = (1 + ρβ^2)/((1 + ρ)β),

which yields τ(β) < 1. To summarize,

    ρβ < 1  ⇒  ‖T‖_υ = τ(β) < 1, for υ(k) = β^k,

which proves the claim.

Lemma 0.5. Provided that condition (C) holds, ‖P‖_{υ_β} is finite.

Proof. Let H denote the matrix with row zero equal to that of P and all other rows equal to zero. Then T = P − H, so P = T + H and ‖P‖_{υ_β} ≤ ‖T‖_{υ_β} + ‖H‖_{υ_β}. By Lemma 0.4 it holds that ‖T‖_{υ_β} < ∞, and for the proof of the claim it suffices to show that ‖H‖_{υ_β} < ∞, which follows from

    ‖H‖_{υ_β} = Σ_{j≥0} β^j P_{0j} = μ/(λ+μ) + β λ/(λ+μ) < ∞.

Let

    B = {β > 1 : τ(β) < 1}.

Theorem 0.2. Let ρ < 1. For β ∈ B, the deficient Markov kernel T defined in (7) is a residual of the Markov kernel P with respect to υ_β, provided that τ(β) < 1.

Proof. Let h(i) = 1_{i=0}, that is,

    h(i) = 1   if i = 0,
    h(i) = 0   otherwise,

and choose σ to be

    σ_j = P_{0j} = μ/(λ+μ)   if j = 0,
    σ_j = P_{0j} = λ/(λ+μ)   if j = 1,
    σ_j = 0                  otherwise.

In the following we show that conditions a, b, c and d hold for T defined in (7). We start by verifying conditions a and b. With the above definitions it holds that

    π h = π_0 = 1 − ρ > 0,
    σ 1 = μ/(λ+μ) + λ/(λ+μ) + 0 = 1,

where 1 is the vector having all components equal to 1, and

    σ h = 1 × μ/(λ+μ) + 0 × λ/(λ+μ) + 0 = μ/(λ+μ) > 0.

In the same way,

    T_{ij} = P_{ij} − h(i)σ_j = P_{0j} − σ_j = 0   if i = 0,
    T_{ij} = P_{ij} − 0 × σ_j = P_{ij}             otherwise,

and from the definition of the kernel (P_{ij}) it is obvious that (T_{ij}) ≥ 0. We now turn to conditions c and d: by Lemma 0.4, ‖T‖_{υ_β} = τ(β) < 1 for υ(k) = β^k with β ∈ B, and by Lemma 0.5, ‖P‖_{υ_β} < ∞, which completes the proof.

By Theorem 0.2, the general bound provided in Theorem 0.1 can be applied to the Markov kernels P and P̃ of our M/M/1 queue. Specifically, we insert the individual bounds provided in Lemma 0.1, Lemma 0.2, Lemma 0.3 and Lemma 0.4, which yields the following result.

Theorem 0.3. Let ρ < 1. For β ∈ B such that

    Δ(β) < (1 − τ(β)) / c(β),

it holds that

    ‖π − π̃‖_{υ_β} ≤ c_0(β) c(β) Δ(β) / (1 − τ(β) − c(β) Δ(β)) = SSB(β).

Proof. Note that β ∈ B already implies c_0(β) < ∞ and τ(β) < 1. Hence, Lemma 0.2 and Lemma 0.4 apply.

Following the line of thought put forward in Section 2, see (1), we translate the norm bound in Theorem 0.3 into bounds for individual performance measures f.

Corollary 0.1. Under the conditions put forward in Theorem 0.3, it holds for any f such that ‖f‖_{υ_β} < ∞ that

    |π f − π̃ f| ≤ ‖f‖_{υ_β} × SSB(β) = h(ε, β).

Note that the bound in Corollary 0.1 has β as a free parameter. This gives the opportunity to minimize the right-hand side of the inequality in Corollary 0.1 with respect to β. For a given ρ, this leads to the following optimization problem:

    min_{β ∈ B} h(ε, β)   s.t.   Δ(β) < (1 − τ(β)) / c(β).

For ε > 0 small, all inequalities can be made strict, and the above optimization problem can be solved using any standard technique.
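Since Δ(β), c_0(β), c(β) and τ(β) are explicit functions of λ, μ, ε and β, the bound SSB(β) and the objective h(ε, β) can be evaluated directly and minimized over β by a simple grid search. The sketch below is ours, not part of the paper; the function names are hypothetical, and the numbers it produces need not coincide exactly with the tables in Section 4, which may reflect additional implementation choices not spelled out here.

```python
import numpy as np

def ssb_bound(lam, mu, eps, beta):
    """SSB(beta) of Theorem 0.3; returns None when condition (C) or the theorem's condition fails."""
    rho = lam / mu
    if not (rho < 1 and 1 < beta < 1 / rho):                             # condition (C)
        return None
    delta = lam * (1 + beta) * eps / ((lam + mu) * (lam + mu + eps))     # Delta(beta), Lemma 0.1
    c0 = (1 - rho) / (1 - rho * beta)                                    # c0(beta), Lemma 0.2
    c = 1 + c0                                                           # c(beta), Lemma 0.3
    tau = lam * beta / (lam + mu) + mu / (beta * (lam + mu))             # tau(beta), Lemma 0.4
    if delta >= (1 - tau) / c:                                           # condition of Theorem 0.3
        return None
    return c0 * c * delta / (1 - tau - c * delta)

def h_bound(lam, mu, eps, beta):
    """h(eps, beta) of Corollary 0.1 for f(s) = s, with ||f|| = beta^(-1/ln beta) / ln beta."""
    ssb = ssb_bound(lam, mu, eps, beta)
    if ssb is None:
        return np.inf
    f_norm = beta ** (-1.0 / np.log(beta)) / np.log(beta)
    return f_norm * ssb

lam, mu, eps = 0.1, 1.0, 0.1                     # light-traffic setting, rho = 0.1
betas = np.linspace(1.001, mu / lam - 1e-3, 2000)
values = np.array([h_bound(lam, mu, eps, b) for b in betas])
beta_opt = betas[np.argmin(values)]
print(beta_opt, ssb_bound(lam, mu, eps, beta_opt), h_bound(lam, mu, eps, beta_opt))
```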
4 Numerical Examples

In this section we apply the bounds put forward in Theorem 0.3 and Corollary 0.1. For the numerical examples we set μ = 1. We discuss the following three cases in detail: the light traffic case ρ = 0.1, the medium traffic case ρ = 0.6, and the heavy traffic case ρ = 0.9. To illustrate the application of Corollary 0.1 to a particular performance function, we take f(s) = s, the identity mapping. In words, we are interested in the effect of perturbing the service rate by ε on the mean queue length. It is worth noting that in this case

    ‖f‖_{υ_β} = β^{−1/ln(β)} / ln(β).

The light traffic case: We let ρ = 0.1. To apply our bounds, we compute the value βopt that minimizes h(ε, β). Then we compute the bounds put forward in Theorem 0.3 and Corollary 0.1 for various values of ε. The numerical results are presented in Table 1.

Table 1 Comparison of bounds and true values for ρ = 0.1

    ε      βopt     ‖π − π̃‖_β bound   ‖π − π̃‖_β true   |πf − π̃f| bound   |πf − π̃f| true
    0.01   2.6500   0.0021             5.0481e−04        7.7964e−04         1.9056e−04
    0.1    2.6000   0.0204             0.0049            0.0079             0.0019
    1      3        0.2641             0.0539            0.0884             0.0180

The medium traffic case: We let ρ = 0.6. In a similar manner, the mapping h(ε, β) is minimized at βopt. The perturbation ε = 1 is excluded for the medium traffic case. The numerical results are presented in Table 2, where the symbol "x" indicates that our bounds are not applicable.

Table 2 Comparison of bounds and true values for ρ = 0.6

    ε      βopt     ‖π − π̃‖_β bound   ‖π − π̃‖_β true   |πf − π̃f| bound   |πf − π̃f| true
    0.01   1.2600   0.0945             0.0041            0.1504             0.0065
    0.1    1.2000   1.6245             0.0294            3.2778             0.0594
    1      x        x                  x                 x                  x

The heavy traffic case: We let ρ = 0.9. The mapping h(ε, β) is minimized at βopt. As in the medium traffic case, ε = 1 is excluded. The numerical results are presented in Table 3, where the symbol "x" indicates that our bounds are not applicable.

Table 3 Comparison of bounds and true values for ρ = 0.9

    ε      βopt     ‖π − π̃‖_β bound   ‖π − π̃‖_β true   |πf − π̃f| bound   |πf − π̃f| true
    0.01   1.0480   0.2783             0.0021            2.1837             0.0166
    0.1    1.0500   2.2340             0.0090            16.8448            0.0680
    1      x        x                  x                 x                  x
From these numerical results it is easy to see that the values of our bounds increase as the perturbation parameter ε increases. Indeed, it is natural that the M/M/1 queue with service rate μ + ε is close to the M/M/1 queue with the same arrival flow and service time distribution when ε tends to zero. Moreover, we note the remarkable sensitivity of the Strong Stability Approach to variations of the perturbation parameter ε when compared with the true distance: the bound obtained by the Strong Stability Approach becomes sharper, i.e. closer to the true distance, as the parameter ε tends to zero.
5 Concluding Remarks

The only input required from the perturbed queue is the distance in υ-norm between P and P̃. In this paper, we discussed an example where P and P̃ are Markov chains and a bound on ‖P − P̃‖_υ can be computed. On the other hand, as π̃ is known in closed form, one can compute ‖π̃ − π‖_υ directly, and there is no imminent reason for applying our approach other than the tutorial value of explaining the method on a simple example. As further research, we will show how to estimate ‖P − P̃‖_υ from a single sample path of the G/G/1 queue.
References

Abbas K, Aïssani D (2010a) Strong stability of the embedded Markov chain in an GI/M/1 queue with negative customers. Applied Mathematical Modelling 34(10):2806–2812
Abbas K, Aïssani D (2010b) Structural perturbation analysis of a single server queue with breakdowns. Stochastic Models 26(1):78–97
Albin S (1982) On Poisson approximations for superposition arrival processes in queues. Management Science 28(2):126–137
Albin SL (1984) Analyzing M/M/1 queues with perturbations in the arrival process. The Journal of the Operational Research Society 35(4):303–309
Altman E, Avrachenkov KE, Núñez-Queija R (2004) Perturbation analysis for denumerable Markov chains with application to queueing models. Advances in Applied Probability 36(3):839–853
Anisimov V (1988) Estimates for the deviations of the transition characteristics of nonhomogeneous Markov processes. Ukrainian Mathematical Journal 40(6):588–592
Aïssani D, Kartashov N (1983) Ergodicity and stability of Markov chains with respect to operator topology in the space of transition kernels. In: Doklady Akademii Nauk Ukrainskoi SSR (seriya A), vol 11, pp 3–5
Bouallouche-Medjkoune L, Aïssani D (2006) Performance analysis approximation in a queueing system of type M/G/1. Mathematical Methods of Operations Research 63(2):341–356
Cho G, Meyer C (2001) Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra and its Applications 335(1–3):137–150
Fricker C, Guillemin F, Robert P (2009) Perturbation analysis of an M/M/1 queue in a diffusion random environment. Queueing Systems: Theory and Applications 61(1):1–35
Heidergott B (2008) Perturbation analysis of Markov chains. In: Proceedings of the International Workshop on DES, Göteborg, Sweden
Heidergott B, Hordijk A (2003) Taylor series expansions for stationary Markov chains. Advances in Applied Probability 35(4):1046–1070
Heidergott B, Hordijk A, van Uitert M (2007) Series expansions for finite-state Markov chains. Probability in the Engineering and Informational Sciences 21(3):381–400
Hordijk A, Spieksma F (1994) A new formula for the deviation matrix. In: Kelly F (ed) Probability, Statistics and Optimization, Wiley
Kartashov N (1986) Strongly stable Markov chains. Journal of Soviet Mathematics 34:1493–1498
Kartashov N (1996) Strong Stable Markov Chains. VSP/TbiMC, Utrecht/Kiev
Kirkland S (2002) On a question concerning condition numbers for Markov chains. SIAM Journal on Matrix Analysis & Applications 23(4):1109–1119
Kotzurek M, Stoyan D (1976) A quantitative continuity theorem for mean stationary waiting time in GI/G/1. Math Operationsforsch Statist 7:595–599
Mitrophanov AY (2005) Sensitivity and convergence of uniformly ergodic Markov chains. Journal of Applied Probability 42(4):1003–1014
Neumann M, Xu J (2004) Improved bounds for a condition number for Markov chains. Linear Algebra and its Applications 386:225–241, Special Issue on the Conference on the Numerical Solution of Markov Chains 2003
Rabta B, Aïssani D (2005) Strong stability in an (R,s,S) inventory model. International Journal of Production Economics 97(2):159–171
Rachev S (1989) The problem of stability in queueing theory. Queueing Systems 4:287–318
Whitt W (1981) Quantitative continuity results for the GI/G/1 queue. Tech. rep., Bell Laboratories Report
Zolotarev V (1977) General problems of the stability of mathematical models. Proceedings of the session of the 41st International Statistical Institute, New Delhi
Series Expansions in Queues with Server Vacation
Fazia Rahmoune and Djamil Aïssani
Abstract This paper provides series expansions of the stationary distribution of finite Markov chains. The work presented is part of a research project on numerical algorithms based on series expansions of Markov chains with finite state space S. We are interested in the performance of a stochastic system when some of its parameters or characteristics are changed. This leads to an efficient numerical algorithm for computing the stationary distribution. Numerical examples are given to illustrate the performance of the algorithm, and numerical bounds are provided for quantities arising in models such as manufacturing systems (to optimize the requirement policy) and reliability models (to optimize the preventive maintenance policy), after modelling them as vacation queueing systems.
1 Introduction

Let P denote the transition kernel of a Markov chain defined on a finite state space S having unique stationary distribution πP. Let Q denote the Markov transition kernel of the Markov chain modelling the alternative system and assume that Q has unique stationary distribution πQ. The question about the effect of switching from P to Q on the stationary behaviour is expressed by πP − πQ, the difference between the stationary distributions (Heidergott and Hordijk, 2003). In this work, we show that the performance measures of some stochastic models which are governed by a finite Markov chain can be obtained from the performance of simpler models via the series expansion method. Let ‖·‖_tv denote the total variation norm; then the above problem can be phrased as follows: Can ‖πP − πQ‖_tv be approximated or bounded
[email protected] Djamil A¨ıssani e-mail: lamos
[email protected] G. Reiner (ed.), Rapid Modelling and Quick Response, c Springer-Verlag London Limited 2010 DOI 10.1007/978-1-84996-525-5 2,
17
18
Fazia Rahmoune and Djamil A¨ıssani
in terms of ‖P − Q‖_tv? This is known as perturbation analysis of Markov chains (PAMC) in the literature. This paper is a continuation of the work in Rahmoune and Aïssani (2008), where quantitative estimates of performance measures were established via the strong stability method for some vacation queueing models. In this work, we will show that πP − πQ can be arbitrarily closely approximated by a polynomial in (Q − P)D_P, where D_P denotes the deviation matrix associated with P. Precise definitions and notations will be given later. The starting point is the representation of πQ given by

    πQ = Σ_{n=0}^{k} πP ((Q − P)D_P)^n + πQ ((Q − P)D_P)^{k+1},        (1)

for any k ≥ 0. This series expansion of πQ provides the means of approximating πQ by Q and entities given via the P Markov chain only. We obtain a bound for the remainder term working with the weighted supremum norm, denoted by ‖·‖_v, where v is some vector with positive non-zero elements, and for any w ∈ R^S

    ‖w‖_v = sup_{i∈S} |w(i)| / v(i),        (2)

see, for example, Meyn and Tweedie (1993). We will show that for our models

    | πQ(s) − Σ_{n=0}^{k} πP ((Q − P)D_P)^n (s) | ≤ d ‖((Q − P)D_P)^{k+1}‖_v

for any k ∈ N and any s ∈ S, where v can be any vector satisfying v(s) ≥ 1 for s ∈ S, and d is some finite computable constant. In particular, the above error bound can be computed without knowledge of πQ. The key idea of the approach is to solve for all k the optimization problem

    min ‖((Q − P)D_P)^k‖_v , subject to v(s) ≥ 1 for s ∈ S.        (3)

The vector v thus yields the optimal measure of the rate of convergence of the series in (1). Moreover, the series in (1) tends to converge extremely fast, which is due to the fact that in many examples v can be found such that ‖((Q − P)D_P)^k‖_v << 1. The limit of the series (1) first appeared in Cao (1998); however, neither upper bounds for the remainder term nor numerical examples were given there. The derivation of this expansion has been carried out in Heidergott and Hordijk (2003), which is a generalization of Cao (1998). The use of series expansions for computational purposes is not new; it has been used in the field of linear algebra (Cho and Meyer, 2001). The work presented in this paper is part of a research project on numerical algorithms based on series expansions of Markov chains, as in Heidergott and Hordijk (2003). The present paper establishes the main theoretical results. In particular, numerical examples are provided for vacation queueing systems.
2 Preliminaries on Finite Markov Chains

Let S denote a finite set {1, ..., S}, with 0 < S < ∞ elements. We consider Markov kernels on the state space S, where the Markov kernel P^n is simply obtained by taking the nth power of P. Provided it exists, we denote the unique stationary distribution of P by πP and its ergodic projector by ΠP. For simplicity, we identify πP and πQ with ΠP and ΠQ, respectively. Throughout the paper, we assume that P is aperiodic and unichain, which means that there is one closed irreducible set of states and a set of transient states. Let |A|(i, j) denote the (i, j)th element of the matrix of absolute values of A ∈ R^{S×S}; additionally, we use the notation |A| for the matrix of absolute values of A. The main tool for this analysis is the v-norm, as defined in (2). For a matrix A ∈ R^{S×S} the v-norm is given by

    ‖A‖_v = sup_{i, ‖w‖_v ≤ 1} ( Σ_{j=1}^{S} |A(i, j) w(j)| ) / v(i).

Next we introduce v-geometric ergodicity of P; see Meyn and Tweedie (1993) for details.

Definition 0.1. A Markov chain P is v-geometrically ergodic if c < ∞, β < 1 and N < ∞ exist such that

    ‖P^n − ΠP‖_v ≤ c β^n, for all n ≥ N.

The following lemma shows that any finite-state aperiodic Markov chain is v-geometrically ergodic.

Lemma 0.1. For finite-state and aperiodic P, a finite number N exists such that

    ‖P^n − ΠP‖_v ≤ c β^n, for all n ≥ N,

where c < ∞ and β < 1.

Proof. This follows from the finite state space and aperiodicity.
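For a finite state space the v-norms used above reduce to simple weighted maxima and sums, since the supremum over ‖w‖_v ≤ 1 is attained at w = ±v. The following small helpers (our notation, not from the paper) make the definitions concrete:

```python
import numpy as np

def v_norm_function(w, v):
    """||w||_v = max_i |w(i)| / v(i), the weighted supremum norm of a function w, as in (2)."""
    return np.max(np.abs(w) / v)

def v_norm_measure(mu, v):
    """v-norm of a (signed) measure mu on a finite S: sum_s v(s) |mu(s)|."""
    return np.sum(v * np.abs(mu))

def v_norm_matrix(A, v):
    """||A||_v = max_i (sum_j |A(i,j)| v(j)) / v(i), the operator v-norm on a finite S."""
    return np.max((np.abs(A) @ v) / v)

# Small illustrative example with v(i) >= 1.
v = np.array([1.0, 2.0, 4.0])
A = np.array([[0.5, 0.5, 0.0], [0.2, 0.3, 0.5], [0.0, 0.4, 0.6]])
print(v_norm_matrix(A, v))
```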
3 Series Expansions in Queues with Server Vacation

We are interested in the performance of a queueing system with a single vacation of the server when some of its parameters or characteristics are changed. The system as given is modelled as a Markov chain with kernel P, the changed system with kernel Q. We assume that both Markov chains have a common finite state space S. We assume, too, as indicated earlier, that both Markov kernels are aperiodic and unichain. The goal of this section is to obtain the stationary distribution of Q, denoted by πQ, via a series expansion in P. In the following subsections, we comment on the speed of convergence of this series and summarize our results in an algorithm. Finally, we illustrate our approach with numerical examples.

3.1 Description of the Models

Let us consider the M/G/1//N queueing system with multiple vacations of the server, modelling a reliability system with multiple preventive maintenances. We suppose that there are N machines in total in the system. Our system consists of a source and a waiting system (queue + service). Each machine is either in the source or in the waiting system at any time. A machine in the source arrives at the waiting system for repair (corrective maintenance), with the inter-failure durations exponentially distributed with parameter λ. The distribution of the service time is general, with distribution function B(.) and mean b. The repairman takes a maintenance period each time the system becomes empty. If the server returns from maintenance and finds the queue empty, he takes another maintenance period (multiple maintenance).

In addition, let us consider the M/G/1//N queueing system with a unique vacation of the server, modelling a reliability system with periodic preventive maintenance, having the same inter-arrival and repair time distributions as described above. In this model, the server (repairman) waits until the end of the next activity period, during which at least one customer is served, before beginning another maintenance period. In other words, there is exactly one maintenance at the end of each activity period, each time the queue becomes empty (exhaustive service). If the server returns from maintenance and finds the queue non-empty, the maintenance period ends and another activity period begins. We also suppose that the maintenance times V of the server are independent and identically distributed, with general distribution function denoted V(x).
3.2 Transition Kernels

Let Xn (resp. X̄n) denote the Markov chain imbedded at the repair completion epochs tn of the nth machine, associated with the M/G/1//N system with multiple maintenance (resp. with the system with unique maintenance). In the same way, we define the following probabilities:

    f_k = P[k broken-down machines at the end of the preventive maintenance period]
        = C(N,k) ∫_0^∞ (1 − e^{−λt})^k e^{−(N−k)λt} dV(t),   k = 0, ..., N,        (4)

for which

    ᾱ_k = f_0 + f_1   for k = 1,
    ᾱ_k = f_k         for 2 ≤ k ≤ N,

and

    α_k = f_k / (1 − f_0)   for k = 1, ..., N.

The one-step transition probabilities of the imbedded Markov chains Xn and X̄n allow us to describe the general expressions of the transition kernels P = (P_{ij}) and Q = (Q_{ij}), respectively, summarized below:

    P_{ij} = Σ_{k=1}^{j+1} P_{j−k+1} α_k    if i = 0, j = 0, ..., N−1,
    P_{ij} = P_{j−i+1}                      if 1 ≤ i ≤ j+1 ≤ N−1,
    P_{ij} = 0                              otherwise,

    Q_{ij} = Σ_{k=1}^{j+1} P_{j−k+1} ᾱ_k    if i = 0, j = 0, ..., N−1,
    Q_{ij} = P_{j−i+1}                      if 1 ≤ i ≤ j+1 ≤ N−1,
    Q_{ij} = 0                              otherwise.

Clearly, the Markov chain {X̄n}_{n∈N} is irreducible and aperiodic with finite state space S = {0, 1, ..., N−1}. So we can apply the main theoretical results established in this paper to this model, in order to approximate another Markov chain whose transition kernel lies in a neighbourhood of its transition kernel Q.
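With an explicit maintenance-time distribution, the probabilities f_k in (4) can be evaluated by numerical quadrature, and the two kernels can then be assembled from the α_k, ᾱ_k and the quantities P_n appearing in the kernel rows (the distribution of the number of new failures during one repair time, which is not specified in this excerpt and is therefore treated as a given input vector p[n] here). The rough sketch below is ours, assumes an exponential maintenance time, uses SciPy for the integral, and uses a purely illustrative placeholder for p:

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def f_probs(N, lam, maintenance_pdf):
    """f_k = C(N,k) * int_0^inf (1 - e^{-lam t})^k e^{-(N-k) lam t} dV(t), k = 0..N  (eq. (4))."""
    f = np.zeros(N + 1)
    for k in range(N + 1):
        integrand = lambda t, k=k: (1 - np.exp(-lam * t)) ** k * np.exp(-(N - k) * lam * t) \
                                   * maintenance_pdf(t)
        f[k] = comb(N, k) * quad(integrand, 0, np.inf)[0]
    return f

def kernels(N, lam, maintenance_pdf, p):
    """Assemble P (multiple maintenance) and Q (single maintenance) on states 0..N-1,
    following the kernel expressions above; p[n] is an assumed input, see the text."""
    f = f_probs(N, lam, maintenance_pdf)
    alpha = f[1:] / (1 - f[0])              # alpha_k, k = 1..N
    alpha_bar = f[1:].copy()                # alpha_bar_k, k = 1..N
    alpha_bar[0] = f[0] + f[1]
    def row0(a):
        return [sum(p[j - k + 1] * a[k - 1] for k in range(1, j + 2)) for j in range(N)]
    P = np.zeros((N, N)); Q = np.zeros((N, N))
    P[0, :], Q[0, :] = row0(alpha), row0(alpha_bar)
    for i in range(1, N):
        for j in range(i - 1, N - 1):       # 1 <= i <= j+1 <= N-1, as printed above
            P[i, j] = Q[i, j] = p[j - i + 1]
    return P, Q

# Example: N = 5 machines, failure rate lam = 2, exponential maintenance time with rate 300.
N, lam, mu_v = 5, 2.0, 300.0
pdf = lambda t: mu_v * np.exp(-mu_v * t)
p = np.array([0.4, 0.3, 0.15, 0.1, 0.04, 0.01])   # placeholder values; the true p depends on B(.)
P, Q = kernels(N, lam, pdf, p)
```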
3.3 Series Development for πQ

We write D_P for the deviation matrix associated with P; in symbols,

    D_P = Σ_{m≥0} (P^m − Π_P).        (5)

Note that D_P is finite for any aperiodic finite-state Markov chain. Moreover, the deviation matrix can be rewritten as

    D_P = Σ_{m≥0} (P − Π_P)^m − Π_P,

where Σ_{m≥0} (P − Π_P)^m is often referred to as the group inverse; see for instance Cao (1998) or Coolen-Schrijner and van Doorn (2002). A general definition, which is also valid for periodic Markov chains, can be found in, e.g., Puterman (1994). Let P be unichain. Using the definition of D_P, we obtain

    (I − P) D_P = I − Π_P.

This is the Poisson equation in matrix format. Consider the following equation:

    Π_Q = Π_P Σ_{n=0}^{k} ((Q − P)D_P)^n + Π_Q ((Q − P)D_P)^{k+1},        (6)

for k ≥ 0, where

    H(k) = Π_P Σ_{n=0}^{k} ((Q − P)D_P)^n

is called a series approximation of degree k for Π_Q,

    T(k) = Π_P ((Q − P)D_P)^k        (7)

denotes the kth element of H(k), and

    R(k) = Π_Q ((Q − P)D_P)^{k+1}        (8)

is called the remainder term (see Heidergott et al, 2007, for details). The quality of the approximation provided by H(k) is given through the remainder term R(k).
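For a finite aperiodic unichain kernel, the deviation matrix in (5) does not need to be obtained by summing the series: it can be computed through the standard fundamental-matrix identity D_P = (I − P + Π_P)^{-1} − Π_P, which is not stated in the paper but is equivalent to (5). The following sketch (our helper names) computes D_P this way and checks the Poisson equation (I − P)D_P = I − Π_P:

```python
import numpy as np

def stationary_projector(P):
    """Pi_P: every row equal to the stationary distribution of the unichain kernel P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.tile(pi, (n, 1))

def deviation_matrix(P):
    """D_P = sum_{m>=0}(P^m - Pi_P), evaluated as (I - P + Pi_P)^{-1} - Pi_P."""
    n = P.shape[0]
    Pi = stationary_projector(P)
    return np.linalg.inv(np.eye(n) - P + Pi) - Pi

P = np.array([[0.2, 0.8, 0.0],
              [0.5, 0.0, 0.5],
              [0.3, 0.3, 0.4]])
D = deviation_matrix(P)
Pi = stationary_projector(P)
print(np.allclose((np.eye(3) - P) @ D, np.eye(3) - Pi))   # Poisson equation check: True
```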
3.4 Series Convergence

In this section we investigate the limiting behaviour of H(k) as k tends to ∞. We first establish sufficient conditions for the existence of the series.

Lemma 0.2. (Heidergott and Hordijk, 2003) The following assertions are equivalent:

(i) The series Σ_{k≥0} ((Q − P)D_P)^k is convergent.
(ii) There are N and δ_N ∈ (0, 1) such that ‖((Q − P)D_P)^N‖_v < δ_N.
(iii) There are κ and δ < 1 such that ‖((Q − P)D_P)^k‖_v < κ δ^k for any k.
(iv) There are N and δ ∈ (0, 1) such that ‖((Q − P)D_P)^k‖_v < δ^k for any k ≥ N.

Proof. See Heidergott and Hordijk (2003).

The fact that the maximal eigenvalue of |(Q − P)D_P| is smaller than 1 is necessary for the convergence of the series Σ_{k≥0} ((Q − P)D_P)^k.

Remark 0.1. Existence of the limit of H(k), see (i) in Lemma 0.3, is equivalent to an exponential decay in the v-norm of the elements of the series, see (iv) in Lemma 0.2. For practical purposes, one needs to identify the decay rate δ and the threshold value N after which the exponential decay occurs. Numerical experiments have shown that condition (ii) in Lemma 0.2 is the most convenient to work with. More specifically, we work with the following condition (C), as in Heidergott and Hordijk (2003), which is similar to the geometric series convergence criterion.

Condition (C): There exists a finite number N such that we can find δ_N ∈ (0, 1) which satisfies

    ‖((Q − P)D_P)^N‖_v < δ_N,

and we set

    c_{v,δ_N} = (1/(1 − δ_N)) Σ_{k=0}^{N−1} ‖((Q − P)D_P)^k‖_v.

As shown in the following lemma, the factor c_{v,δ_N} in condition (C) allows us to establish an upper bound for the remainder term that is independent of Π_Q.

Lemma 0.3. Under (C) it holds that:

(i) ‖R(k − 1)‖_v ≤ c_{v,δ_N} ‖T(k)‖_v for all k,
(ii) lim_{k→∞} H(k) = Π_P Σ_{n≥0} ((Q − P)D_P)^n = Π_Q.

Proof. To prove the lemma it suffices to use the definition of the norm ‖·‖_v and of the remainder term R(k − 1), together with condition (iv) of Lemma 0.2.

Remark 0.2. An example where the series H(k) fails to converge is illustrated in Heidergott and Hordijk (2003).

Remark 0.3. The series expansion for Π_Q put forward in assertion (ii) of Lemma 0.3 is well known; see Cao (1998) and Kirkland (2003) for the case of finite Markov chains and Heidergott and Hordijk (2003) for the general case. It is, however, worth noting that in the aforementioned papers the series was obtained via a differentiation approach, whereas the representation is derived in this paper from the elementary equation (6).

Remark 0.4. Provided that det(I − (Q − P)D_P) ≠ 0, one can obtain πQ from

    πQ = Π_P (I − (Q − P)D_P)^{−1}.        (9)

Moreover, provided that the limit

    lim_{k→∞} H(k) = lim_{k→∞} πP Σ_{n=0}^{k} ((Q − P)D_P)^n

exists (see Lemma 0.3 for sufficient conditions), it yields πQ as πP Σ_{n≥0} ((Q − P)D_P)^n.
Remark 0.5. Note that a sufficient condition for (C) is

    ‖(Q − P)D_P‖_v < δ, δ < 1.        (10)

In Altman et al (2004) and Cho and Meyer (2001) it is even assumed that

    ‖Q − P‖_v < g_1,        (11)

with g_1 > 0 a finite constant, and

    ‖D_P‖_v < c/(1 − β),        (12)

with c > 0 and 0 < β < 1 finite constants. If

    g_1 c/(1 − β) < 1,        (13)

then (10), and hence (C), is clearly fulfilled. Hence, for numerical purposes these conditions are too strong.
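In practice the sufficient condition (10) and the eigenvalue criterion mentioned after Lemma 0.2 are straightforward to check numerically once D_P is available (for instance from the earlier sketch); the following small helper is ours, not from the paper:

```python
import numpy as np

def convergence_checks(P, Q, D_P, v):
    """Return (condition (10) with the given v, necessary eigenvalue condition) for the series."""
    M = (Q - P) @ D_P
    v_norm = np.max((np.abs(M) @ v) / v)                       # ||(Q - P) D_P||_v
    spectral = np.max(np.abs(np.linalg.eigvals(np.abs(M))))    # maximal eigenvalue of |(Q - P) D_P|
    return v_norm < 1.0, spectral < 1.0
```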
3.5 Remainder Term Bounds

The quality of the approximation by H(k − 1) is given by the remainder term R(k − 1), and in applications v should be chosen such that it minimizes c_{v,δ_N} ‖T(k)‖_v, thus minimizing our upper bound for the remainder term. For finding an optimal upper bound, since c_{v,δ_N} is independent of k, we focus on T(k). Specifically, we have to find a bounding vector v that minimizes ‖T(k)‖_v uniformly w.r.t. k. As the following theorem shows, the unit vector, denoted by 1, with all components equal to one, yields the minimal value of ‖T(k)‖_v for any k.

Theorem 0.1. (Heidergott and Hordijk, 2003) The unit vector 1 minimizes ‖T(k)‖_v uniformly over k, i.e.,

    ∀ k ≥ 1 :  inf_v ‖T(k)‖_v = ‖T(k)‖_1.        (14)

Remark 0.6. It can be shown, as for the results in Altman et al (2004) and Cho and Meyer (2001), that the smallest c g_1/(1 − β) is precisely the maximal eigenvalue of |D_P|. Again we note that often the product of these maximal eigenvalues is not smaller than 1. If this is the case, then according to Altman et al (2004) and Cho and Meyer (2001) we cannot decide whether the series H(k) converges to Π_Q. Hence, their condition is too restrictive for numerical purposes.
3.6 Algorithm

In this section we describe a numerical approach to computing our upper bound for the remainder term R(k). We search for N such that $\delta_N \overset{\mathrm{def}}{=} \| ((Q-P)D_P)^N \|_1 < 1$, which implies that condition (C) holds for N and $\delta_N$. The upper bound for R(k) is then obtained from $c^1_{\delta_N} \| ((Q-P)D_P)^{k+1} \|_1$. Based on the above, the algorithm that yields an approximation of $\pi_Q$ with precision $\varepsilon$ consists of two main parts. First $c^1_{\delta_N}$ is computed. Then the series is computed iteratively until the predefined level of precision is reached.

The Algorithm
Choose a precision $\varepsilon > 0$. Set k = 1, $T(1) = \Pi_P (Q-P)D_P$ and $H(0) = \Pi_P$.
Step 1: Find N such that $\| ((Q-P)D_P)^N \|_1 < 1$. Set $\delta_N = \| ((Q-P)D_P)^N \|_1$ and compute
$$ c^1_{\delta_N} = \frac{1}{1-\delta_N} \sum_{k=0}^{N-1} \bigl\| ((Q-P)D_P)^k \bigr\|_1 . $$
Step 2: If
$$ c^1_{\delta_N} \| T(k) \|_1 < \varepsilon , $$
the algorithm terminates and H(k-1) yields the desired approximation. Otherwise, go to Step 3.
Step 3: Set H(k) = H(k-1) + T(k). Set k := k+1 and $T(k) = T(k-1)(Q-P)D_P$. Go to Step 2.
Remark 0.7. The algorithm terminates in a finite number of steps, since $\sum_{k=0}^{\infty} \| ((Q-P)D_P)^k \|_1$ is finite.
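The steps above translate directly into a small numerical routine. The following Python sketch is ours, not taken from the paper: it assumes that the transition matrices P and Q, the stationary distribution pi_P and the deviation matrix D_P are already available as NumPy arrays, and it works with the row vector pi_P instead of the projector Pi_P (equivalent, since all rows of Pi_P equal pi_P). With the bounding vector v = 1, the weighted norm of a matrix reduces to the maximum absolute row sum.

```python
import numpy as np

def v1_norm(A):
    # v-norm with bounding vector v = 1: maximum absolute row sum
    # (for a row vector it reduces to the sum of absolute entries).
    A = np.atleast_2d(A)
    return np.abs(A).sum(axis=1).max()

def approximate_pi_q(pi_p, D_P, P, Q, eps=1e-2, n_max=1000):
    """Series-expansion approximation of pi_Q (illustrative sketch)."""
    M = (Q - P) @ D_P

    # Step 1: find N with ||M^N||_1 < 1 and compute c^1_{delta_N}.
    Mk = np.eye(M.shape[0])
    partial_sum, N = 0.0, 0
    while True:
        partial_sum += v1_norm(Mk)
        Mk = Mk @ M
        N += 1
        if v1_norm(Mk) < 1.0:
            break
        if N > n_max:
            raise RuntimeError("condition (C) could not be verified numerically")
    delta_N = v1_norm(Mk)
    c1 = partial_sum / (1.0 - delta_N)

    # Steps 2-3: accumulate the partial sums until the remainder bound < eps.
    h = pi_p.copy()          # H(0) = pi_P
    t = pi_p @ M             # T(1) = pi_P (Q - P) D_P
    while c1 * v1_norm(t) >= eps:
        h = h + t
        t = t @ M
    return h, N, delta_N, c1
```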
3.7 Numerical Application

The preceding sections established the main theoretical results, and the analysis applies to the optimization of preventive maintenance in repairable reliability models. Applying the algorithm step by step gives the following results. This part of the paper presents the theoretical and numerical results obtained with the series expansion method for the stationary distribution of the M/G/1//N queueing model with single server vacation, which models a reliability system with preventive maintenance.
Let S be the state space of the imbedded Markov chains $X_n$ and $\bar{X}_n$ of the two queueing systems under consideration. Both chains are irreducible and aperiodic with finite state space S, so they are v-geometrically ergodic. We denote by $D_P$ the deviation matrix associated with the chain $\bar{X}_n$, by $\pi_P$ its stationary distribution, and by $\Pi_P$ its stationary projector. Likewise, $\pi_Q$ is the stationary distribution of $X_n$, with projector $\Pi_Q$. We want to express $\pi_Q$ as a power series in $(Q-P)D_P$ and $\pi_P$ as follows:
$$ \pi_Q = \sum_{n=0}^{\infty} \pi_P \bigl((Q-P)D_P\bigr)^n . \qquad (15) $$
We now show that this series is convergent. Since the state space of both chains is finite, we first have the following elementary result.

Lemma 0.4. Let $X_n$ and $\bar{X}_n$ be the imbedded Markov chains of the M/G/1//N queueing system with server vacation and of the classical M/G/1//N system, respectively. Then there exists a finite number N such that
$$ \| P^n - \Pi_P \|_v \le c \beta^n , \quad \text{for all } n \ge N , \qquad (16) $$
where $c < \infty$ and $\beta < 1$.

For the same reasons we obtain the key result on the deviation matrix $D_P$ associated with the imbedded Markov chain $\bar{X}_n$.

Lemma 0.5. Let $\bar{X}_n$ be the imbedded Markov chain of the classical M/G/1//N queueing system and $D_P$ its deviation matrix. Then $D_P$ is finite.

Using Lemma 0.2, we obtain the following result on the required series expansion.

Lemma 0.6. Let $\pi_P$ (resp. $\pi_Q$) be the stationary distribution of the classical M/G/1//N system (resp. of the M/G/1//N system with single vacation), and let $D_P$ be the associated deviation matrix. Then the series
$$ \sum_{n=0}^{\infty} \pi_P \bigl((Q-P)D_P\bigr)^n \qquad (17) $$
converges normally, and hence uniformly.

This result is equivalent to saying that the remainder term R(k) converges uniformly to zero. From condition (C) and Lemma 0.3, the sum of the series (15) is the stationary vector $\pi_Q$.

Lemma 0.7. Let $\pi_P$ (resp. $\pi_Q$) be the stationary distribution of the classical M/G/1//N system (resp. of the M/G/1//N system with server vacation), and let $D_P$ be the associated deviation matrix. Then the series
$$ \pi_Q = \sum_{n=0}^{\infty} \pi_P \bigl((Q-P)D_P\bigr)^n \qquad (18) $$
converges uniformly to the stationary vector $\pi_Q$.

Following Heidergott and Hordijk (2003), we now describe the numerical computation of the upper bound of the remainder term R(k). We search for N such that $\delta_N = \| ((Q-P)D_P)^N \|_1 < 1$, which implies that condition (C) is verified for N and $\delta_N$. The upper bound for R(k) is then obtained from $c^1_{\delta_N} \| ((Q-P)D_P)^{k+1} \|_1$. The performance measure of interest is the stationary mean number of customers in the system. The input parameters are: $\bar{N} = 5$, $\lambda = 2$, service rate $\sim \mathrm{Exp}(\mu_s = 5)$, vacation rate $\sim \mathrm{Exp}(\mu_v = 300)$. Our goal is to compute approximately the quantity $\pi w$. The error made when predicting the stationary queue length via the quantities H(n) is illustrated in Fig. 1.
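The ingredients of this computation can be obtained directly from the two imbedded chains. The sketch below is illustrative only: it assumes the transition matrices P (classical M/G/1//N) and Q (system with vacation) have already been built from the model parameters above, which the paper does not spell out, and it uses the standard finite-state identities pi_P P = pi_P and D_P = (I - P + Pi_P)^(-1) - Pi_P.

```python
import numpy as np

def stationary_distribution(P):
    # Solve pi P = pi, sum(pi) = 1 for a finite ergodic chain.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def deviation_matrix(P):
    # D_P = (I - P + Pi_P)^(-1) - Pi_P for a finite ergodic chain.
    n = P.shape[0]
    pi = stationary_distribution(P)
    Pi = np.tile(pi, (n, 1))
    D = np.linalg.inv(np.eye(n) - P + Pi) - Pi
    return D, pi, Pi

def partial_sums(pi_p, D_P, P, Q, w, n_terms=20):
    # Returns H(n) w for n = 0, ..., n_terms-1, where w is a performance
    # vector (e.g. the number of customers in each state).
    M = (Q - P) @ D_P
    term = pi_p.copy()       # pi_P M^0
    h = np.zeros_like(pi_p)
    values = []
    for _ in range(n_terms):
        h = h + term
        values.append(float(h @ w))
        term = term @ M
    return values
```

Comparing values[n] with the exact value of pi_Q w (with pi_Q computed from Q via stationary_distribution) reproduces the type of error curve shown in Fig. 1.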
Fig. 1 Error in Average queue length
The figure shows the relative error
$$ \frac{|\, \pi w - H(n) w \,|}{\pi w} \qquad (19) $$
as a function of n. The numerical value of $\pi w$ is 2.4956. For this example we obtained N = 14, $\delta_N = 0.9197$ and $c^1_{\delta_N} = 201.2313$.

The algorithm terminates when the upper bound for $\| R(k) \|_1$, given by $c^1_{\delta_N} \| T(k) \|_1$, falls below the value $\varepsilon$. Taking $\varepsilon = 10^{-2}$, the algorithm computes $\pi w$ up to the precision $10^{-2} \| w \|_1$.
Fig. 2 Relative error of the upper bound of the remainder term
From this figure we conclude that $\pi_P \sum_{k=0}^{13} ((Q-P)D_P)^k w$ approximates $\pi w$ with a maximal absolute error of $\varepsilon \| w \|_1 = 3 \times 10^{-2}$.
4 Conclusion

In this work we have presented part of a research project on numerical algorithms based on series expansions of finite Markov chains. We are interested in the performance of a stochastic system when some of its parameters or characteristics are perturbed. This leads to an efficient numerical algorithm for computing the stationary distribution. We have shown, theoretically and numerically, that by introducing a small disturbance in the structure of the maintenance policy of the M/G/1//N system with multiple maintenances, modelled by a queue with server vacations, we obtain the M/G/1//N system with a single maintenance policy (periodic maintenance). The characteristics of this system can then be approximated by those of the M/G/1//N system with periodic maintenance, with a precision which depends on the disturbance, in other words on the value of the maintenance parameter.
References

Altman E, Avrachenkov KE, Núñez-Queija R (2004) Perturbation analysis for denumerable Markov chains with application to queueing models. Advances in Applied Probability 36(3):839–853
Cao XR (1998) The Maclaurin series for performance functions of Markov chains. Advances in Applied Probability 30(3):676–692
Cho G, Meyer C (2001) Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra and its Applications 335(1–3):137–150
Coolen-Schrijner P, van Doorn EA (2002) The deviation matrix of a continuous-time Markov chain. Probability in the Engineering and Informational Sciences 16(3):351–366
Heidergott B, Hordijk A (2003) Taylor series expansions for stationary Markov chains. Advances in Applied Probability 35(4):1046–1070
Heidergott B, Hordijk A, van Uitert M (2007) Series expansions for finite-state Markov chains. Probability in the Engineering and Informational Sciences 21(3):381–400
Kirkland S (2003) Conditioning properties of the stationary distribution for a Markov chain. Electronic Journal of Linear Algebra 10:1–15
Meyn S, Tweedie R (1993) Markov Chains and Stochastic Stability. Springer, London
Puterman M (1994) Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons
Rahmoune F, Aïssani D (2008) Quantitative stability estimates in queues with server vacation. Journal of Stochastic Analysis and Applications 26(3):665–678
Part II
Rapid Modelling in Manufacturing and Logistics
Optimal Management of Equipments of the BMT Containers Terminal (Bejaia's Harbor) Djamil Aïssani, Mouloud Cherfaoui, Smaïl Adjabi, S. Hocine and N. Zareb
Abstract The BMT (Bejaia Mediterranean Terminal) Company of Bejaia's harbor became aware that the performance of the container terminal is measured by the time of stopover, the speed of the operations, the quality of the service and the cost of container transit. To this end, the company has devoted several studies to analyzing the performance of its terminal: elaboration of a global model for the "loaded/unloaded" process, and modelling of the system by another approach which consists of decomposing the system into four independent subsystems (namely the "loading" process, the "unloading" process, the "full-stock" process and the "empty-stock" process; see Aïssani et al, 2009; Aïssani et al, 2009). The models used in this last study describe in detail the behaviour of the real systems, and the results given by the simulators corresponding to each model are approximately the same as the real values. This is the reason why the company wants to exploit these models in order to determine an optimal management of its equipment. Indeed, this work consists, more specifically, in determining the optimal number of trucks to be used in each process so as to minimize the waiting times of the trucks and of the GQ (Gantry of Quay). This is a multi-objective optimization problem, more precisely a stochastic bi-objective optimization problem. To this end, we have modelled the problem by an open network, which is the most suitable representation for this situation. After identification of the process parameters, we conclude that the model is an open network of general queues (G[X]/G/1, M/G/1, G/G/N/0, ...). In the literature there is no exact method for analyzing this kind of network. For this reason, we have established a simulation model that imitates the functioning of each system. Djamil Aïssani (B), Mouloud Cherfaoui and Smaïl Adjabi, Laboratory LAMOS, University of Bejaia, e-mail: lamos
[email protected] Mouloud Cherfaoui e-mail:
[email protected] Sma¨ıl Adjabi e-mail:
[email protected]
The simulations allowed us to evaluate the performance of the container park according to the number of trucks used, both under current conditions and in the case of variations in the flow of ship arrivals and in the service rate of the trucks. This allowed us to determine the optimal number of trucks to be used in the loading and unloading processes. We have also determined the performance of the stock under current conditions.
1 Introduction

The performance of a container terminal is measured by the time of stopover, the speed of the operations, the quality of the service and the cost of container transit. Therefore, in order to ensure the best functioning of the container terminal of the BMT Company (Bejaia Mediterranean Terminal - Bejaia's Harbor), performance evaluation studies were initiated. The first study was carried out in 2007 (Sait et al, 2007). Its objective was the global modelling of the unloading/loading process, and it showed that if the number of ships [with a mean size of 170 ETU (Equivalent Twenty Units)], which was 0.83 ships/day, increases to 1.4 ships/day, the full park will reach a saturation of 94%. The second study was carried out in 2009 (Aïssani et al, 2009; Aïssani et al, 2009). It suggested an alternative approach for modelling the system, which consists of decomposing the system into four independent subsystems, namely: the loading process, the unloading process, the full-stock process and the empty-stock process. The study showed that the container park can handle 116226 ETU for an entry rate of 0.6104 ships/day for the loading process and 0.7761 ships/day for the unloading process. The study also showed that, for a 30% increase in the number of ships arriving at the port of Bejaia, there is only a small increase in the average number of ships in the roads and at the quays. On the other hand, there is a clear increase in the total number of treated containers, which rises from 116226 ETU to 148996 ETU. There is also an increase in the average number of containers in the full park, which rises from 3372 to 4874 ETU. As for the number of treated ships, it rises from 240 to 305 ships at loading and from 296 to 382 ships at unloading. In the present work, we propose to complement this last study by trying to minimize the number of trucks used in the treatment of the ships. The interest of this analysis comes from the fact that the permanent increase in traffic constrains the BMT Company to exploit other quays. To this end, in order to make optimal use of the existing equipment, the problem is modelled by an open network, which leads to a multi-objective optimization problem. Because of the complexity and the unavailability of analytical methods for analyzing this type of model, we apply the simulation approach.
2 Park with containers and motion of the containers

In this section we give a brief description of the terminal of the BMT Company, presenting the different operations and movements of a container as well as the terminal's capacities and equipment.
2.1 Motions of the containers

Any container (ship) arriving at the terminal of the BMT Company passes through the following steps:
• The anchorage step: Any ship arriving at Bejaia's harbor is put on standby in the anchorage (roads) for a duration which varies from one ship to another, because of the occupation of the quay stations or the unavailability of pilot or tug boats.
• The service step:
  • Accosting service: The accosting of the ships is ensured by the operational sections of the Harbor Company of Bejaia, such as the piloting and towing section.
  • Vessel handling: The treatment of a ship is done mainly in three sub-steps:
    1. Service before operations: the preparatory step of the ship for handling (loading/unloading).
    2. Unloading/loading step: It consists of the unloading/loading of the containers. This is carried out with the two gantries of quay, whose carriages can lift the containers from the container ships and put them on trucks or, in the case of the loading process, lift the containers from the trucks and put them on board the container ship.
    3. Service after operations: the preparatory step of the ship for its departure from the quay.
• Deliveries: The delivery concerns the full containers or discharged goods. The means used to perform this operation are: RTG (Rubber Tyre Gantry), trucks, stackers and forklifts if necessary.
• Restitution of the containers: At the restitution of the (empty) containers, two zones are intended for storage, one for empty containers of 20 units and the other for empty containers of 40 units.
2.2 The BMT park with containers: capacity and equipment

The terminal of the BMT Company is provided with four quays of 500 m (currently only two are in exploitation), a draught of 12 m starting from the channel, and a storage capacity of 10300 ETU. The container terminal of Bejaia offers
specialized installations for refrigerated containers and dangerous products. Moreover, this terminal is the only container terminal in Algeria that is sufficiently equipped, with specialized handling and lifting equipment (gantries of quay, RTGs, ...) which can reduce the times of stopover, making it possible to reduce waiting and to fulfill the requirements of the operators (see Table 1).

Table 1 Characteristics and equipments of the Terminal of BMT Company
Quay / Anchorage: Length: 500 m; Depth: 12 m; Basin surface: 60 h; Quays: 4; Utilisation rate of the quay: 70%
Full Park: Capacity: 8300 ETU; Area: 68500 m2
Empty Park: Capacity: 900 ETU; Area: 15200 m2
Refrigerating Park: Capacity: 500 Catches; Area: 2800 m2
Zone for Discharge / Potting: Capacity: 600 ETU; Area: 3500 m2
Gantry of Quay: Number: 2; Tonnage: 40 Tons; Type: Post Panamax
RTG (Rubber Tyre Gantry): Number: 5; Tonnage: 36 Tons; Stacking: 6+1 on the ground and 4+1 in height
Stackers: Number: 4; Tonnage: 36 Tons
Spreaders: Number: 4; Tonnage: 10 Tons
Lifting trucks: Numbers: 02 of 03 Tons, 02 of 05 Tons, 02 of 10 Tons and 02 of 28 Tons
Truck-Tug: Numbers: 8 of 60 Tons and 4 of 32 Tons
3 Mathematical Models

After analyzing the main movements of a container at the level of BMT's terminal, we chose to model the problem by a queueing network, which is the most suitable approach for this type of situation. We thus obtained four models, namely: the empty-stock, the full-stock, the loading and the unloading processes, which are given respectively by Figures 1 and 2.
Fig. 1 Diagram of the models of the storage processes
Fig. 2 Diagram of the models of the ship’s treatments (unloaded/loaded process)
4 Calculation of the Forecasts

In February 2009 a forecasting exercise was carried out. The series considered is the number of containers treated (loaded/unloaded) in ETU. The data used were collected monthly over the period from January 2006 to February 2009. The method used for the calculation of the forecasts is the exponential smoothing method (David and Michaud, 1983). Figure 3 and Table 2 show the original series of the number of containers in ETU, as well as the forecasts (from March to December 2009). It can thus be noted that the objective that the BMT Company had fixed at the beginning of the year was likely to be achieved.
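The paper does not report the smoothing constant or the exact variant of exponential smoothing used, so the following Python sketch only illustrates the simplest form of the method (single exponential smoothing, with an assumed smoothing constant alpha). Trend-corrected variants would follow the same recursive pattern.

```python
def exponential_smoothing(series, alpha=0.3):
    """Single exponential smoothing; returns the one-step-ahead forecasts.

    series : list of monthly observations (e.g. containers treated, in ETU)
    alpha  : smoothing constant in (0, 1) -- assumed here, not given in the paper
    """
    forecasts = [series[0]]              # initialise with the first observation
    for y in series[1:]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

# Example: a forecast for March 2009 from the observations up to February 2009
# (2008 monthly values plus January-February 2009, taken from Table 2).
history = [9695, 9928, 9882, 8791, 10155, 8799, 9338, 9304,
           9171, 8779, 10984, 11596, 10066, 11448]
print(exponential_smoothing(history)[-1])
```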
5 Performance Evaluation of the BMT Terminal First, we will conduct a statistical analysis to identify the network corresponding to our system.
Table 2 Original series and forecasts of the number of containers to be treated (ETU) in the year 2009

Months      Historic 2006   2007    2008    2009    Forecast 2009
January     4938            6102    9695    10066   -
February    6006            10083   9928    11448   -
March       6445            8565    9882    -       11579.74
April       5604            9535    8791    -       11941.29
May         6519            8938    10155   -       12314.13
June        5909            8337    8799    -       12698.61
July        6041            7582    9338    -       13095.09
August      7552            7245    9304    -       13503.96
September   5915            8135    9171    -       13925.59
October     5938            7982    8779    -       14360.38
November    7858            7579    10984   -       14808.75
December    7636            9971    11596   -       15271.12
Total                                                133498.7
Fig. 3 Graph of original series and forecasts of the number of containers to be treated (ETU) in the year 2009
5.1 Statistical Analysis and Identification of Models

The results of the preliminary statistical analysis (parameter estimation and goodness-of-fit tests) on the collected data, carried out for the identification of the process parameters, are summarized in Table 3. According to this preliminary analysis, we conclude that the performance evaluation of the terminal of Bejaia is really a complex problem. Indeed, the system is modelled by an open network of general queues, because it consists of queues of type G[X]/G/1, M/G/1, G/G/N/0, with blocking, etc. Therefore, we cannot use analytical methods (as for Jackson or BCMP networks) to obtain the characteristics of the system. This is why we call upon the simulation approach to solve the problem.
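Fits such as those reported in Table 3 can be reproduced with standard statistical tooling. The snippet below is a generic illustration (the raw inter-arrival and service samples themselves are not published in the paper): it fits an exponential and a normal candidate distribution to a sample and runs a Kolmogorov-Smirnov test for each.

```python
import numpy as np
from scipy import stats

def fit_and_test(sample):
    """Fit exponential and normal candidates and report KS p-values."""
    results = {}

    loc, scale = stats.expon.fit(sample, floc=0)          # scale = estimated mean
    _, p_exp = stats.kstest(sample, 'expon', args=(loc, scale))
    results['exponential'] = {'mean': scale, 'ks_pvalue': p_exp}

    mu, sigma = stats.norm.fit(sample)
    _, p_norm = stats.kstest(sample, 'norm', args=(mu, sigma))
    results['normal'] = {'mu': mu, 'sigma': sigma, 'ks_pvalue': p_norm}

    return results

# Example with synthetic data standing in for observed ship inter-arrival times
# (mean 2710 minutes, the order of magnitude estimated in Table 3).
rng = np.random.default_rng(0)
print(fit_and_test(rng.exponential(scale=2710, size=200)))
```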
Table 3 Results of the statistical analysis on the collected data

Loading process:
- Inter-arrivals of the ships to be loaded (minutes): Exponential, λ = 2710
- Service duration of the anchorage (minutes): Normal, μ = 57.595 and σ² = 18.174
- Service duration of the before operations (minutes): Normal, μ = 99.175 and σ² = 38.678
- Size of groups to be loaded: Geometric, p = 0.0059
- Service duration of the gantries of quay (minutes): Normal, μ = 2.944 and σ² = 1.097
- Service duration of the trucks (minutes): Normal, μ = 8.823 and σ² = 5.359
- Service duration of the after operations (minutes): Normal, μ = 99.175 and σ² = 38.678

Unloading process:
- Inter-arrivals of the ships to be unloaded (minutes): Exponential, λ = 2710
- Service duration of the anchorage (minutes): Normal, μ = 57.595 and σ² = 18.174
- Service duration of the before operations (minutes): Normal, μ = 99.175 and σ² = 38.678
- Size of groups to be unloaded: Geometric, p = 0.007
- Service duration of the gantries of quay (minutes): Normal, μ = 2.947 and σ² = 1.072
- Service duration of the trucks (minutes): Normal, μ = 9.228 and σ² = 4.994
- Service duration of the after operations (minutes): Normal, μ = 99.175 and σ² = 38.678

Storage process:
- Size of groups of delivered containers/day: Uniform, mean = 145
- Size of groups of restored containers/day: Uniform, mean = 140
5.2 Determination of the optimal number of trucks by simulation

In this section the aim is to determine, by simulation, the optimal number of trucks to use during the loading and unloading processes. For that, we propose two approaches.

5.2.1 First approach

We designed a simulator for each model in the Matlab environment. After the validation tests of each simulator, their executions provided the results summarized in Table 4, where:
• The 3rd column represents the mean number of ships in the roads to be loaded (respectively to be unloaded) during one year.
• The 4th column represents the mean number of ships loaded (respectively unloaded) during one year.
• The 5th column represents the mean number of containers loaded (respectively unloaded) during one year.
• The 6th column represents the mean number of blockings of the server "GQ", according to the number of trucks used, during the loading (respectively the unloading) over one year.
• The 7th column represents the mean time of blocking of the server "GQ" during the loading (respectively the unloading) over one year.
• The 8th column represents the mean number of blockings of the servers "trucks" in the loading (respectively the unloading) process over one year.
Table 4 Some performances of the processes obtained by the simulation approach

Process    N-trucks  N-ship  D-ship  N-Cts (ETU)  N-GQ     W-GQ    N-trucks  W-trucks  Proportions
Loading    1         1.20    191.09  49020        27495    3349.5  2927      72.6      0.0909
           2         0.87    193.88  49397        18804    1118.1  11878     342.1     0.3642
           3         0.88    193.74  49681        11775    449.2   18797     588.3     0.5757
           4         0.79    196.08  50204        7150.8   196.7   22876     761.4     0.7094
           5         0.78    195.63  50298        4749.0   103.8   25723     899.6     0.7818
           6         0.85    196.59  50753        3442.5   66.3    27543     1001.7    0.8190
           7         0.83    196.00  50090        2577.0   47.5    27453     1030.7    0.8365
           8         0.89    194.59  49577        2079.3   38.1    27699     1068.1    0.8463
           9         0.84    195.64  50233        1725.9   32.2    27428     1080.7    0.8496
           10        0.93    194.34  49717        1548.9   29.3    28240     1133.8    0.8517
           11        0.84    193.90  49260        1316.9   25.6    26870     1095.6    0.8509
           12        0.84    192.93  49187        1268.9   25      28594     1181.4    0.8512
Unloading  1         0.90    192.13  41689        24112    2955.5  2099      50.40     0.0770
           2         0.90    191.57  41523        15930    947.7   9289      259.90    0.3418
           3         0.73    193.00  42321        9035     342.8   13713     406.95    0.4952
           4         0.80    197.20  42093        3731.5   102.4   15794     435.37    0.5734
           5         0.87    196.70  41971        1025.2   21.6    16544     516.73    0.6023
           6         0.73    191.33  41087        166.83   2.9     16386     548.40    0.6095
           7         0.90    199.60  42798        19.80    0.3     17103     537.75    0.6103
           8         0.77    195.13  42548        1.50     0       17002     545.60    0.6106
           9         0.77    192.97  40966        0.10     0       16378     549.78    0.6109
           10        0.83    194.90  43800        0        0       17525     551.85    0.6115
           11        0.70    194.40  43401        0        0       17359     546.22    0.6111
           12        0.73    192.97  41572        0        0       16607     539.27    0.6104
• The 9th column represents the total mean time of blocking of the servers "trucks" in the loading (respectively the unloading) process over one year.
• The 10th column represents the probability of blocking of the servers "trucks" in the loading (respectively the unloading) process. For example, the value 0.5757 in the third row is the blocking probability in the case of three servers "trucks" in the loading process, which is the sum of the probabilities of blocking of one server, two servers and three servers. This probability is distributed as follows: P(X = 0) = 0.4276, P(X = 1) = 0.4117, P(X = 2) = 0.1492, P(X = 3) = 0.0148, where X is the number of servers "trucks" blocked and P(X = 1) + P(X = 2) + P(X = 3) = 0.5757. This distribution is illustrated in Fig. 4 (left).
Interpretation and discussion of the results

• Loading process
• From the obtained results, we note that the variation of the mean number of loaded containers (in ETU) during one year is practically independent of the number of trucks used. Indeed, the mean number of loaded containers varies only between
Fig. 4 Probabilities of blocking of the servers ”trucks”: case of three trucks (loading process on the left and unloading process on the right)
49019.5401 ETU and 50753.3083 ETU, which is practically the same. This independence can be explained by the fact that the inter-arrival times of the ships to be loaded are very large compared with the time spent by a ship at the quay. Similarly, for the loading process, the mean number of ships in the roads is also independent of the number of trucks used, except in the case of one truck, where the mean number of ships in the roads is somewhat higher (1.2000 ships). According to the dashed curve, however, the mean waiting time of the trucks (blocking of the servers "trucks") increases with the number of servers "trucks" used. So, to minimize the blocking duration of the servers "trucks", we should use as few servers "trucks" as possible.
• This problem can be formulated mathematically as follows:
$$ \min T_1 , \quad \min T_2 \qquad \text{or} \qquad \min (T_1 , T_2) $$
$$ \text{s.t.} \quad \text{capacities of the company, available equipment, processing time,} \qquad (1) $$
where T1 and T2 represent respectively the mean waiting time of the server "GQ" and the mean waiting time of the servers "trucks". We note that we are facing a stochastic multi-objective optimization problem, more precisely a stochastic bi-objective problem. So, to determine the optimal number of trucks to use in the loading process, it is necessary to find a compromise between the blocking time of the server "GQ" and the blocking time of the servers "trucks". It is thus necessary to find the number of servers "trucks" which minimizes the blocking time of the server "GQ" and the blocking time of the servers "trucks" at the same time. Therefore, we transform problem (1) into the following weighted sum scalarization (see Ehrgott, 2005; Bot et al, 2009):
$$ \min \; \alpha T_1 + (1-\alpha) T_2 $$
$$ \text{s.t.} \quad \text{capacities of the company, available equipment, processing time,} \qquad (2) $$
where α is the weight reflecting the preference given to the waiting time of the server "GQ" and 1 − α the weight reflecting the preference given to the waiting time of the servers "trucks". In this work we assume that there is no preference between the waiting of the servers "trucks" and the waiting of the server "GQ", i.e. α = 0.5. In this case, we determine the minimum of the sum of the blocking times of the server "GQ" and of the servers "trucks", which is represented by the solid curve in Figure 5 (left). According to this curve, the optimal number of trucks to be used (which minimizes the sum of the blocking times for the loading process) is four (04) trucks.
• Unloading process: In the same manner and with the same reasoning as for the loading process, we can determine the optimal number of trucks to be used in the unloading process. In this case, the result is also four (04) trucks.
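The selection step that follows the simulations is easy to automate. The sketch below is illustrative only: the waiting-time vectors are the simulated W-GQ and W-trucks columns of Table 4 for the loading process, and the weight alpha = 0.5 corresponds to the "no preference" assumption made in the text.

```python
# Weighted-sum scalarization of the bi-objective problem min (T1, T2):
# pick the number of trucks minimizing alpha*T1 + (1 - alpha)*T2.

w_gq     = [3349.5, 1118.1, 449.2, 196.7, 103.8, 66.3,
            47.5, 38.1, 32.2, 29.3, 25.6, 25.0]              # T1, loading process
w_trucks = [72.6, 342.1, 588.3, 761.4, 899.6, 1001.7,
            1030.7, 1068.1, 1080.7, 1133.8, 1095.6, 1181.4]  # T2, loading process

def optimal_trucks(t1, t2, alpha=0.5):
    scores = [alpha * a + (1 - alpha) * b for a, b in zip(t1, t2)]
    best = min(range(len(scores)), key=scores.__getitem__)
    return best + 1, scores[best]       # trucks are numbered from 1

print(optimal_trucks(w_gq, w_trucks))   # -> (4, 479.05): four trucks for alpha = 0.5
```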
5.2.2 Second approach

In this part we propose another approach to determine the optimal number of trucks to be used. This method consists of determining the number of trucks that minimizes the mean time of loading or unloading of a ship (beginning of operations - end of operations). The results obtained for different numbers of servers "trucks" are summarized in Table 5.

Table 5 Variation of the mean time (hours) of the loading/unloading service, according to the number of servers "trucks"
Number of trucks   1        2        3        4       5       6
Loading service    26.9163  13.9531  10.3773  8.2871  8.2671  8.4758
Unloading service  23.8718  13.4766  10.2773  9.1891  9.0743  9.3003
Number of trucks   7        8        9        10      11      12
Loading service    8.3842   8.4986   8.4049   8.4796  8.4955  8.3235
Unloading service  9.1870   8.9528   8.6114   8.9666  9.0482  8.6716
Interpretation and discussion of results

Loading process: Figure 5 (right) and the second row of Table 5 show that the mean time of the loading service decreases with the number of servers "trucks" from one (01) to four (04) servers "trucks"; from four (04) trucks on, the mean time of the loading service remains almost constant, which means that beyond four (04) servers
"trucks", the mean time of the loading service depends only on the capacity of the server "GQ". We therefore conclude that there is no benefit in using more than four (04) servers "trucks" in the loading process, so the optimal number of trucks in this case is four (04).
Unloading process: For the same reasons as in the loading process, the optimal number for the unloading process is four (04) trucks.
Fig. 5 The mean waiting time of the servers ”trucks” and ”GQ” on one year (left) and the variation of the mean time of loading service (right) according to the number of servers ”trucks”
6 Performance study of the storage process

After the validation tests of the empty-stock and full-stock simulators, their executions provided the results summarized in Table 6, where the 2nd, 3rd and 4th columns represent respectively the number of servers "trucks" used, the total mean number of containers (ETU) in the full stock and in the empty stock over one year, and their saturation rate expressed as a percentage.

Table 6 Storage performances
Parameters    Number of trucks   ETU         Saturation (%)
Full stock    4                  4570.9995   55.0722
              5                  4440.8319   53.5044
Empty stock   4                  1157.5036   128.6115
              5                  1192.9385   132.5487
Interpretation and discussion of the results The simulation results show that:
• With the current parameters, the average number of containers in the full park over a period of one year is 4570.9995 ETU in the case of four (04) ”trucks”, and 3610.8734 ETU in the case of five (05) ”trucks” and the mean number of containers in the empty park over a period of one year is 1157.5036 ETU in the case of four (04) ”trucks” and 1192.9385 ETU in the case of five (05) servers ”trucks”.
7 Conclusion

The objective of this work is to determine an optimal management of the equipment of the container terminal of the BMT Company, more specifically the optimal number of trucks to use in the loading and unloading processes. For this, we developed a mathematical model for each process (the "loading", the "unloading", the "full stock" and the "empty stock" process). Indeed, in order to analyze the different processes and determine the optimal number of trucks to use, each system (process) is modelled by an open network. We also established a simulation model of each system, where the goal of each simulator is to reproduce the functioning of the park with containers. The study shows that:
• For the loading process: For an arrival rate of 0.5317 ships/day, a mean truck service time of 8.8234 minutes and a mean GQ service time of 2.9440 minutes, the optimal number of trucks is four (04). This means that the BMT Company can recover one truck from each GQ, i.e. two (02) trucks in total.
• For the unloading process: For an arrival rate of 0.5317 ships/day, a mean truck service time of 9.2281 minutes and a mean GQ service time of 2.9473 minutes, the optimal number of trucks is four (04). This means that the BMT Company can recover one truck from each GQ, i.e. two (02) trucks in total.
• Regarding the stock: The study shows that, with the current settings, at the end of the year 2009 the terminal will undergo a saturation of 55% for the full stock and 130% for the empty stock, hence the need to expand the capacity of the empty stock.
It would be interesting to complete this work by addressing the following items:
• An analytical resolution of the problem.
• Determination of an optimal management of the other equipment of the BMT Company.
• Taking into account the variation of the parameters of the system.
References

Aïssani D, Adjabi S, Cherfaoui M, Benkhellat T, Medjkoune N (2009) Evaluation des performances du terminal à conteneurs B.M.T. (Port de Béjaia).
In: Actes du Séminaire International "Terminal Development and Management", PORTEK Company and B.M.T. (Bejaia Mediterranean Terminal)
Aïssani D, Adjabi S, Cherfaoui M, Benkhellat T, Medjkoune N (2009) Forecast of the traffic and performance evaluation of the BMT container terminal (Bejaia's harbor). In: Reiner G (ed) Rapid Modelling for Increasing Competitiveness: Tools and Mindset, Springer Verlag Ed. (Germany), pp 53–64
Bot R, Grad S, Wanka G (2009) Duality in Vector Optimization. Springer Verlag
David M, Michaud J (1983) La prévision: approche empirique d'une méthode statistique. Masson
Ehrgott M (2005) Multicriteria Optimization, 2nd edn. Springer Verlag
Sait R, Zerrougui N, Adjabi S, Aïssani D (2007) Evaluation des performances du parc à conteneurs de l'Entreprise Portuaire de Béjaia. In: Proceedings of an International Conference Sada'07 (Applied Statistics for Development in Africa), Cotounou (Benin)
Production Inventory Models for a Multi-product Batch Production System Ananth Krishnamurthy and Divya Seethapathy
Abstract This paper presents an analytical model for capacitated batch production systems with the objective of investigating the effect of system utilization, production batch size, and production lead time on performance measures such as backorders and on-hand inventory levels. First, an exact analysis of a single product system and a two product system is presented using the Markov chain approach. To overcome the computational complexity resulting from the explosion of the state space, an alternative approach based on decomposition is proposed to analyze multi-product systems with more than two products. Numerical results indicate that the decomposition approach provides fairly accurate estimates of the performance measures. The analytical model also provides some useful insights on the effect of the production batch size and utilization on system performance.

Key words: Batch production systems, Markov chain, multi-product, inventory control
1 Background and Literature Review Batch production systems find many applications in manufacturing. Typical applications include heat treat operations, annealing furnaces, and various forms of plating and coating operations. The production batch size used in these operations is typically governed by the physical constraints of the operation, namely size of furnace, capacity of tanks holding the liquid required for the coating operations. However,
Ananth Krishnamurthy (B) and Divya Seethapathy Department of Industrial and Systems Engineering, University of Wisconsin, Madison, WI 53706, USA, e-mail:
[email protected] Divya Seethapathy e-mail:
[email protected]
these production batch sizes in many applications are much larger than the typical demands for the products from the customer. Consequently, managing production inventory decisions under these constrained environments poses an interesting analytical problem. Figure 1 describes a typical batch production environment operating in a make to stock environment. Fig. 1a provides a schematic description while Fig 1b provides a queuing model representation of the same system. External demands follow a random process and customer demands are satisfied from available finished goods inventory. Customer demands that cannot be satisfied immediately from stock are backordered. Assume that the system starts with an initial finished goods inventory level corresponding to Q + r, where Q represents the production batch size and r represents the re-order point of the system. As external demands arrive (one at a time), they are satisfied from available stock. When the level of finished goods inventory reaches the re-order point, an order for Q units is placed with the supplier. As additional demands arrive, inventory depletes further and might trigger additional orders, each of size Q units. At the manufacturing station (for instance, a heat treat furnace), batches of Q units undergo processing simultaneously for a random service time. Upon service completion, a full batch of Q units is delivered to finished goods inventory. Customer demands that are not satisfied from existing inventory are backordered and satisfied first, when inventory becomes available. In the queuing model described in Fig. 1b, the production facility is assumed to have a lead time, L. Finished goods inventory and backordered demands are represented by buffers FG and BO in a synchronization station and a batch queue, BQ models the batch formation process prior to release of a production order of size, Q.
Fig. 1 Schematic (a) and queuing model (b) of a batch production system
In terms of associated literature, the production inventory system being analyzed has strong relationship with the classical Q, r inventory model under continuous review. However, the model discussed in this paper corresponds to a capacitated system (see Benjaafar et al (2004); Buzacott and Shanthikumar (1993); George and Tsikis (2003) and the references therein), while a majority of prior studies have
focused on un-capacitated systems (see Hopp and Spearman (2000); Thoneman and Bradley (2002); Nahmias (1993); Zipkin (1995) and the references therein). In the studies involving un-capacitated systems, production lead times at the manufacturing facility have either been assumed to be deterministic or assumed to follow a specific probability distribution. Typically, this distribution has been assumed to be independent of the load on the system. However, the distribution of the production lead time depends on several parameters such as demand and service rates, and more importantly the batch size, Q. This explicit dependency between the production inventory policy parameter, Q, and the production lead time, L, has received limited attention in the literature. The objective of this paper is to build analytical models for multi-product batch production systems that could be used to investigate how performance measures such as average backorders and average finished goods inventory levels vary with the parameters of the production inventory control policy, namely Q and r. These models would also help to study the effect of product variety on system performance. The analysis approach and outline followed in this paper is as follows. First, in Section 2, an exact analysis of a single product batch production system is carried out using a Markov chain analysis. Next, the approach is extended to analyze a batch production system with two products in Section 3.1. Although a two-product batch production system can be analyzed exactly using Markov chains, the approach is not scalable to systems with a large number of products. Therefore, a decomposition based approach is developed in Section 3.2. The approach uses the notion of "machine availability" to decompose a two-product system into two separate one-product systems. The accuracy of the approach is validated using numerical experiments, and the analytical model is used to carry out studies that provide insights into the behavior of various performance measures. In Section 4, an extension of this approach to general systems with multiple products is presented. Section 5 presents the summary and conclusions of this research.
2 Batch Production System with a Single Product This section presents the analytical model for a capacitated batch production system with a single product. Figure 2 describes the system. In the figure, MFG represents the manufacturing station and FG, BO, and BQ represent the buffers of finished goods inventory, backordered demands, and batch formation queue, respectively. The system is assumed to operate under a continuous review policy. External demands from customers are assumed to arrive for single units according to a Poisson process with parameter, λ . Orders to suppliers are placed for fixed quantities, Q, when the on-hand inventory in the finished goods buffer reaches the re-order point, r. The manufacturing facility, MFG is assumed to be composed of a single server with service times that have an exponential distribution with mean, μ −1 . The three performance measures of interest are on-hand inventory (OH), average backorders (BO), and on-order inventory (OO). On hand inventory represents the
Fig. 2 Analytical model for a capacitated batch production system with a single product
number of finished products in the FG buffer. Backorders represent the number of orders in BO, and on-order inventory represents the units being or waiting to be processed at MFG. Since orders are placed in batches of Q units, for this batch production system the inventory position R satisfies the following equality: Inventory Position, R = On-Hand Inventory (OH) + On Order (OO) − Backorders (BO). Under the assumptions stated above, the dynamics of the system can be modeled precisely as a Markov chain. The state of the system at any time can be represented using the state variable i = iFG − iBO, where iFG denotes the number of finished goods at FG, and iBO denotes the number of backordered demands waiting at BO. The state space of the system, S, is defined as S = {r + Q, r + Q − 1, r + Q − 2, ..., r, ..., 1, 0, −1, −2, ..., −∞}. Note that since orders that cannot be satisfied from stock are backordered, a state i < 0 implies that a stock-out has occurred and all future orders will be backordered until subsequent supplier deliveries increase on-hand inventory to positive levels. Figure 3 describes the Markov chain for this system.
Fig. 3 Markov chain for a capacitated batch production system with a single product
Let $\pi_i$ represent the steady state probability of the system being in state i. Then the Chapman-Kolmogorov equations for steady state can be written as follows:
$$ \lambda \pi_{Q+r} = \mu \pi_r \quad \text{for } i = r+Q , $$
$$ \lambda \pi_i = \mu \pi_{i-Q} + \lambda \pi_{i+1} \quad \text{for } i = r+1 \text{ to } r+Q-1 , $$
$$ (\mu + \lambda) \pi_i = \mu \pi_{i-Q} + \lambda \pi_{i+1} \quad \text{for } i = r \text{ to } -\infty . $$
These equations are solved together with the normalization condition
$$ \pi e = 1 , \quad \text{i.e. } \sum_i \pi_i = 1 . $$
The solution of the state equations gives the values of the steady state probabilities $\pi_i$. The performance measures are subsequently expressed in terms of $\pi_i$ as shown below:
$$ \text{Expected On-hand Inventory, } E(I) = \sum_{i=1}^{\infty} i \, \pi_i , $$
$$ \text{Expected Backorders, } E(B) = \sum_{i=-\infty}^{-1} |i| \, \pi_i , $$
$$ \text{Expected On-orders, } E(O) = \sum_{j=1}^{\infty} jQ \sum_{i=r-(j+1)Q-1}^{r-jQ} \pi_i . $$
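A direct numerical check of these formulas is possible by truncating the state space at a sufficiently negative level and solving the balance equations as a linear system. The routine below is our own sketch (the paper itself works with closed-form expressions and Matlab); the truncation level B for the backorder tail is an assumption and should be taken large enough that the probability mass near the boundary is negligible.

```python
import numpy as np

def single_product_performance(lam, mu, Q, r, B=200):
    """Solve the truncated Markov chain for the single-product system.

    States are i = r+Q, r+Q-1, ..., -B (on-hand minus backorders).
    A batch is in service exactly when i <= r, as in the balance equations.
    """
    states = list(range(r + Q, -B - 1, -1))
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    A = np.zeros((n, n))                      # CTMC generator matrix

    for s in states:
        k = idx[s]
        if s - 1 >= -B:                       # demand arrival: i -> i-1
            A[k, idx[s - 1]] += lam
            A[k, k] -= lam
        if s <= r:                            # batch completion: i -> i+Q
            A[k, idx[s + Q]] += mu
            A[k, k] -= mu

    # stationary distribution: pi A = 0 with sum(pi) = 1
    M = np.vstack([A.T, np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)

    E_I = sum(s * pi[idx[s]] for s in states if s > 0)
    E_B = sum(-s * pi[idx[s]] for s in states if s < 0)
    return E_I, E_B

# Example: Q = 1 reduces to the M/M/1 make-to-stock case with base stock r+1.
print(single_product_performance(lam=0.6, mu=1.0, Q=1, r=2))
```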
Closed form expressions for the steady state probabilities and performance measures can be derived in terms of $r_0$, the unique root of the equation (in operator notation)
$$ \mu D^{Q+1} - (\mu + \lambda) D + \lambda = 0 , $$
where $r_0$ is such that
$$ \frac{\lambda}{\mu} = \frac{r_0 (1 - r_0^Q)}{1 - r_0} . $$
Then the key performance measures are given by the following equations, namely:
$$ E(I) = \frac{Q+1}{2} + r + \rho \frac{r_0^{r+1}}{1 - r_0} - \frac{r_0}{1 - r_0} , $$
$$ E(B) = \rho \frac{r_0^{r+1}}{1 - r_0} , $$
$$ E(O) = \frac{r_0}{1 - r_0} . $$
Note that in the above equations the utilization of the manufacturing station, $\rho = \lambda/(\mu Q)$, is a function of the batch size Q. In order to verify the analysis, numerical experiments are conducted by varying the input parameters and examining the values of the performance measures under different settings. In all experiments the service rate $\mu$ is set equal to 1. Further, the batch size Q is varied so that it takes the values 1, 5 and 10, respectively. The demand rate $\lambda$ is varied so that in
all cases, the server utilization ρ takes values 60%, 70%, 80%, and 90%. Sample results from these experiments are displayed in Fig. 4.
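Under these assumptions, the closed-form expressions are straightforward to evaluate numerically. The sketch below is illustrative: it finds r0 by solving lambda/mu = r0 (1 - r0^Q)/(1 - r0) with a bracketing root finder on (0, 1) and then evaluates E(I), E(B) and E(O); the bracket endpoints are a numerical convenience, not part of the paper.

```python
from scipy.optimize import brentq

def batch_qr_performance(lam, mu, Q, r):
    """Closed-form performance measures for the single-product batch system."""
    rho = lam / (mu * Q)                    # utilization of the station
    # r0 solves lambda/mu = r0(1 - r0^Q)/(1 - r0), i.e. r0 + ... + r0^Q = lambda/mu
    f = lambda x: sum(x ** k for k in range(1, Q + 1)) - lam / mu
    r0 = brentq(f, 1e-12, 1 - 1e-9)

    E_B = rho * r0 ** (r + 1) / (1 - r0)
    E_O = r0 / (1 - r0)
    E_I = (Q + 1) / 2 + r + E_B - E_O
    return E_I, E_B, E_O

# Example: mu = 1, Q = 5, r = 10 and a demand rate giving 80% utilization.
print(batch_qr_performance(lam=0.8 * 5, mu=1.0, Q=5, r=10))
```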
Fig. 4 Performance tradeoffs for a batch production system with a single product
From the figure it can be observed that the expected number of backorders increase with utilization, i.e. E(B) increases with utilization, ρ . Further, the expected number of backorders increases with the batch size, Q, i.e. E(B) increases with batch size, Q. The graphs also indicate that the expected value of inventory that is on-order at MFG increases with utilization and the batch size, Q. This increase correlates with the increase in lead time for these parameters, suggesting that performance measures in a capacitated system show obvious relationships with the capacity utilization of the manufacturing station. The next section describes the analysis of a system with two products.
3 Batch Production System with Two Products This section presents the analysis of a batch production system with two products. An exact analysis is described in Section 3.1 while a decomposition based approximation is discussed in Section 3.2.
3.1 Exact Analysis

Figure 5 describes the analytical model for a capacitated batch production system with N = 2 products. As observed in the figure, the manufacturing station, represented by MFG, is shared by the two products, while FGn, BOn and BQn (n = 1, 2) represent the buffers of finished goods inventory, backordered demands, and batch formation queue for each of the two products, respectively. The system is assumed to operate under a continuous review policy. External demands for each product n arrive for single units at a time according to a Poisson process with parameter λn. Orders for product n are placed for fixed quantities Qn when the on-hand inventory in the corresponding finished goods buffer reaches the re-order point rn. The manufacturing facility, MFG, is assumed to be composed of a single server with service times that have an exponential distribution with mean μ−1, independent of the product being processed at the station. It is assumed that the station does not process mixed batches of different products in the furnace. However, the station does use a priority rule when choosing batches for production. When there are multiple batches waiting in queue for processing, the server chooses the next batch to be processed based on the current inventory levels of the different products. In particular, if at the instant of service completion BO1 > BO2, then a batch of product 1 is released to potentially raise the inventory level of product 1. If there are no backorders for either product (i.e. when BO1 = BO2 = 0) and if FG1 < FG2, then a batch of product 1 is released to potentially raise the inventory level of product 1. Finally, if the finished goods inventory levels are also equal (i.e. when FG1 = FG2), then a batch of product 1 is released if λ1 > λ2. Otherwise, a batch of product 2 is released.
Fig. 5 Analytical model for a capacitated batch production system with two products
Based on these assumptions, the dynamics of the system can be modeled precisely as a Markov chain. However, unlike the single product system, in order to represent the state of the system at any time, three parameters (i, j, k) are required; where i = iFG1 − iBO1 , j = iFG2 − iBO2 , and k = 0, 1, or 2 depending on whether the server is idle or processing product 1 or 2 respectively. The Markov chain can be divided into 4 regions and Fig. 6 shows the transitions in the different regions.
Fig. 6 Regions in the Markov chain for batch production system with two products
As with the case of the single product, the performance of a system with two products is analyzed by solving the balance equations for the underlying Markov chain. The solution for the balance equations yields the steady state probabilities for the system with two products. Expressions for performance measures such as average on-hand inventory E(In ), average backorders, E(Bn ), and average on-orders E(On ); for each product n = 1, 2 are written in terms of these steady state probabilities. Numerical results obtained from these expressions are discussed in the subsequent sections. However, the approach of determining the performance measures through the exact solution of the underlying Markov chain poses computational
difficulties for systems with a large number of products, due to the explosion of the size of the underlying state space. Therefore, an alternative approach based on decomposition is proposed. This is discussed next.
3.2 Decomposition Based Approximation

The main motivation behind the approximation is the observation that when the manufacturing station is processing product 1, the station is "unavailable" to process product 2. Similarly, when the manufacturing station is processing product 2, the station is "unavailable" to process product 1. Let the availability of the manufacturing station to serve product n be defined as An. Then the effect of unavailability is modeled by defining an effective service time $\tilde{\tau}_n$ for product n at the manufacturing station. For the two product system, these effective service times are written as:
$$ \text{For product 1:} \quad \tilde{\tau}_1 = \frac{1}{\tilde{\mu}_1} = \frac{1/\mu_1}{A_1} , \quad \text{where } A_1 = 1 - \rho_2 , $$
$$ \text{For product 2:} \quad \tilde{\tau}_2 = \frac{1}{\tilde{\mu}_2} = \frac{1/\mu_2}{A_2} , \quad \text{where } A_2 = 1 - \rho_1 . $$
Then the performance of the two product system can be analyzed by solving two separate single product systems with these effective service times.
Fig. 7 Decomposition based approach for analysis of two product systems
Each single product system is then analyzed using the approach described in Section 2 to obtain the value of $r_{0n}$ for each product n = 1, 2 from the following equation:
$$ \frac{\lambda_n}{\tilde{\mu}_n} = \frac{r_{0n} (1 - r_{0n}^{Q})}{1 - r_{0n}} \quad \text{for } n = 1, 2 . $$
Then the performance measures $E(B_n)$, $E(I_n)$ and $E(O_n)$ for n = 1, 2 are obtained from $r_{0n}$, n = 1, 2, using the following equations:
$$ E(I_n) = \frac{Q+1}{2} + r_n + \rho_n \frac{r_{0n}^{r_n+1}}{1 - r_{0n}} - \frac{r_{0n}}{1 - r_{0n}} \quad \text{for } n = 1, 2 , $$
$$ E(B_n) = \rho_n \frac{r_{0n}^{r_n+1}}{1 - r_{0n}} \quad \text{for } n = 1, 2 , $$
$$ E(O_n) = \frac{r_{0n}}{1 - r_{0n}} \quad \text{for } n = 1, 2 . $$
In order to verify the analysis and the accuracy of the approximation approach, numerical experiments were conducted by varying the input parameters and examining the values of the performance measures under different settings. In all experiments the service rate $\mu$ was set equal to 1. Further, the batch size Q was varied so that it took the values 1, 5 and 10, and the demand rate $\lambda_n$ was varied so that in all cases the server utilization $\rho$ took the values 60%, 70%, 80% and 90%. The results of the analytical model were obtained using a Matlab program. These results were further validated against a detailed simulation built with the Arena simulation software. The simulation model was run for 100 000 h and the number of replications chosen for each scenario was 12. A sample of the results for expected backorders and expected on-hand inventory for product 1 is displayed in Fig. 8. From Fig. 8 it can be observed that the decomposition based approximation approach provides reasonably accurate estimates of the performance measures. This implies that fairly accurate estimates of performance measures can be obtained through the decomposition approach. Next, the trends related to the performance measures are discussed. It is observed that, as with the single product system, the expected number of backorders increases with the utilization of the manufacturing station (i.e. E(B1) increases with ρ). Further, the expected number of backorders increases with the batch size, i.e. E(B1) increases with batch size Q. Correspondingly, it is also noted that the expected on-hand inventory decreases with increasing utilization of the manufacturing station. Although larger batches decrease the utilization of the manufacturing station, larger batches also reduce the frequency of orders placed to the supplier, which in turn causes an increase in the backordered demands. These results suggest that the performance measures of a capacitated batch production system are significantly influenced by the capacity utilization of the manufacturing station and its effect on the replenishment lead time for finished goods inventory.
Fig. 8 Performance tradeoffs for a batch production system with two products
The next section describes the extension of the approximation approach to analyze a system with N > 2 products.
4 Batch Production Systems with Multiple Products

This section provides a brief description of the extension of the approximation approach to a system with N > 2 products. The basic idea is to decompose the N-product system into N single product systems, each of which models the performance of a single product in the N-product system (see Fig. 9). As with the two product system, the main idea is that when the manufacturing station is processing product n, the station is "unavailable" to process all the other products. If the availability of the manufacturing station to serve product n is defined as An, then the effect of this unavailability can be incorporated into each single product model by defining the effective service time $\tilde{\tau}_n$ for product n at the manufacturing station. For the N-product system, these effective service times are written as follows:
$$ \tilde{\tau}_n = \frac{1}{\tilde{\mu}_n} = \frac{1/\mu_n}{A_n} , \quad \text{where } A_n = 1 - \sum_{i \neq n} \rho_i \quad \text{for } n = 1, \ldots, N . $$
Fig. 9 Decomposition based approach for analysis of multi-product systems
Having defined the effective service times, each single product system can be analyzed using the approach described in Section 2 to obtain the value of $r_{0n}$ for each product n = 1, ..., N from the following equation:
$$ \frac{\lambda_n}{\tilde{\mu}_n} = \frac{r_{0n} (1 - r_{0n}^{Q})}{1 - r_{0n}} \quad \text{for } n = 1, \ldots, N . $$
Finally, the performance measures $E(B_n)$, $E(I_n)$ and $E(O_n)$ for n = 1, ..., N can be obtained from $r_{0n}$ using the methods described in Section 3.2.
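The decomposition reduces to a few lines of numerical code once the closed-form single-product expressions are available. The sketch below is illustrative only: it assumes a common batch size Q and a re-order point per product, as in the numerical experiments, reuses a root finder for r0n, and uses the nominal per-product utilization in E(B_n) and E(I_n) (the text is not explicit about whether the effective rate should be used instead); function and variable names are ours.

```python
from scipy.optimize import brentq

def decomposed_performance(lams, mus, Q, rs):
    """Decomposition approximation for an N-product batch production system.

    lams, mus : demand and service rates per product
    Q         : common batch size
    rs        : re-order points per product
    """
    rhos = [l / (m * Q) for l, m in zip(lams, mus)]
    results = []
    for n, (lam, mu, r, rho) in enumerate(zip(lams, mus, rs, rhos)):
        A_n = 1.0 - sum(rhos[i] for i in range(len(rhos)) if i != n)
        if A_n <= rho:                  # effective utilization would reach 1
            raise ValueError("station overloaded under the decomposition")
        mu_eff = mu * A_n               # effective service rate 1 / tilde(tau)_n

        # r0n solves lambda_n / mu_eff = r0 (1 - r0^Q) / (1 - r0)
        f = lambda x: sum(x ** k for k in range(1, Q + 1)) - lam / mu_eff
        r0 = brentq(f, 1e-12, 1 - 1e-9)

        E_B = rho * r0 ** (r + 1) / (1 - r0)
        E_O = r0 / (1 - r0)
        E_I = (Q + 1) / 2 + r + E_B - E_O
        results.append({'E(I)': E_I, 'E(B)': E_B, 'E(O)': E_O})
    return results

# Two identical products, Q = 5, r = 10, each loading the station to 40%.
print(decomposed_performance(lams=[2.0, 2.0], mus=[1.0, 1.0], Q=5, rs=[10, 10]))
```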
5 Summary and Conclusions This paper presents an analytical model for capacitated batch production systems with the objective of investigating the effect of system utilization, production batch size, and production lead time on performance measures such as backorders and onhand inventory levels. First, an exact analysis of a single product system is presented using the Markov chain approach. Next, the approach is extended to analyze systems with two products. Due to the computation complexity resulting from the explosion of state space, an alternative approach based on decomposition is proposed to analyze the two product systems using the notion of “server unavailability”. Numerical results indicate that the decomposition approach provides fairly accurate estimates
of the performance measures. Finally the paper presents a framework that demonstrates how this approach could be extended to analyze systems with N > 2 products. The analytical model provides some useful insights on the effect of the production batch size and utilization on system performance. It is observed that expected number of backorders, E(Bn ) and expected number of orders on order, E(On ) increase with batch size, Q, and utilization, ρ . In contrast, expected on hand inventories, E(In ) decrease with increase in batch size, Q, and utilization, ρ . The ongoing work includes extensive numerical validation of the approximation approach for various N-product systems, and estimation of optimal production parameters that minimize overall system costs.
References

Benjaafar S, Kim JS, Vishwanadham N (2004) On the Effect of Product Variety in Production Inventory Systems. Annals of Operations Research 126:71–101
Buzacott J, Shanthikumar J (1993) Stochastic Models of Manufacturing Systems. Prentice Hall
George, Tsikis I (2003) Unified modelling framework of multistage production-inventory control policies with lot sizing and advance demand information. In: Shanthikumar J, Yao D, Zijm W (eds) Stochastic Modeling and Optimization of Manufacturing Systems and Supply Chains, Chapter 11, Kluwer Academic Publishers, pp 271–297
Hopp W, Spearman M (2000) Factory Physics: Foundations of Factory Management. Irwin/McGraw Hill
Nahmias S (1993) Production and Operations Analysis, 4th edn. McGraw Hill Irwin, Chicago, IL
Thoneman U, Bradley J (2002) The Effect of Product Variety on Supply Chain Performance. European Journal of Operational Research 143(3):548–569
Zipkin P (1995) Performance Analysis of Multi-item Production Inventory System under Alternative Policies. Management Science 41:690–703
Dependency Between Performance of Production Processes and Variability – an Analysis Based on Empirical Data Martin Poiger, Gerald Reiner and Werner Jammernegg
Abstract It is commonly accepted that variability is one of the main challenges in designing and managing manufacturing processes. Many process improvement concepts that focus primarily on communication and information exchange, flow time reduction, etc., finally influence variability. In particular, they reduce variability or mitigate the operational effects of variability. Vendor managed inventory or collaborative planning, forecasting, and replenishment are just two examples of such concepts. The effect on selected performance measures is mostly shown by idealized quantitative models, but there are only few results from real-world processes. In our study we want to illustrate and quantify the impact of variability on the performance of production processes by means of two real manufacturing processes. Case process one is an assembly process of frequency inverters and case process two is the assembly process of sliding glass top systems. For the inverter assembly process we want to show the operational impact of reduced demand variability (reduced forecast error), achieved by implementing Vendor managed inventory as well as collaborative planning. In the glass top assembly process internal variability is addressed by assessing the impact of the production lot size. Both processes are mainly evaluated by using WIP and flow time as key performance measures. The analysis is conducted with rapid modeling software based on open queuing networks. Our results show that a reasonable decrease in inventory and flow time can be achieved without any decline of customer service. Martin Poiger (B) University of Applied Sciences BFI Vienna, Wohlmutstrasse 22, A-1020 Wien, Austria, e-mail:
[email protected] Gerald Reiner Enterprise Institute, Faculty of Economics, University of Neuchâtel, Rue de la Maladière 23, CH-2000 Neuchâtel, Switzerland, e-mail:
[email protected] Werner Jammernegg Institute for Production Management, Vienna University of Economics and Business Administration, Nordbergstrasse 15, A-1090 Wien, Austria, e-mail:
[email protected]
1 Introduction

It is commonly accepted that variability is one of the main challenges in designing and managing manufacturing processes. Increasing variability always deteriorates process performance (Hopp and Spearman, 2006). Dealing with variability, i.e., removing variability from one's processes, significantly differentiates high-performing plants from low-performing plants (Mapes et al, 2000). Klassen and Menor (2007) provide a useful classification of variability using the dimensions source and form, where source is divided into internal and external variability, and form into random and predictable variability. Examples of internal random variability are quality defects, equipment breakdowns, or worker absenteeism, and of internal predictable variability preventive maintenance, setup time, or product mix. Examples of external random variability are the arrival of individual customers, transit time for local delivery, or the quality of incoming supplies, and of external predictable variability daily or seasonal cycles of demand, technical support following a new product launch, or supplier quality improvements based on the learning curve. For each of these types there exist many strategies to decrease variability or mitigate its effects. Concerning internal variability, the Toyota Production System is probably the most famous concept for removing variability from processes. Regarding external variability, supply chain management with its collaboration concepts has to be mentioned. Vendor managed inventory (VMI) or collaborative planning, forecasting and replenishment (CPFR) are just two of many strategies which primarily focus on communication and information exchange. Generally, there are two major streams in operations management research. On the one hand, there are plenty of surveys based on the perceptions of managers or decision makers (Rungtusanatham et al, 2003). Such studies usually do not address process physics based on real process data such as cycle time or on-hand inventory. On the other hand, there are analytical models dealing with idealized problems, which are abstracted far from reality in order to make them tractable for mathematical analysis (Bertrand and Fransoo, 2002). Due to this abstraction it is often difficult to transfer solutions from such models to real-life problems. Therefore, Bertrand and Fransoo (2002) suggest quantitative model-driven research based on empirical data. This is in line with Silver (2004), who promotes a holistic approach to operations management, concentrating on improving system performance under consideration of uncertainty and continual change, instead of optimization under existing deterministic conditions. The research methodology suggested by Bertrand and Fransoo can be regarded as empirical normative research, conducting all four steps of the so-called Mitroff-Cycle (Mitroff et al, 1974), which consists of conceptualization, modeling, model solving and implementation. The implementation step in particular is difficult to conduct and observe; therefore, empirical normative research is not found very often in the scientific literature. In our study we analyze two real-world production processes ex ante and ex post. For the ex ante analysis, mainly consisting of modeling and evaluation of process alternatives, we used rapid modeling (Suri et al, 1995) based on open queueing networks. The ex post analysis was conducted after the finalized implementation of the
suggested process improvements and includes the observation of the impact on the process performance. Case process one is an assembly process of frequency inverters and case process two is the assembly process of sliding glass top systems. For the inverter assembly process we show the operational impact of reduced demand variability (reduced forecast error), achieved by implementing vendor managed inventory (VMI) as well as collaborative planning. In the glass top assembly process internal variability is addressed by assessing the impact of the production lot size. Both processes are mainly evaluated by using work-in-process (WIP) and flow time as key performance measures. Our paper is organized as follows. Section 2 provides an overview of the theoretical background including relevant literature. In this section we present the necessary basics of queueing theory, introduce the operations management triangle (Schmidt, 2005), and present the software we use for our analysis. In Section 3 we show the impact of demand variability on process performance by means of the production process of a frequency inverter. In Section 4 the model that shows the impact of batching is presented. Finally, we conclude our paper in Section 5 by summarizing the most important results and deriving some managerial implications.
2 Theoretical Background

2.1 Queuing Theory

For analyzing manufacturing systems queueing theory plays an important role. A queueing system usually consists of an arrival process, a production or service process, and a queue (Hopp and Spearman, 2006). The expected time spent by a job in such a system is called the flow time T, defined as the sum of the mean effective process time te and the expected waiting time tq (1).

T = te + tq.    (1)

The mean effective process time te is the average time required to process one job and includes setups, downtime, etc. It determines the capacity of such a system, as the possible average flow rate re is the reciprocal of te. The waiting time tq depends on te, on the utilization ρ and on the variability of both the process time and the interarrival time. The utilization ρ is the probability that a station is busy, and is calculated by dividing the mean arrival rate ra by the mean flow rate re (2). The mean arrival rate ra is the average number of jobs arriving at the station per time unit (the reciprocal of the mean interarrival time ta).

ρ = ra / re.    (2)

Variability in queueing systems is usually measured by the coefficient of variation. The coefficients of variation of the process time (ce) and of the interarrival time (ca) are defined by (3), in which σe denotes the standard deviation of the effective process time te, and σa the standard deviation of the interarrival time ta.

ce = σe / te,    ca = σa / ta.    (3)

Using the parameters above, the waiting time tq can be calculated by (4).

tq = te · [ρ / (1 − ρ)] · [(ce² + ca²) / 2].    (4)
In general, this is a good approximation for the waiting time in the so-called G/G/1 queueing model, in which process and interarrival times are “generally” distributed, i.e. they are specified by mean and standard deviation without any further information about the distribution. For further explanation we refer the reader to Hopp and Spearman (2006). This model clearly shows that waiting times rise with process time, with average utilization and with variability. For managers it follows that, in the case of high variability, it is not reasonable to maximize average utilization, as flow time will rise sharply and customers may have to wait too long. This goes against standard management intuition, which emphasizes increasing resource utilization (de Treville and van Ackere, 2006). Figure 1 graphically shows the nonlinear relationship between waiting time and average utilization for different coefficients of variation. In the case of very low variability (ca = ce = 0.5) the system can be highly utilized, as flow time remains low even at a utilization of over 0.9. In the case of moderate variability (ca = ce = 1) considerable waiting time already occurs at a utilization of 0.8, and in systems with high variability (ca = ce = 2) even a utilization of 0.6 results in very long waiting times.
Fig. 1 Waiting time depending on utilization
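To make the relationship in (4) concrete, the following short Python sketch (our own illustration, not part of the original study; all parameter values are assumed) evaluates the waiting time for a single station at several utilization levels and coefficients of variation.

```python
# Waiting-time approximation for a G/G/1 station, Eq. (4):
# tq = te * [rho / (1 - rho)] * [(ce^2 + ca^2) / 2]

def waiting_time(te, rho, ce, ca):
    """Approximate queueing time at a single station (G/G/1)."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be below 1 for a stable system")
    return te * (rho / (1.0 - rho)) * ((ce**2 + ca**2) / 2.0)

te = 1.0  # mean effective process time (e.g., hours) -- assumed value
for cv in (0.5, 1.0, 2.0):               # low, moderate, high variability
    for rho in (0.6, 0.8, 0.9, 0.95):
        tq = waiting_time(te, rho, ce=cv, ca=cv)
        print(f"cv={cv:3.1f}  rho={rho:4.2f}  tq={tq:6.2f}  flow time T={te + tq:6.2f}")
```

The output reproduces the pattern of Fig. 1: with cv = 0.5 the flow time stays moderate even at ρ = 0.9, whereas with cv = 2 it is already long at ρ = 0.6.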
2.2 OM-Triangle

Usually, uncertainty in operations is addressed by installing buffers such as excess inventory or excess capacity. This idea is extended by the so-called OM-Triangle, derived from the G/G/1 queueing model, which states that capacity, inventory and information are substitutes in providing customer service (Lovejoy, 1998; Schmidt, 2005). In particular, the OM-Triangle is obtained by fixing three extreme points on the waiting time curve in Figure 1. Figure 2 shows three points called the inventory point, the capacity point and the information point. As inventory is directly proportional to flow time (Little's Law), we can show inventory instead of flow time on the y-axis.
Fig. 2 OM triangle
Companies running their operations at the capacity point operate at low average utilization to be able to respond quickly to volatile demand. This is especially necessary in service industries such as emergency services, where waiting times are critical. Companies with high fixed capacity costs try to run their operations at nearly 100% utilization, and demand variability is buffered by high inventory. This makes sense for industries with durable products, meaning commodities such as paper, steel or petrochemicals. Finally, companies capable of reducing variability in their operations operate at the information point. The assumption behind this proposition is that by having better information it might be possible to significantly decrease variability, which means that the process can be run at higher utilization without increasing flow time (inventory). The most famous example of this point is Toyota. Where to operate on such curves is also the question addressed in the study of Bitran and Morabito (1999). They underline the usefulness of these trade-off curves for the evaluation of strategic options as a function of the resources required to provide customer service. Klassen and Menor (2007) provide empirical evidence for the existence of this trade-off between capacity, inventory and variability by examining archival economic data at the industry level.
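The switch from flow time to inventory on the y-axis rests on Little's Law; a minimal sketch with assumed numbers (not data from the case processes) makes the proportionality explicit.

```python
# Little's Law: WIP = throughput (TH) * flow time (T)
th = 12.0        # throughput in jobs per day -- assumed value
flow_time = 1.5  # average flow time in days -- assumed value
wip = th * flow_time
print(f"Average WIP = {wip:.1f} jobs")  # 18.0 jobs
# Because throughput is fixed by demand in steady state, any change in flow
# time translates proportionally into inventory, so either can label the y-axis.
```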
2.3 Rapid Modeling Software – MPX

Rapid modeling is a technique based on the idea that a process is made up of a network of queues (Enns, 1996). It applies queueing relationships to produce analytical performance results. On the one hand, demand and process variability are incorporated into the model; on the other hand, owing to the queueing models used, the degree of control over the probability distributions is limited. For our analysis we use the rapid modeling software MPX. This software uses the node-decomposition approach, in which a network is broken down into its workstations, each regarded as a G/G/m queue (Suri et al, 1995). Compared to simulation it is easy to use (no programming expertise required), quick, and generates quite good results for processes in steady state. Suri et al (1995) underline the advantage of using open queueing networks by stating that it is better to find approximate solutions to models that are close to reality than exact solutions to models that are only rough approximations of reality (e.g., linear programming). MPX is successfully used in academic teaching (de Treville and van Ackere, 2006) and consulting, mainly for analyzing manufacturing systems. MPX guides the user intuitively through a complete process analysis by requesting inputs for labor, equipment, products, lot sizes, bill of materials, demand, variability, operations and routings. Once the manufacturing process is modeled, performance measures such as WIP and flow time can be computed almost instantaneously. Process alternatives can be evaluated by incorporating “what-if” scenarios into the validated model.
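The node-decomposition idea can be sketched in a few lines. The following simplified illustration (our own code, not a description of MPX's internal implementation; all input values are assumed) chains two single-server stations in series and propagates variability with the linking equation cd² = ρ²·ce² + (1 − ρ²)·ca² given in Hopp and Spearman (2006).

```python
def station(te, ca2, ce2, ra):
    """Return (utilization, waiting time, departure SCV) for one G/G/1 station."""
    rho = ra * te
    tq = te * (rho / (1 - rho)) * ((ce2 + ca2) / 2)   # Eq. (4)
    cd2 = rho**2 * ce2 + (1 - rho**2) * ca2           # linking equation (m = 1)
    return rho, tq, cd2

ra = 0.8           # arrival rate, jobs per hour -- assumed
ca2 = 1.0          # SCV of interarrival times at the first station -- assumed
line = [           # (mean effective process time, SCV of process time)
    (1.0, 0.5),    # station 1 -- assumed data
    (1.1, 1.5),    # station 2 -- assumed data
]

total_flow_time = 0.0
for i, (te, ce2) in enumerate(line, start=1):
    rho, tq, ca2 = station(te, ca2, ce2, ra)   # departures of station i feed station i+1
    total_flow_time += te + tq
    print(f"station {i}: rho={rho:.2f}, tq={tq:.2f} h")
print(f"total flow time through the line: {total_flow_time:.2f} h")
```

Decomposing the network station by station in this way is what makes the analytical evaluation almost instantaneous compared to simulation.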
3 Empirical Analysis of the Impact of Demand Variability

3.1 Demand Variability

By means of the following production processes we want to show the impact of reduced demand variability on process performance. This is highly relevant since, generally speaking, supply chain management tries to reduce harmful variability through information sharing and cooperation. It is commonly accepted that information sharing has a positive impact on supply chain performance. In particular, information sharing is usually operationalized through improved or enriched forecasting methods (reduced forecast error), which ultimately reduce demand variability. Overviews of models dealing with the value of information in supply chains are provided by Chen et al (2000) and Li et al (2005).
Particularly interesting in this context is the bullwhip effect, based on de Kok et al (2005): the bullwhip is the metaphor for the phenomenon that variability increases as one moves up a supply chain.
It is obvious that the bullwhip effect has a disturbing influence on performance up- and downstream in the supply chain process. Chen et al (2000) investigate the dependencies between forecasting and lead times in a simple supply chain. Boute et al (2006) show for a two-echelon supply chain how the bullwhip effect can be dampened. Hosoda and Disney (2006) analyze the variance amplification in a three-echelon supply chain model under the assumption of first-order autoregressive consumer demand. Motivated by this research, Reiner and Fichtinger (2009) investigate the influence of demand modeling on supply chain process performance by comparing different demand forecasting models. As already mentioned, there are not many empirical studies at the process level providing results on the impact of reduced demand variability on process performance. Therefore we want to analyze and quantitatively evaluate this impact by means of a real-world production process modeled in MPX.
3.2 Initial Situation of Frequency Inverter Assembling

The process we use to illustrate the impact of demand variability is the production process of frequency inverters, produced in 9 different sizes: 9, 10, 11A, 11B, 12, 13, 14A, 14B, 15. These sizes are further differentiated by software and labeling, which has no impact on process times. Therefore, we analyze the production at the product size level, and not at the final brand level (overall 48 different final products are produced). The production process, which is mainly an assembly process, consists of the activities (1) assembly, (2) pre-inspection, (3) inspection and (4) finalizing (see Fig. 3). In the current situation the assembling is done on five parallel roller conveyors (RC0 – RC4), each for particular sizes. After the assembling a pre-inspection is conducted. This is done by one person for all conveyors with a mobile inspection device at the very end of each roller conveyor. After pre-inspection the inverter is transferred to one of two inspection stations (IN) by crane. After passing inspection the inverter is finalized at station MPFZ. Seven per cent of the products do not pass inspection and have to be repaired on the roller conveyor. The MPX analysis shows that some of the stations are highly utilized (above 90%), which leads to very long flow times, mainly driven by long waiting times for equipment. The most highly utilized stations are the inspection stations and the roller conveyors RC0 and RC4. Table 1 shows flow time and WIP for the various sizes for the initial situation as well as for the improved process described in Section 3.3.
Fig. 3 Process flow chart of Inverter production

Table 1 WIP and flow time for the various product sizes for the initial situation (IS) and for the improved process (IP)

Size | WIP IS [pieces] | WIP IP [pieces] | Wait for equipment IS [days] | Wait for equipment IP [days] | Wait for labor IS [days] | Wait for labor IP [days] | Process time IS [days] | Process time IP [days] | Flow time IS [days] | Flow time IP [days]
9   | 9.669 | 4.045 | 1.081 | 0.146 | 0.01  | 0.01  | 0.583 | 0.544 | 1.83  | 0.7
10  | 3.836 | 1.602 | 1.096 | 0.152 | 0.01  | 0.01  | 0.586 | 0.547 | 1.854 | 0.71
11A | 3.967 | 2.807 | 0.564 | 0.174 | 0.058 | 0.058 | 0.709 | 0.709 | 1.563 | 0.942
11B | 0.311 | 0.252 | 0.394 | 0.174 | 0.058 | 0.058 | 0.709 | 0.709 | 1.393 | 0.942
12  | 1.551 | 2.524 | 0.365 | 0.148 | 0.057 | 0.057 | 0.743 | 0.743 | 1.37  | 0.948
13  | 5.415 | 4.543 | 0.531 | 0.305 | 0.009 | 0.009 | 0.859 | 0.859 | 1.713 | 1.172
14A | 9.077 | 6.141 | 3.723 | 2.149 | 0.009 | 0.009 | 1.244 | 1.209 | 7.134 | 3.367
14B | 5.599 | 3.85  | 3.723 | 2.149 | 0.009 | 0.009 | 1.396 | 1.367 | 7.286 | 3.525
15  | 0.843 | 0.581 | 3.905 | 2.252 | 0.009 | 0.009 | 1.489 | 1.464 | 7.664 | 3.725
3.3 Process Improvements and Impact of Demand Variability

After the analysis of the current situation, various process alternatives were evaluated to identify an appropriate design of the process. It turned out that two main improvements should be made. First, some activities of the roller conveyors should be transferred to a new station, and second, an additional inspection station should be
installed. Flow times and WIP of the improved process are shown in Table 1. The improved process performs much better in terms of WIP and flow time. Finally, we want to show the impact of different coefficients of variation on process performance. The variation is introduced by using a scaling factor in MPX that multiplies the coefficient of variation of the basic model. Figure 4 shows that an increase as well as a decrease of the variability factor heavily impacts the flow time of some of the products. Especially sizes 14A, 14B, and 15 (size 15 is shown in Fig. 4) are very sensitive to changed variability. This sensitivity is caused by high utilization, which means there is no safety capacity to absorb increased demand volatility.
Fig. 4 Flow time depending on variability factor
For the improved process the variability sensitivity is much lower. Figure 5 shows that flow times are not only generally at a lower level, but also hardly react to increased (scaled) variability.
Fig. 5 Flow time depending on different variability factors for the improved process
These results also show that a comprehensive process evaluation has to take into consideration the “robustness” of the results and not only the “ideal” solution based on average values. In the presented example a variability sensitivity analysis has been applied. Based on this variation of the variability, further model extensions in terms of different risk analyses, e.g., process risks and disruption risks, can easily be implemented.
4 Empirical Analysis of the Impact of Variability Caused by Batching

4.1 Theoretical Impact of Batching

With the following production process we want to show explicitly the impact of internal variability induced by batch production. Producing in batches is necessary if different products are produced on the same work station and a setup is necessary to switch between the products. Batching is an important cause of variability and obviously influences the flow time through a process. In processes in which only complete batches are passed on to the next station, calculating the flow time per part does not make sense. To know when a part is ready for the next station, the flow time TB of a batch has to be calculated, because every part of a batch has to wait until the last part of the batch is finished. The flow time of a batch TB is the sum of the waiting time tq, the setup time ts, the wait-in-batch time tqB and the process time per part t0 (5) (Hopp and Spearman, 2006).

TB = tq + ts + tqB + t0.    (5)

Setup time ts and process time per part t0 are independent of the batch size, but waiting time tq and wait-in-batch time tqB depend on the batch size. According to Section 2, tq mainly depends on utilization and variability. Batching influences utilization through its impact on the average capacity of a station re. Equations (6) and (7) (Hopp and Spearman, 2006) show, for a given setup time ts, that the lower the batch size Ns, the higher the mean effective process time te per part and, consequently, the lower the average capacity re.

te = t0 + ts / Ns,    (6)
re = 1 / te.    (7)
For a given arrival rate ra, a decreased capacity re leads to a higher utilization ρ and thereby to a longer waiting time tq (4). The utilization ρ for a batch process is defined by (8).
ρ = ra · te = ra (t0 + ts / Ns) = (ra / Ns) (Ns t0 + ts).    (8)
Batching also influences tq because it induces process variability. Equations (9) and (10) (Hopp and Spearman, 2006) show the impact of the batch size Ns and the setup time ts on the variance of the mean effective process time σe², and on the squared coefficient of variation ce². Clearly, in the case of high setup times a rising batch size decreases the squared coefficient of variation ce², as fewer setups are executed.
σe² = σ0² + σs² / Ns + [(Ns − 1) / Ns²] ts²,    (9)
ce² = σe² / te².    (10)
Consequently, from a utilization point of view and also from a variability point of view, batch sizes should be as high as possible to minimize the average waiting time tq. Unfortunately, the second component of the average flow time affected by the batch size, the wait-in-batch time tqB, reacts the other way around, as it rises with increasing batch size (11).

tqB = (Ns − 1) t0.    (11)

Therefore it is necessary to determine an “optimal” batch size, which minimizes the average flow time TB. Figure 6 shows the factors described above. Setup time and process time are independent of the batch size and remain constant. Waiting time (for the resource) first decreases sharply with the batch size and then increases slightly (due to the impact of
Fig. 6 Process performance measures depending on batch size
the batch size on the coefficient of variation – see (9)), and the wait-in-batch time increases linearly with the batch size. These factors give the average flow time its particular shape, with first a sharp decrease and afterwards a nearly linear increase. From this it follows that, first, a minimum batch size is necessary to be able to produce the requested demand at all, i.e. to keep utilization below 1. As soon as there is enough capacity, flow time rises nearly proportionally with the batch size. Consequently, it is not easy to determine intuitively an appropriate batch size for a longer production process or a production network consisting of several work stations with various routings. The following case shows how MPX can be used to analyze a real-world production process by varying the batch size and showing the impact on the average flow time.
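The interplay of Eqs. (4)–(11) can be made tangible with a small sweep over batch sizes. The sketch below is our own illustration with assumed parameter values (it is not the MPX model of the case); it simply evaluates the formulas above for a single station and reports the feasibility limit and the flow-time-minimal batch size.

```python
def batch_flow_time(Ns, t0, ts, sigma0, sigma_s, ra, ca2=1.0):
    """Flow time of a batch at one station, following Eqs. (4)-(11)."""
    te = t0 + ts / Ns                          # Eq. (6): effective time per part
    rho = ra * te                              # Eq. (8): utilization
    if rho >= 1:
        return None                            # station overloaded, unstable
    var_e = sigma0**2 + sigma_s**2 / Ns + (Ns - 1) / Ns**2 * ts**2   # Eq. (9)
    ce2 = var_e / te**2                        # Eq. (10)
    tq = te * (rho / (1 - rho)) * ((ce2 + ca2) / 2)                  # Eq. (4)
    tqB = (Ns - 1) * t0                        # Eq. (11): wait-in-batch time
    return tq + ts + tqB + t0                  # Eq. (5)

# assumed data: process time 0.1 h/part, setup 2 h, arrival rate 8 parts/h
params = dict(t0=0.1, ts=2.0, sigma0=0.05, sigma_s=0.5, ra=8.0)
results = {Ns: batch_flow_time(Ns, **params) for Ns in range(1, 201)}
feasible = {Ns: TB for Ns, TB in results.items() if TB is not None}
best = min(feasible, key=feasible.get)
print(f"smallest feasible batch size: {min(feasible)}")
print(f"flow-time-minimal batch size: {best} (TB = {feasible[best]:.1f} h)")
```

The sweep reproduces the shape of Fig. 6: batch sizes just above the feasibility limit give very long flow times, a moderate batch size minimizes the flow time, and beyond that the flow time grows almost linearly.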
4.2 Initial Situation of Glass Top Assembling

The considered process is the assembly process of sliding glass top systems. This process is executed in an Austrian subsidiary of a multinational company dealing primarily with polymer solutions. The glass top is produced for another Austrian company, which manufactures industrial refrigerators. The top is delivered in six different variants (A1, A2, A3, B1, B2, and B3) and in sets of one upper top and one lower top. Both the upper and the lower top consist of a glass pane and a plastic frame. Figure 7 shows the simplified process flow diagram of the assembly procedure. The important point to mention is that, for a particular batch of glass top sets, first all upper tops are assembled and packed into a box. Then, after a setup, all lower tops are assembled and packed into the same box with the upper tops to complete the sets.
Fig. 7 Assembly process of glass tops
The process was again modeled and analyzed with MPX. Table 2 shows the quarterly production, the batch sizes used and the flow times calculated by MPX. The batch sizes were calculated and fixed by the company using the standard EOQ model, but without taking into account the model's restrictions and assumptions. Especially because very high and questionable fixed costs were considered, the batch sizes are very high and, of course, responsible for the long flow times (along with some other factors). During the process analysis many alternatives were evaluated and compared. It turned out that the process could be considerably enhanced by adding a second assembly table enabling parallel assembly of upper and lower tops. In this way, one part of the long waiting time can be eliminated, as upper and lower tops have their own
Table 2 Production quantities, batch sizes and flow time per batch for the six different glass top sets

Product name | Demand per quarter | Batch size | Flow time [h]
A1 | 2921  | 288 | 153.7
A2 | 13696 | 720 | 198.5
A3 | 6051  | 277 | 160.9
B1 | 2232  | 156 | 154.8
B2 | 3303  | 98  | 112.5
B3 | 594   | 24  | 72.5
equipment and do not have to wait for each other. The analysis, of course, yielded some further measures to improve the process, which are not discussed here. Instead, we want to focus on the impact of batching and the batch size on process performance.
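For context, the EOQ logic mentioned above can be sketched as follows (a generic illustration with assumed cost figures, not the company's actual calculation); it shows how overstated fixed costs per batch directly inflate the computed batch size.

```python
from math import sqrt

def eoq(demand_per_period, fixed_cost_per_batch, holding_cost_per_unit):
    """Classical economic order quantity."""
    return sqrt(2 * demand_per_period * fixed_cost_per_batch / holding_cost_per_unit)

demand = 2921          # quarterly demand of variant A1 (Table 2)
holding = 0.5          # holding cost per piece and quarter -- assumed value
for fixed in (2, 10, 50):   # assumed fixed (setup) cost per batch
    print(f"fixed cost {fixed:>2} -> EOQ = {eoq(demand, fixed, holding):.0f} pieces")
```

Because the batch size grows with the square root of the fixed cost, any questionable fixed-cost figure is carried straight into the batch size, and from there into the flow time.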
4.3 Process Improvements and Impact of Batching

To explicitly illustrate the impact of the batch size we use the model of the improved process, in which the additional assembly table and some other measures are implemented. Figure 8 shows the flow time for a batch of A1 depending on the batch size.
Fig. 8 Flow time for a batch of A1 depending on batch size
The minimum batch size which is necessary to be able to produce at all is 37. Batch sizes below 37 lead to an over-utilized process. From batch sizes 37 to 40 the flow time decreases sharply with the batch size. With batch sizes between 43 and 48 the flow time is at its minimum of 7.5 hours. From batch size 49 onwards the flow time increases
nearly linearly with the batch size. From this it follows that the batch size used in the initial situation is much too high in terms of flow time. This example clearly shows that it really makes sense to search for the flow-time-minimal batch size. In this case, the inappropriate use of the EOQ model, as well as of state-of-the-art extensions, with at least questionable cost information resulted in much too long flow times. In the meantime the process has also been improved in reality. The company has installed the second assembly table, reduced the setup time from 30 minutes to 18 minutes and now uses much lower batch sizes. The sum of the process improvements doubled the output of the assembly station.
5 Conclusions

In this study, two real-world production processes are analyzed using empirical data from the process level. By means of these processes we want to show the impact of variability on process performance. In particular, we use one process to show the impact of external variability in terms of demand variability, and the other process to show the impact of internal variability induced by batching. For the analysis we use MPX, a rapid modeling software based on open queueing networks. The process improvements concerning demand variability show two things. First, a properly designed process, i.e. one with sufficiently high capacity, is much more robust against demand variability. Second, decreasing demand variability, achieved by VMI or other information sharing concepts, leads to improved process performance in terms of WIP and flow time. The presented approach to identify the “ideal”, i.e. flow-time-minimal, batch size is an interesting alternative to existing approaches based on mathematical optimization in a deterministic environment. The advantages of the presented approach are, on the one hand, low hardware and time requirements for finding a solution compared to the classical approaches. On the other hand, it is possible to take the variability of the input parameters into account. Finally, it has to be stated that with our approach it is not possible to find the mathematically optimal solution. Rather, we find an ideal solution in terms of robustness. We assume, following Hopp and Spearman (2006), that it is better to find a solution which works well in many cases than an optimal solution which might be very sensitive to any variation of the input parameters.
References Bertrand J, Fransoo J (2002) Operations management research methodologies using quantitative modeling. International Journal of operations and production management 22(2):241–264
Bitran G, Morabito R (1999) An overview of tradeoff curves in manufacturing system design. Production and operations management 8(1):56–75 Boute R, Disney S, Lambrecht M, Van Houdt B (2006) An integrated production and inventory model to dampen upstream demand variability in the supply chain. European Journal of Operational Research 178(1):121–142 Chen F, Drezner Z, Ryan J, Simchi-Levi D (2000) Quantifying the bullwhip effect in a simple supply chain: The impact of forecasting, lead times, and information. Management science 46(3):436–443 Enns S (1996) Analysis of a process improvement path using rapid modeling. Total Quality Management & Business Excellence 7(3):283–292 Hopp W, Spearman M (2006) Factory physics: foundations of manufacturing management. McGraw-Hill/Irwin Hosoda T, Disney S (2006) On variance amplification in a three-echelon supply chain with minimum mean square error forecasting. Omega 34(4):344–358 Klassen R, Menor L (2007) The process management triangle: An empirical investigation of process trade-offs. Journal of Operations Management 25(5):1015–1034 de Kok T, Janssen F, van Doremalen J, van Wachem E, Clerkx M, Peeters W (2005) Philips electronics synchronizes its supply chain to end the bullwhip effect. Interfaces 35(1):37–48 Li G, Yan H, Wang S, Xia Y (2005) Comparative analysis on value of information sharing in supply chains. Supply Chain Management: An International Journal 10(1):34–46 Lovejoy W (1998) Integrated operations: a proposal for operations management teaching and research. Production and Operations Management 7:106–124 Mapes J, Szwejczewski M, New C (2000) Process variability and its effect on plant performance. International Journal of Operations and Production Management 20(7):792–808 Mitroff I, Betz F, Pondy L, Sagasti F (1974) On managing science in the systems age: Two schemas for the study of science as a whole systems phenomenon. Interfaces 4(3):46–58 Reiner G, Fichtinger J (2009) Demand forecasting for supply processes in consideration of pricing and market information. International Journal of Production Economics 118(1):55–62 Rungtusanatham M, Choi T, Hollingworth D, Wu Z, Forza C (2003) Survey research in operations management: historical analyses. Journal of Operations management 21(4):475–488 Schmidt G (2005) The OM Triangle. Operations Management Education Review 1(1):87–104 Silver E (2004) Process management instead of operations management. Manufacturing & Service Operations Management 6(4):273–279 Suri R, Diehl G, de Treville S, Tomsicek M (1995) From CAN-Q to MPX: Evolution of queuing software for manufacturing. Interfaces 25(5):128–150 de Treville S, van Ackere A (2006) Equipping students to reduce lead times: The role of queuing-theory-based modeling. Interfaces 36(2):165
Improving Business Processes with Rapid Modeling: the Case of Digger Reinhold Schodl, Nathan Kunz, Gerald Reiner and Gil Gomes dos Santos
Abstract Rapid Modeling is a method for modeling operational processes as a network of queues and analyzing them by applying queuing theory. Despite its great potential for solving problems in operations management, Rapid Modeling receives rather limited attention among practitioners. Therefore, this paper presents a teaching case for students and managers to illustrate the possibilities of Rapid Modeling. Moreover, the teaching case aims to raise awareness of the importance of considering both financial aspects and process management principles when improving processes. For that purpose, theoretical foundations for an integrated analysis and evaluation are discussed and applied in a practical context. The system under study is a real-life non-profit organization that is challenged by its growth strategy. The paper demonstrates how software based on queuing theory can be applied to find a solution in a quick and structured way. Key words: teaching case, operations management, queuing theory, Rapid Modeling
Reinhold Schodl (B), Nathan Kunz, Gerald Reiner and Gil Gomes dos Santos Enterprise Institute, Faculty of Economics, University of Neuchâtel, Avenue A.-L. Breguet 1, 2000 Neuchâtel, Switzerland, e-mail:
[email protected] Nathan Kunz e-mail:
[email protected] Gerald Reiner e-mail:
[email protected] Gil Gomes dos Santos e-mail:
[email protected]
1 Theoretical Foundations

A fundamental question of operations management is which level of capacity is appropriate for production. In other words, how many employees and machines are required to meet expected demand? As an introduction to the teaching case, the theoretical aspects of these questions are discussed in this chapter. Capacity utilization is an indicator measuring to what degree the provided capacity is actually used. Therefore, capacity utilization helps to identify whether there is too much or too little capacity. Capacity utilization can be expressed as the ratio between the actual working time and the available time of labor and machinery. Labor and machinery costs are important contributing factors to the cost of goods sold and, as a consequence, determine profit or loss. Therefore, determining the right capacity level has great financial implications. From a financial point of view, high productivity is a major goal. Productivity expresses the output of production per unit of input, where the measurement units can be defined in terms of quantity or time. For instance, labor productivity can be expressed as the ratio between the produced output in time equivalents and the available time of labor. Generally, there is a positive relationship between utilization and productivity, as illustrated in Fig. 1 (see Al-Darrab, 2000). From a financial perspective, utilization should be maximized because a high utilization level implies a small proportion of unused capacity, which leads to high productivity. For a given utilization level, productivity can be further increased by improving efficiency (see Al-Darrab, 2000). Efficiency is an indicator for "doing things right" and can be expressed as the ratio between the produced output in time equivalents and the actual working time of labor or machinery. The impact of efficiency can be seen in Fig. 1. To conclude, managers must address utilization and efficiency to ultimately achieve good financial results.
Fig. 1 Productivity
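The three ratios discussed above can be written down directly. The short sketch below is our own illustration; the available time follows from the case (8 h per day, 260 days per year), while the working-time and output figures are assumed values. By the definitions given, productivity equals utilization times efficiency.

```python
# Definitions from the text:
#   utilization  = actual working time / available time
#   efficiency   = output (in time equivalents) / actual working time
#   productivity = output (in time equivalents) / available time
available_time = 2080.0           # h per worker and year (8 h/day * 260 days) -- from the case
actual_working_time = 1750.0      # h actually worked on orders -- assumed value
output_time_equivalents = 1580.0  # standard hours of output produced -- assumed value

utilization = actual_working_time / available_time
efficiency = output_time_equivalents / actual_working_time
productivity = output_time_equivalents / available_time

print(f"utilization  = {utilization:.2f}")
print(f"efficiency   = {efficiency:.2f}")
print(f"productivity = {productivity:.2f}")
assert abs(productivity - utilization * efficiency) < 1e-9  # productivity = utilization * efficiency
```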
However, obeying that rule will not necessarily result in a successful business. Apart from “doing things right,” an additional aim must be achieved. That aim
is high effectiveness, or "doing the right things." Regardless of the industry and company type, customer requirements must always be taken into account when production processes are designed and managed. Customer satisfaction is defined by not only the physical quality of the products but also the quality in terms of time. Delivery speed and reliability are important competitive factors and, to a great extent, a direct result of the production processes' design. The initial question, namely which level of capacity is appropriate for production, must be answered from a process management point of view as well. For that purpose, the relationship between capacity utilization and lead time will be discussed. A production order's lead time is the time span between entering and leaving the production process. The lead time consists of process times, that is, the time spent adding value by utilizing labor and machines, and waiting times, which occur when labor or machines are required but not available. The probability that a resource will be requested but unavailable, resulting in waiting time, increases with higher capacity utilization. Figure 2 illustrates the relationship between capacity utilization and lead time (see Anupindi et al, 1999).

Fig. 2 Lead time depending on utilization (curves for high and low variability; the process time forms the lower bound and lead time grows without limit as utilization approaches 100%)
Accordingly, for a given maximum average lead time, which derives from customer requirements, the maximum capacity utilization can be determined. The relationship between capacity utilization and lead time is not linear because waiting time rises progressively with increasing utilization. The variability of time (e.g., process times, setup times, the time between the arrival of orders, and machine failure times) is the reason that lead time approaches infinity when capacity utilization approaches 100% (see Hopp and Spearman, 1996). Variability means that for a group of entities, the value of an attribute varies between the entities. For instance, consider a batch in production as an entity and the time needed to set up a machine to process that batch as its attribute. In a real production system, the setup time would usually vary for each batch. Variability can be the effect of random (e.g., quality problems and machine failures) or controllable (e.g., diverse product portfolio and planned maintenance) causes. The coefficient of variability is a common measure for variability
and is defined as the ratio of the standard deviation to the mean. Figure 2 shows the impact of variability on lead time. If variability can be reduced (e.g., through employee training and standardized products), the lead time for a given utilization can be decreased (see Suri, 1998). To conclude, managers must monitor utilization and variability to achieve process outputs that are in line with customer requirements. It follows from the above discussion that the search for the optimal capacity must not only focus on financial aspects but also take into account process management principles. Such an integrated analysis will be applied to solve the problem described in the next chapter.
2 Problem Description

2.1 Company Background

Numerous countries suffer the plague of anti-personnel mines. To restore decent living conditions for many people around the world, millions of mines must be removed. However, manual demining can only clear 5 to 50 square meters per day. The Digger Foundation (see Digger, 2010) is a Swiss humanitarian organization whose aim is to develop and manufacture machines for demining in order to increase working efficiency and reduce the danger for demining personnel. In a market dominated by commercial companies, the Digger Foundation is the only non-profit manufacturer of demining machines. Digger's current product, the DIGGER D-3, is an armored machine that can withstand detonations, while being lightweight enough to be transported into areas with poor road infrastructure. Furthermore, it can be remote controlled in order to guarantee total safety for the operator, who stays a safe distance away. The DIGGER D-3 is not the only demining machine on the market, but several unique characteristics differentiate it from those of competitors. This machine was designed and manufactured to meet the specific needs of humanitarian mine clearing, such as an affordable purchase price, low operating cost, simple maintenance, and high flexibility due to its multi-tool concept. Digger's production facility is located in the canton of Bern, in Switzerland, in an old military compound. In order to benefit from high flexibility at low fixed costs, Digger works with several subcontractors for the supply of prefabricated components. In the past, only the final assembly was carried out in Digger's workshop. However, Digger has started to internalize almost all welding tasks by hiring welders. This configuration allows Digger to better adapt production lead time to customer requirements.
2.2 Challenge

Due to its success, Digger's product demand is steadily increasing. This requires the company to adapt its strategy to fulfill its humanitarian mission as effectively as possible. Since being established, Digger has received financial support from thousands of donors from both the public and private sector. However, donations are neither easily scalable nor predictable in nature. Therefore, Digger's long-term objective is to become financially independent and rely primarily on the sale of its products. However, this increases the pressure to raise revenues while keeping production costs low. In recent years, production capacity has been increased by the employment of more salaried staff and the acquisition of additional equipment. Forecasts suggest that capacity still might not be sufficient in the future. An important success factor for Digger is its capacity for fast and reliable delivery. However, there are concerns that its labor and machinery cannot cope with increasing demand, resulting in excessively long delivery times. This could endanger Digger's growth plans. To successfully supply as much demining equipment as possible, Digger has to find the right balance between acceptable cost and sufficient capacity. Consequently, the following questions must be answered: How much labor and equipment is adequate for the planned growth strategy? Can process performance be improved without increasing capacity?
2.3 Production Details¹

Digger produces one main product, the DIGGER D-3, with an expected lead time of, at maximum, six months. Details of product demand and the bill of material can be seen in Tables 1 and 2, respectively.

Table 1 Demand data

Year 0 | Year 1 | Year 2
3 units | 6 units | 9 units
The strategy for manufacturing the DIGGER D-3 is make-to-order. Production operates 8 hours per day, 260 days per year. Data describing the current production's resources is represented in Table 3, and the production steps are given in Table 4. For each activity, labor and machinery are simultaneously utilized. The production lot size of the product is 1, while the lot size of each component equals the quantity needed to manufacture 1 unit of the product. Batch sizes for the transport between stations are the same as the production lot sizes.
¹ In order to respect Digger's confidentiality requirements, the market, operational, and financial figures presented in this case were changed and do not represent the actual circumstances.
Table 2 Bill of material
Product / Components | Units
DIGGER D-3 | 1
├Translation | 1
│ ├Tensioner Support | 2
│ ├Rear Support | 2
│ └Oscillating Support | 8
├Rear Hood | 1
├Front Hood | 1
├Internal Assembly 5 | 1
│ ├Internal Assembly 4 | 1
│ │ └Internal Assembly 3 | 1
│ │ ├Internal Assembly 2 | 1
│ │ │ ├Internal Assembly 1 | 1
│ │ │ │ └Chassis | 1
│ │ │ │ └Hull | 1
│ │ │ └Power Train | 1
│ │ │ └Engine | 1
│ │ └Arm Support | 1
│ └Diesel Tank | 1
├Support Triangle | 1
├Front Arm | 1
└Electronics | 1
├Electronic Control Unit | 1
├Remote Control Unit | 1
└Cable Strand | 1
The variability of the production system is generally moderate, so for the time between the arrival of orders, the operations' setup and process times, and the machines' repair times, the coefficient of variability is generally estimated to equal 1. Financial data related to revenue, as well as costs for labor, machinery, and raw materials and supplies, is summarized in Table 5.
3 Solution

For different levels of demand, based on the pursued growth strategy, suitable capacity levels in terms of labor and machines have to be determined. Digger is a humanitarian organization, but profit must be generated and reinvested to achieve future growth. Therefore, the right amount of capacity must be found to maximize profit in the long run.
Table 3 Resource data

Resource | Type | Number in Pool | Unavailability | Station
Electric Lab | Machine | 1 | On average 10 h every 1000 h | Electronic-Assembling
Hoist 1 | Machine | 1 | On average 10 h every 500 h | Welding-Assembling
Hoist 2 | Machine | 3 | On average 10 h every 500 h | General-Assembling
Painting Cabin | Machine | 1 | On average 20 h every 500 h | Painting
Welding Machine | Machine | 2 | On average 20 h every 1000 h | Welding
Assembler | Labor | 3 | 12% of total time | Painting, General-Assembling
Electronics Technician | Labor | 1 | 12% of total time | Electronic-Assembling
Welder | Labor | 2 | 12% of total time | Welding-Assembling, Welding
profit = (revenue per unit − variable cost per unit) × output − fixed cost.

Moreover, lead time constraints, resulting from customer requirements, must be taken into account:

average lead time ≤ accepted average lead time.

In this way, the financial and process perspectives are integrated when determining the optimal capacity. Dynamic cause-and-effect relationships and process variability prevent a simple deterministic calculation of the output and lead time. In general, simulation can be applied, but the building and application of simulation models is often time consuming and fairly complex. Rapid Modeling can help to overcome those limitations. Rapid Modeling is a method with which to model operational processes as a network of queues and analyze them based on analytical queuing theory (see Suri, 1989). Specialized software can be used to model and analyze production processes in a quick and intuitive way (see Suri et al, 1995; Rabta et al, 2009). For this case study, the software "Rapid Modeler" is used, which has been developed within the project "Keeping Jobs in EU," funded by the European Community's Seventh Framework Programme (see University of Neuchâtel, 2010).
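As an illustration of the financial side of this evaluation, the following sketch (our own code, not the Rapid Modeler tool) evaluates the profit relation above with the financial data of Table 5 and the capacity levels of Table 6; the corresponding lead times still have to come from the queueing model.

```python
# Financial data from Table 5 (CHF)
REVENUE_PER_UNIT = 520_000
MATERIAL_PER_UNIT = 300_000          # variable cost per unit
LABOR_PER_PERSON = 80_000            # per year, all labor types
DEPRECIATION = {"Hoist": 10_000, "Welding Machine": 35_000,
                "Painting Cabin": 30_000, "Electronic Lab": 20_000}

def profit(output_units, hoists, welding_machines, painting_cabins, labs, workers):
    """profit = (revenue - variable cost) * output - fixed cost."""
    fixed = (workers * LABOR_PER_PERSON
             + hoists * DEPRECIATION["Hoist"]
             + welding_machines * DEPRECIATION["Welding Machine"]
             + painting_cabins * DEPRECIATION["Painting Cabin"]
             + labs * DEPRECIATION["Electronic Lab"])
    return (REVENUE_PER_UNIT - MATERIAL_PER_UNIT) * output_units - fixed

# Capacity from Table 6 (hoists = Hoist 1 + Hoist 2, workers = all labor pools)
print(profit(3, hoists=4, welding_machines=2, painting_cabins=1, labs=1, workers=6))   #  20 000 (Year 0)
print(profit(6, hoists=6, welding_machines=4, painting_cabins=1, labs=2, workers=12))  #  90 000 (Year 1)
print(profit(9, hoists=8, welding_machines=5, painting_cabins=1, labs=2, workers=14))  # 535 000 (Year 2)
```

These figures match the profit row of Table 7; whether the associated lead times stay below the accepted six months is the question the queueing analysis has to answer.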
Table 4 Routings

Product / Component | Step | Station | Setup Time per Batch | Process Time per Unit
DIGGER D-3 | 1 | General-Assembling | 20 h | 320 h
Translation | 1 | General-Assembling | 20 h | 120 h
Tensioner Support | 1 | Welding-Assembling | 5 h | 10 h
Tensioner Support | 2 | Welding | 5 h | 25 h
Tensioner Support | 3 | Painting | 1 h | 3 h
Rear Support | 1 | Welding-Assembling | 2 h | 5 h
Rear Support | 2 | Welding | 5 h | 20 h
Rear Support | 3 | Painting | 1 h | 1 h
Oscillating Support | 1 | Welding-Assembling | 2 h | 3 h
Oscillating Support | 2 | Welding | 5 h | 15 h
Oscillating Support | 3 | Painting | 1 h | 1 h
Rear Hood | 1 | Welding-Assembling | 5 h | 15 h
Rear Hood | 2 | Welding | 5 h | 80 h
Rear Hood | 3 | Painting | 1 h | 9 h
Front Hood | 1 | Welding-Assembling | 3 h | 10 h
Front Hood | 2 | Welding | 5 h | 50 h
Front Hood | 3 | Painting | 1 h | 6 h
Internal Assembly 5 | 1 | General-Assembling | 10 h | 35 h
Internal Assembly 4 | 1 | General-Assembling | 4 h | 20 h
Internal Assembly 3 | 1 | General-Assembling | 4 h | 30 h
Internal Assembly 2 | 1 | General-Assembling | 4 h | 28 h
Internal Assembly 1 | 1 | General-Assembling | 4 h | 18 h
Chassis | 1 | General-Assembling | 10 h | 35 h
Hull | 1 | Welding-Assembling | 5 h | 25 h
Hull | 2 | Welding | 10 h | 110 h
Hull | 3 | Painting | 2 h | 18 h
Power Train | 1 | General-Assembling | 5 h | 12 h
Engine | 1 | General-Assembling | 2 h | 13 h
Arm Support | 1 | Welding-Assembling | 3 h | 18 h
Arm Support | 2 | Welding | 5 h | 55 h
Arm Support | 3 | Painting | 1 h | 7 h
Diesel Tank | 1 | Welding-Assembling | 2 h | 8 h
Diesel Tank | 2 | Welding | 5 h | 30 h
Diesel Tank | 3 | Painting | 1 h | 5 h
Support Triangle | 1 | Welding-Assembling | 2 h | 4 h
Support Triangle | 2 | Welding | 5 h | 20 h
Support Triangle | 3 | Painting | 1 h | 3 h
Front Arm | 1 | Welding-Assembling | 3 h | 18 h
Front Arm | 2 | Welding | 10 h | 100 h
Front Arm | 3 | Painting | 2 h | 6 h
Electronics | 1 | Electronic-Assembling | 5 h | 65 h
Electronic Control Unit | 1 | Electronic-Assembling | 5 h | 30 h
Remote Control Unit | 1 | Electronic-Assembling | 2 h | 33 h
Cable Strand | 1 | Electronic-Assembling | 10 h | 110 h
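To show how Tables 2–4 can be turned into a first rough capacity check before any queueing analysis, the sketch below (our own illustration, not the Rapid Modeler solution) sums the yearly workload of the Welding station for the Year 1 demand of 6 units and compares it with the available machine hours. It deliberately ignores waiting times, variability and labor availability, so it only indicates whether the station is structurally overloaded.

```python
# Welding steps from Table 4: (component, units per DIGGER D-3 from Table 2,
#                              setup hours per batch, process hours per unit)
welding_steps = [
    ("Tensioner Support", 2, 5, 25), ("Rear Support", 2, 5, 20),
    ("Oscillating Support", 8, 5, 15), ("Rear Hood", 1, 5, 80),
    ("Front Hood", 1, 5, 50), ("Hull", 1, 10, 110),
    ("Arm Support", 1, 5, 55), ("Diesel Tank", 1, 5, 30),
    ("Support Triangle", 1, 5, 20), ("Front Arm", 1, 10, 100),
]

demand = 6                                    # DIGGER D-3 units in Year 1
hours_per_d3 = sum(setup + units * proc       # one component batch per end product
                   for _, units, setup, proc in welding_steps)
workload = demand * hours_per_d3              # welding hours needed per year

machines = 4                                  # welding machines in Year 1 (Table 6)
availability = 1 - 20 / 1000                  # Table 3: down 20 h every 1000 h
capacity = machines * 8 * 260 * availability  # machine hours per year

print(f"workload  = {workload:.0f} h")
print(f"capacity  = {capacity:.0f} h")
print(f"rough utilization = {workload / capacity:.0%}")
```

A low figure here only means the station is not structurally overloaded; whether the six-month lead time can be met additionally depends on queueing effects and on the labor pools, which is exactly what the Rapid Modeling analysis captures.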
Table 5 Financial data

Financial Parameter | Value
Revenue | 520 000 CHF/unit
Labor cost - all types | 80 000 CHF/year
Depreciation - Hoist | 10 000 CHF/year
Depreciation - Welding Machine | 35 000 CHF/year
Depreciation - Painting Cabin | 30 000 CHF/year
Depreciation - Electronic Lab | 20 000 CHF/year
Cost of raw materials and supplies | 300 000 CHF/unit
By analyzing the effects of different capacity configurations, solutions can be determined which lead to high profits and comply with lead time restrictions. When comparing different scenarios, capacity utilization should be examined to find a satisfactory solution in a structured way. To avoid unbalanced production, capacity utilization should be below a chosen threshold level for all stations. Table 6 shows the number of resources in the current situation (i.e., Year 0) and that of a possible solution for the next 2 years. Operational and financial results based on these capacity levels are summarized in Table 7 for each year. With the chosen amount of labor and machinery for Year 1 and Year 2, output in accordance with the defined growth strategy can be achieved, lead time will fall below the accepted value, and profit can be generated. Note that the applied method allows for a comparison of the effects of different scenarios but does not guarantee that an optimal solution will be found. Therefore, other reasonable configurations might exist.

Table 6 Capacity of solution

Resource | Year 0 | Year 1 | Year 2
Electronic Lab | 1 | 2 | 2
Hoist 1 | 1 | 2 | 2
Hoist 2 | 3 | 4 | 6
Painting Cabin | 1 | 1 | 1
Welding Machine | 2 | 4 | 5
Assembler | 3 | 5 | 6
Electronics Technician | 1 | 2 | 2
Welder | 2 | 5 | 6
So far the focus has been on the adjustment of process capacity. But can process performance also be improved without changing capacity levels? Generally, that is possible by measures increasing efficiency and by measures reducing variability. Different cases demonstrate how organizational measures can improve performance before investing in additional capacity (see Bourland and Suri, 1992; Enns, 1996). Efficiency can be positively influenced, for instance, through the use of advanced technology. For example, Digger could introduce new welding techniques
Table 7 Results of solution

Measure | Year 0 | Year 1 | Year 2
Output | 3 units | 6 units | 9 units
Lead time | 5.8 months | 5.9 months | 5.9 months
Revenue | 1 560 000 CHF | 3 120 000 CHF | 4 680 000 CHF
Depreciation | 160 000 CHF | 270 000 CHF | 325 000 CHF
Labor cost | 480 000 CHF | 960 000 CHF | 1 120 000 CHF
Cost of raw material and supplies | 900 000 CHF | 1 800 000 CHF | 2 700 000 CHF
Profit | 20 000 CHF | 90 000 CHF | 535 000 CHF
that shorten process times. Variability can be addressed, for example, through learning effects. If Digger scales up production, employees might have more routine, leading to less variable process times. Apart from these examples, other possibilities for influencing efficiency and variability should be considered. The effects on process performance can be analyzed with Rapid Modeling by adjusting the respective parameters in the model.
4 Conclusion

Decision makers in the field of operations management are frequently challenged by complex problems, which have to be solved with limited time and resources. Therefore, managers and students should be aware of possibilities allowing for the improvement of processes with reasonable effort, while also avoiding oversimplification. This teaching case outlines the potential of Rapid Modeling and highlights the importance of an integrated analysis of processes by combining financial and operational performance measures. The solution of the real-life problem of this teaching case illustrates basic concepts in operations management as well as the possibilities provided by Rapid Modeling software.

Acknowledgements This work is supported by the European Union's Seventh Framework Programme (Marie Curie Industry-Academia Partnerships and Pathways, Keeping Jobs in EU, project number 217891) and the University of Neuchâtel (Initiative de soutien à des démarches d'enseignement innovantes 2009/2010).
References Al-Darrab I (2000) Relationships between productivity, efficiency, utilization, and quality. Work Study 49(3):97–103
Anupindi R, Deshmukh S, Chopra S, van Mieghem J, Zemel E (1999) Managing business process flows. Prentice-Hall, Upper Saddle River Bourland K, Suri R (1992) Spartan industries. Tech. rep., Case Study, Tuck School of Business, Dartmouth College, Hanover, NH Digger (2010) Digger D.T.R. URL http://www.digger.ch Enns S (1996) Analysis of a process improvement path using Rapid Modeling. Total Quality Management & Business Excellence 7(3):283–291 Hopp W, Spearman M (1996) Factory Physics: foundations of manufacturing management. Irwin Inc., Chicago Rabta B, Alp A, Reiner G (2009) Queueing networks modelling software for manufacturing. In: Reiner G (ed) Rapid Modelling for Increasing Competitiveness: Tools and Mindset, Springer, London Suri R (1989) Lead time reduction through Rapid Modeling. Manufacturing Systems 7(7):66–68 Suri R (1998) Quick Response Manufacturing: a companywide approach to reducing lead times. Productivity Press Suri R, Diehl G, de Treville S, Tomsicek M (1995) From CAN-Q to MPX: Evolution of queuing software for manufacturing. Interfaces 25(5):128–150 University of Neuchâtel (2010) Keeping Jobs in Europe. URL http://www2.unine.ch/iene-kje
Part III
Rapid Modelling in Services
Quick Response Service: The Case of a Non-Profit Humanitarian Service Organization Arda Alp, Gerald Reiner and Jeffrey S. Petty
Abstract The focus of this paper is to explore and discuss the applicability of traditional operations management principles within the context of humanitarian service operations (HSO), illustrated by a humanitarian nonprofit service organization (HNPSO). We want to make two major contributions related to performance improvement based on lead time reduction and performance measurement. First, we develop an improvement framework to analyze and reduce the service lead time in parallel with the provision of improved capacity management. The results of this study show that lead time reduction strategies in combination with queuing theory based modeling techniques (Suri, 1998, 2002; Reiner, 2009) help HNPSO managers effectively manage their service provision processes. Such an integrated and profound capacity management enables the organization to deal with short-term demand fluctuations and long-term growth. In this way managers can find the balance between the provision of daily operations and the maintenance of monetary income to secure the growth of the organization and continuous improvement. Furthermore, we highlight the benefits and challenges of an aggregated performance measurement approach in an HNPSO. Our approach links operational, customer-oriented, and financial performance measures and gives management a competitive advantage that is more relevant than that of a traditional performance measurement system. Considering the relatively limited operations management applications in nonprofit performance measurement systems, this paper contributes to both research and practice.
Arda Alp (B) and Gerald Reiner Enterprise Institute, University of Neuchâtel, Rue A.-L. Breguet 1, 2000 Neuchâtel, Switzerland, e-mail:
[email protected] Gerald Reiner e-mail:
[email protected] Jeffrey S. Petty Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London, e-mail:
[email protected]
Key words: non-profit humanitarian service organization, performance measurement, lead time reduction, rapid modeling
1 Introduction

During recent years humanitarian organizations have become the focus of more and more interest among researchers in the field of operations management as well as supply chain management. Since the 1990s, the academic literature in these fields has made remarkable developments (Chandes and Paché, 2010). Furthermore, the non-profit sector is increasingly perceived as 'big business' and is also becoming important in several countries (US: ∼10% of GDP, Ireland: ∼8.8% of GDP, UK: ∼2.25% of GDP; Wainwright, 2003; Gunn, 2004; Taylor et al, 2009). Our study will focus on specific organizations in this broad field, i.e., humanitarian nonprofit service organizations (HNPSOs) will be investigated in more detail. Our aim is to explore and discuss the applicability of traditional operations management principles within the context of humanitarian service operations (HSO). Furthermore, we want to make two major contributions related to performance measurement and performance improvement based on lead time reduction. As previously mentioned, traditional operations management (developed in a manufacturing context) may not be sufficient because, owing to the differences between manufacturing and services, specific concepts and approaches should be developed for service operations management. Craighead and Meredith (2008) and Voss (2005) provided some evidence for this requirement. Service environments and the related processes can typically be characterized as ”make-to-order”. Therefore, specific strategies have to be applied, e.g. demand or process variability cannot be hedged by safety inventory. For analyzing ”make-to-order” systems queuing theory plays an important role. A queuing system usually consists of an arrival process, a production or service process, and a queue (Hopp and Spearman, 2007). This model clearly shows that waiting times rise with process time, with average utilization and with variability. For managers it follows that in case of high variability it is not reasonable to maximize average utilization, as flow time will rise sharply, and customers may have to wait too long. Hence, an important performance measure in service operations is the waiting time. Furthermore, Bielen and Demoulin (2007) and Davis and Vollmann (1990) provided evidence for the strong link between waiting time and customer satisfaction. It is well known in the standard Operations Management (OM) literature that waiting times can be reduced by integrated demand and supply management (Heikkilä, 2002). We want to investigate how this approach can be transferred to HSOs in general as well as to HNPSOs more specifically. The performance measurement of an HSO is an important issue, as donors require accounts of activities, particularly from HNPSOs, without forgetting that the clients (volunteers, affected populations, etc.) run the risk of being the silent victims of poor performance in terms of processes and output (Chandes and Paché, 2010).
A fundamental idea in the field of OM for the evaluation of process improvement is to first analyze the alternatives based on performance measures (e.g. time, WIP, etc.). Afterwards, these performance measures provide the input for the financial evaluation. We will show that this approach (in particular the performance evaluation) is vital for nonprofit organizations (NPOs) and voluntary and charitable organizations (VCOs), as opposed to relying upon classical financial performance measures. Current performance measures of nonprofit organizations are accountability focused, with weak linkages to their continuous improvement strategy or without any link to operational, customer-oriented measures (Moxham, 2009a; Poister, 2003). A recent study by Oloruntoba and Gray (2009) indicates that only a limited number of studies focus on the link between customer satisfaction and the performance of non-profit service organizations and providers. Beamon and Balcik (2008) presented some performance metrics in the humanitarian organization context that address the above-mentioned problem. The main aspect is the differentiation between efficiency (utilization, costs, etc.) and effectiveness (customer satisfaction) dimensions. What managers need to do is to find the cause and effect relationships between the different performance measures rather than choosing between them (Behn, 2003; Banker et al, 2000). Our overall objective is to develop and present an approach for an HNPSO that enhances its operations and, ultimately, increases its performance. This increased performance should improve its competitiveness and positioning among other HNPSOs. Since customer satisfaction is a step towards competitiveness (Johri, 2009), our secondary objective is to improve customer satisfaction (internal and external) by matching capacity with demand (efficient management of capacity). At this point we demonstrate how lead time reduction strategies and rapid modeling techniques (De Treville et al, 2004; Suri, 1998, 2002; Singh, 2009; Reiner, 2009) can help VCOs and NPOs to achieve these objectives. Recent studies in the VCO and NPO sector (Taylor et al, 2009; Moxham and Boaden, 2007; Moxham, 2009b) underscore the importance of the link between an organization's improvement strategy, financial decisions, and its performance measurement system. NPOs and VCOs gain more competitive advantage if performance measurement is used as part of the improvement strategy (Moxham, 2009a,b). Therefore, NPOs and VCOs need more support to link their continuous improvement strategies and performance measurement systems. This study will be of interest to both managers and academics because our findings add to the limited knowledge on OM, rapid modeling, and lead time reduction techniques in the non-profit sector. In Section 2, we explain the improvement framework, focusing on the analysis phase and the modeling. In Section 3, we provide an illustration of the improvement framework. We review the current status of the HNPSO and describe how we implemented the analysis and modeling within the organization. We then discuss our initial findings and highlight improvement possibilities. Finally, in Section 4, we provide our conclusions, highlight limitations, and discuss further research possibilities.
2 Improvement Framework: Analysis Phase and Modeling

In this section we introduce a generic improvement framework that can be used for HNPSOs. The starting point is the analysis phase, where we use several observations to analyze the current state of the HNPSO based on three main subject areas. (i) Communication: We examine how the business processes are connected to each other and whether communication is carried out effectively or not; if not, what would make communication effective? We also focus on how the HNPSO's departments and employees transfer information internally and how this information is stored. (ii) Capacity management: We analyze how the HNPSO can meet current and future business requirements in a cost-effective manner using its existing capacity. At this point, it is crucial for the HNPSO's management to know whether the existing capacity is sufficient to achieve the organization's goals. We also determine which activities should be focused on in order to improve the organization's performance and efficiency. (iii) Performance management: Here we pose two fundamental questions. Do the performance measures adequately provide information about the services and the processes? Do the performance measures provide the information necessary to make smart management decisions? As mentioned above, it is necessary to identify the cause and effect relationships between the different performance measures. Furthermore, we have to collect the specific data necessary for the generation of the analysis model that supports the generic improvement framework, namely:
• Process flow data
• Information flow data
• Performance measures
We use the queuing network software tools Rapid Modeler (http://www.unine.ch/ienekje) and MPX to build the analysis model. These queuing network software tools are designed for manufacturing as well as service processes. The model development process is automated and embedded in the software (Rabta et al, 2009). The idea behind these tools is to model processes as a network of queues, considering the interaction of the nodes. The analysis is based on the principles of queuing theory and node decomposition approaches. The analytical model uses several types of data (i.e. demand data, routing data, resource data, bill of material, production lot sizes) in order to yield critical process performance measures such as capacity utilizations, process lead times, WIP, etc. (Schodl, 2009). Before we are able to analyze different process improvements we have to ensure that the model behaves as intended (verification) and that the model provides the same output data as the real system (validation) (see Kelton et al, 1998; Law et al, 1991 for details). Accurate decision support is only possible with a 'valid' model, i.e. an accurate representation of the system being modeled. Thus, verification and validation are crucial to ensure that the decisions made with the modeling tools are consistent with those that would be made by physically experimenting with the
system (Fig. 1). For instance, validation can be done by comparing the model's mean production lead time with the actual system's average lead time (Schodl, 2009). In the next section we will describe the validation of our model in greater detail.
Fig. 1 Relationship of verification, validation and establishing credibility (adapted from Law et al (1991), Chapter 5, p. 298)
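To give a rough idea of the kind of calculation that tools such as Rapid Modeler and MPX automate, the sketch below treats each resource as a single-server M/M/1 station and applies Little's law to obtain lead time and work in process. The station names and figures are invented for illustration only; they do not describe the case organization's processes.

```python
# Hypothetical sketch of a queueing-network style evaluation: each resource is treated
# as a single-server M/M/1 station; Little's law (WIP = throughput x flow time) gives
# work in process. All numbers are illustrative.

stations = [
    # (name, arrival rate [jobs/day], mean service time [days]) -- invented values
    ("application handling", 4.0, 0.15),
    ("approval",             4.0, 0.20),
    ("placement",            4.0, 0.10),
]

total_flow_time = 0.0
for name, lam, t in stations:
    rho = lam * t                    # utilization of the resource
    wait = rho / (1 - rho) * t       # M/M/1 mean waiting time in the queue
    flow = wait + t                  # flow (lead) time at this station
    total_flow_time += flow
    print(f"{name:22s} utilization {rho:5.1%}  lead time {flow:5.2f} days")

wip = stations[0][1] * total_flow_time   # Little's law with the common arrival rate
print(f"process lead time {total_flow_time:.2f} days, WIP ~{wip:.1f} jobs")
```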
3 Illustration of the Improvement Framework

3.1 Current Status of the Organization

In this section we describe the HNPSO case in detail. In pursuit of its mission, this organization employs a holistic business model which integrates international, local, and online programs that are available to volunteers worldwide. Each program provides meaningful volunteer experiences (i.e., cross-cultural relationships) and support (i.e. development projects, teaching, medical aid, etc.) for communities in over 40 countries (developed as well as developing countries). Here we describe the current status of the HNPSO within the context of the aforementioned three subject areas: (i) Communication: Current communication and synchronization problems, internally among program coordinators and externally with customers, lead to time and capacity related inefficiencies. Some of these inefficiencies come from waiting times: waiting for documents or information from colleagues (work in progress) or for approval (i.e. a signature). A lack of coherence among departments, including ill-defined, unstructured ways of communicating and a lack of common documentation and information, causes extra workloads, repeated activities and increased processing times. This naturally leads to unexpected delays, jobs waiting in queues, and over-utilization of the labor force. (ii) Capacity management: In the context of the HNPSO's programs, fluctuations in customer applications during peak periods create a capacity complication. This problem leads to increased workloads, unexpected waiting times and increased response times, and finally has an impact on internal and external customer satisfaction.
Management needs better decision support for the allocation of the current workforce to solve the problem of potential customer loss. Currently it is not clear whether the current workforce is sufficient for proper customer tracking and other activities. (iii) Performance management: Currently two major points are not clear to the management team. Do the HNPSO's performance measures adequately provide information about the organization's services and processes? Do the HNPSO's performance measures provide the information necessary to make smart management decisions? More attention is needed on how their performance measures are defined and used. Relatively little importance is placed on operational measures, and the focus on customer related measures is inadequate. As such, much effort is needed on customer related evaluations and definitions related to customer satisfaction. In this sense, the tracking of customers needs to be integrated with the customer related measures. The HNPSO management team should also consider performance criteria that are important for their funders, which are primarily donors, since donors may wish to measure the effective and efficient use of their donations (Wainwright, 2003).
3.2 Rapid Modeling

In this subsection we describe how we implemented the analysis model within the context of the organization. The model uses only the labor force (e.g. paid, volunteer, etc.) as its resources; there are no machine-type resources. The products are all in the form of services such as volunteer programs, development projects, etc. The outputs are the lead time values for each program, the labor utilizations and the number of programs accomplished. Using the number of programs accomplished, we can also calculate the number of customers served. Resources, services, outputs and outcomes are illustrated in Fig. 2. Currently there is no activity to measure the effects of outputs on outcomes (initial, intermediate, long term). Verification and validation of the model were done using current customer order lead times as a benchmark value (Fig. 3). Data collection and evaluation were conducted during a three-month period. We collected data on current measurement practices using structured and ad-hoc interviews. In addition, we also used other text-based records such as the annual board meeting reports, documentation associated with the monitoring and evaluation of customers (i.e. alumni surveys), performance data (i.e. application numbers and volumes, etc.) and website related metrics, figures, rankings, process definitions, etc. In addition, we also examined the existing process flow and collected the related task completion times in order to use them in our capacity related calculations.
Fig. 2 Generic program logic model for the case NPO (adapted from Poister (2003) and Moxham and Boaden (2007)): resources (7 paid full-time staff, 6 full-time volunteers, 11 part-time volunteers, 1 building, funds, donations and program related incomes) feed the services (Volunteer Program – International; Development Projects – Local; Alumni Clubs – Local; Awareness Project – Local), which produce the outputs (lead time values for each program, labour utilizations, number of customers served, number of programs/projects accomplished); external influences and the initial, intermediate and long-term outcomes are not currently measured
Fig. 3 Validation of the model
3.3 Initial Findings

We observe significant variation among labor utilizations. The current utilization and idle time percentages are given in Table 1. Subsequently, Table 2 shows the calculated flow time efficiency of the HNPSO, using the current lead time and the real service time to indicate the amount of waiting time associated with the process. We observe that none of the HNPSO's programs achieves a benchmark value representative of the industry (Chopra, 2010).

Table 1 System utilization
Table 2 System flow time efficiency
We analyze the effect of possible process time variations on the flow time and utilization of all programs: how much the service times of critical processes have to be reduced in order to achieve a level of overall process utilization that guarantees reliable system performance. For instance, the current service time of some internal activities can take up to ∼1–2 days with a labor utilization of 81.3%–93.7%. However, it is possible to reduce the service time of these bottleneck activities from days down to a range of 480–20 minutes, with labor utilization of approximately 76.3%–77%.
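The flow time efficiency reported in Table 2 is the share of the total lead time spent on actual value-adding service time (Chopra, 2010). The short sketch below shows this calculation with invented numbers; it does not reproduce the case figures.

```python
# Hypothetical example of the flow time efficiency calculation behind Table 2:
# flow time efficiency = value-adding service time / total flow (lead) time.
programs = {
    # program: (total lead time [hours], actual service time [hours]) -- invented values
    "Program 1": (320.0, 18.0),
    "Program 2": (150.0, 12.0),
}
for name, (lead_time, service_time) in programs.items():
    efficiency = service_time / lead_time
    waiting = lead_time - service_time
    print(f"{name}: flow time efficiency {efficiency:.1%}, waiting time {waiting:.0f} h")
```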
3.4 Improvement Possibilities

In this section, we describe several improvements that can be pursued in parallel, driven by communication related, capacity management related, and performance system related actions: (i) Communication related improvement and related effects: The main goals should be to make communication and collaboration secure, cheap, efficient and easy to manage. Efficiency can be increased with communication and collaboration platforms, as well as unified communications technologies. The integration of processes among several departments is inevitable. Better process definitions (i.e. data hierarchy, proper documentation), simplified processes (i.e. fewer unnecessary tasks) and written procedures (i.e. less preparation, fewer mistakes) will also help. Communication improvement has a significant positive effect on system performance. Better process definitions facilitate cross-functional communication, which lowers workloads and dramatically decreases the flow time. A lead time of a few days is achievable, compared to the current situation with lead times of several months (Table 3).

Table 3 Effects of communication improvements
(ii) Capacity related improvement and related effects: Improving the capacity management of the HNPSO (i.e. applying integrated capacity management) can help the organization to compensate for short-term demand fluctuations and long-term growth. Better training and an increase in the multi-skilled workforce can provide benefits such as a reduction of the total required workforce, labor cost savings and flexibility (i.e. less dependency on program managers and top management). We analyze capacity related effects under four scenarios: (1) the current situation, (2) a moderate demand increase, (3) a demand increase realistic for long-term growth, and (4) a demand increase reflecting short-term fluctuations. The demand increase factors are based upon actual demand data. Table 4 shows how active capacity management can address a growth in demand while ensuring process performance. The results indicate that the current system is very sensitive to any small variation in demand. If the improvement suggestions are not taken into consideration, the current system has very limited capacity to hedge any demand variation.
Table 4 Effect of capacity management
Therefore, these capacity-based recommendations can help the organization to deal with short-term demand fluctuations and long-term growth. In this sense, if the paid workforce is increased by one, the system will be able to hedge moderate demand increases (Scenario 2). Scenario 3 reflects a realistic demand increase for long-term growth, and the system can handle this with one more paid employee. For the given capacity adjustments, the flow time decreases for both Program 1 and Program 2. Scenario 4 reflects a realistic demand increase for short-term growth, and the system can handle this with two additional paid employees, while the flow time decreases for all programs. (iii) Performance system related improvement and related effects: A performance system which links operational, customer oriented, and financial performance measures gives management information which is more relevant than that of a traditional performance system. Management needs better decision support, and for this a comprehensive performance measurement system is essential. Our performance related calculations consider fundamental financials such as the cost of goods sold, unit payroll per person, revenues, gross margin, net income, and the profit and loss statement. In addition, we calculate a theoretical value for the number of lost customers. We assume that customers can be lost due to unsatisfied demand and long lead times. In this sense: (i) lost customers (demand) = actual demand − satisfied demand; (ii) lost customers (lead time) = actual demand of the previous period (t−1) × proportion of lost customers. The proportion of lost customers takes its minimum and maximum values between the average ideal lead time (30% shorter process times with process improvements) and the maximum acceptable customer delivery time. Each lost customer represents a potential indirect loss for the HNPSO in the magnitude of the unit revenue per program. We consider the same scenarios and analyze two situations for each scenario: 'Situation A', where we assume the HNPSO's management does not prefer to proceed with a workforce increase; and 'Situation B', where the HNPSO management
prefers to proceed with a workforce increase. In addition to the results of the workforce increase on utilization and lead time values, this section illustrates the related effects on selected financial measures. The current system has a negative income. Regarding a moderate demand increase, the current capacity is not able to cope with this increase, which also results in lost sales. However, if capacity is increased, demand can be satisfied and this results in a slightly decreased negative income. Regarding long-term growth, the current capacity is not able to cope with the assumed growth, and the outcome is lost sales almost 3.5 times higher than the initial lost sales value. The system can handle this demand increase with an additional workforce, and this yields a positive income (relatively 78.70% of the absolute value of the initial lost sales). Regarding short-term fluctuations, on average a 100% variation is likely to occur as a short-term fluctuation. Currently the system is not able to handle this demand variation, which results in lost sales (almost 9 times larger than the initial value). With two additional employees, the system can handle this demand variation, resulting in a positive net income (relatively 694.55% of the absolute value of the initial lost sales). The analysis of the four scenarios illustrates an approach which integrates: (i) operational data (lead time, achievable output); (ii) customer-oriented data (lead time-dependent customer satisfaction); (iii) financial data (unit revenue, unit payroll).
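The scenario comparison above combines the two lost-customer relations with simple unit financials. The following sketch illustrates that calculation with invented figures; the linear interpolation of the lost-customer proportion between the ideal and the maximum acceptable lead time is a modelling assumption for illustration, not a reported parameter of the case.

```python
# Hypothetical illustration of the lost-customer and indirect-revenue calculation
# used in the scenario analysis. All figures are invented.

def lost_customers_demand(actual_demand: float, satisfied_demand: float) -> float:
    # (i) lost customers (demand) = actual demand - satisfied demand
    return max(actual_demand - satisfied_demand, 0.0)

def lost_customers_lead_time(previous_demand: float, lead_time: float,
                             ideal_lead_time: float, max_acceptable: float) -> float:
    """(ii) lost customers (lead time) = previous-period demand * proportion lost.
    The proportion is assumed to rise linearly from 0 at the ideal lead time to 1 at
    the maximum acceptable delivery time (an illustrative assumption)."""
    if lead_time <= ideal_lead_time:
        proportion = 0.0
    elif lead_time >= max_acceptable:
        proportion = 1.0
    else:
        proportion = (lead_time - ideal_lead_time) / (max_acceptable - ideal_lead_time)
    return previous_demand * proportion

unit_revenue = 900.0   # revenue per program participant (invented)
lost = (lost_customers_demand(actual_demand=120, satisfied_demand=100)
        + lost_customers_lead_time(previous_demand=100, lead_time=45,
                                   ideal_lead_time=20, max_acceptable=60))
print(f"estimated lost customers: {lost:.0f}, indirect loss ~{lost * unit_revenue:,.0f} per period")
```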
4 Conclusion

Current performance measurement practices in the nonprofit sector are still limited (Moxham and Boaden, 2007; Moxham, 2009a; Taylor et al, 2009). However, the findings of this study can encourage managers of VCOs and NPOs to consider rapid modeling techniques and lead time reduction strategies in their operational and strategic decisions. We observe that a performance system which links operational, customer oriented, and financial performance measures gives management information that is more relevant than that of a traditional performance management system. In addition, an increased emphasis on efficient capacity management should be given a very high priority. If capacity is not adjusted to cope with an increase in demand, this leads to increased lead times, decreased customer satisfaction, and, ultimately, (theoretically) lost sales. In this sense, rapid modeling techniques can help the managers of NPOs and VCOs to manage their capacity, which will gradually help them to reduce their service delivery lead times. In return, this will provide benefits in the form of improved client satisfaction and an improved economic situation (i.e. increased sales). Our approach of linking operational, customer oriented, and financial performance measures is necessary to enable NPOs and VCOs to maintain their continuous improvement (Tersine and Hummingbird, 1995; Bozzo and Hall, 1999) in pursuit of their social missions.
Considering the relatively limited operations management applications in nonprofit performance measurement systems, this paper contributes to both research and practice. The study provides a detailed examination of performance measurement at the strategic and operational levels, as well as of the impact of lead time reduction strategies and rapid modeling techniques on an HNPSO's organizational strategy. However, we need to expand the scope of our study in order to validate the applicability of our findings to the diverse nonprofit sector.

Acknowledgements This work is supported by the SEVENTH FRAMEWORK PROGRAMME – THE PEOPLE PROGRAMME – Marie Curie Industry-Academia Partnerships and Pathways Project (No. 217891) "Keeping jobs in Europe".
References

Banker R, Potter G, Srinivasan D (2000) An empirical investigation of an incentive plan that includes nonfinancial performance measures. The Accounting Review 75(1):65–92
Beamon B, Balcik B (2008) Performance measurement in humanitarian relief chains. International Journal of Public Sector Management 21(1):4–25
Behn R (2003) Why measure performance? Different purposes require different measures. Public Administration Review 63(5):586–606
Bielen F, Demoulin N (2007) Waiting time influence on the satisfaction-loyalty relationship in services. Managing Service Quality 17(2):174–193
Bozzo SL, Hall MH (1999) A review of evaluation resources for nonprofit organizations. Tech. rep., Canadian Centre for Philanthropy Research
Chandes J, Paché G (2010) Investigating humanitarian logistics issues: From operations management to strategic action. Journal of Manufacturing Technology Management 21(3):320–340
Chopra S (2010) Flow time efficiencies in white collar processes. URL Available at: http://www.csun.edu/ aa2035/SOM686/powerpoints/Part (Retrieved: April, 2010)
Craighead CW, Meredith J (2008) Operations management research: Evolution and alternative future paths. International Journal of Operations & Production Management 28(8):710–726
Davis M, Vollmann T (1990) A framework for relating waiting time and customer satisfaction in a service operation. Journal of Services Marketing 4(1):61–69
De Treville S, Shapiro R, Hameri A (2004) From supply chain to demand chain: The role of lead time reduction in improving demand chain performance. Journal of Operations Management 21(6):613–627
Gunn C (2004) Third-sector development: Making up for the market. Cornell University Press
Heikkilä J (2002) From supply to demand chain management: Efficiency and customer satisfaction. Journal of Operations Management 20(6):747–767
Hopp W, Spearman M (2007) Factory physics: Foundations of manufacturing management. McGraw-Hill/Irwin
Johri G (2009) Customer satisfaction in general insurance industry - A step towards competitiveness. Journal of Risk & Insurance Pravartak 4(3):1–9
Kelton W, Sadowski R, Sadowski D (1998) Simulation with ARENA. McGraw-Hill, USA
Law A, Kelton W, Kelton W (1991) Simulation modeling and analysis. McGraw-Hill, New York, USA
Moxham C (2009a) Performance measurement: Examining the applicability of the existing body of knowledge to nonprofit organisations. International Journal of Operations & Production Management 29(7):740–763
Moxham C (2009b) Quality or quantity? Examining the role of performance measurement in nonprofit organizations in the UK. In: Proceedings of the 16th International Annual EurOMA Conference, Göteborg, Sweden
Moxham C, Boaden R (2007) The impact of performance measurement in the voluntary sector: Identification of contextual and processual factors. International Journal of Operations & Production Management 27(8):826–845
Oloruntoba R, Gray R (2009) Customer service in emergency relief chains. International Journal of Physical Distribution & Logistics Management 39(6):486–505
Poister T (2003) Measuring performance in public and nonprofit organizations. Wiley, New York, NY
Rabta B, Alp A, Reiner G (2009) Queueing networks modeling software for manufacturing. In: Reiner G (ed) Rapid Modelling for Increasing Competitiveness - Tools and Mindset, Springer, London, chap 2, pp 15–23
Reiner G (2009) Preface. In: Reiner G (ed) Rapid Modelling for Increasing Competitiveness: Tools and Mindset, Springer, London
Schodl R (2009) The best of both worlds - Integrated application of analytic methods and simulation in supply chain management. In: Reiner G (ed) Rapid Modelling for Increasing Competitiveness - Tools and Mindset, Springer, London, chap 13, pp 155–162
Singh P (2009) Improving lead times through collaboration with supply chain partners: Evidence from Australian manufacturing firms. In: Reiner G (ed) Rapid Modelling for Increasing Competitiveness - Tools and Mindset, Springer, London, chap 23, pp 293–305
Suri R (1998) Quick response manufacturing: A companywide approach to reducing lead times. Productivity Press
Suri R (2002) Quick response manufacturing: A competitive strategy for the 21st century. In: Proceedings of the POLCA Implementation Workshop
Taylor M, Heppinstall M, Liao M, Taylor A (2009) Performance management and funding in the Third Sector: A research agenda. In: Proceedings of the 16th International Annual EurOMA Conference, Göteborg, Sweden
Tersine R, Hummingbird E (1995) Lead-time reduction: The search for competitive advantage. International Journal of Operations and Production Management 15(2):8–18
Voss C (2005) Alternative paradigms for manufacturing strategy. International Journal of Operations and Production Management 25(12):1211–1222
Wainwright S (2003) Measuring impact: A guide to resources. Tech. rep., NCVO Publications, London
Applying Operations Management Principles on Optimisation of Scientific Computing Clusters

Ari-Pekka Hameri and Tapio Niemi
Abstract We apply operations management principles for production scheduling and allocation to computing clusters and their storage resources in order to increase throughput and reduce the lead time of scientific computing jobs. In addition, we study how this approach affects the amount of energy consumed by a computing job comprised of hundreds of calculation tasks. Methodologically we use the design science approach, applying domain knowledge of operations management and efficient resource allocation to the efficient management of the computing resources. Using a test cluster, we collected data on CPU and memory utilisation along with energy consumption for different ways of allocating the jobs. We challenge the traditional one job per processor core method of scheduling scientific clusters with parallel processing and bottleneck management. We observed that increasing the utilisation rate of the cluster memory increases throughput and decreases energy consumption. We also studied scheduling methods that run multiple tasks per CPU core and scheduling based on the amount of free memory available. The test results showed that, at best, these methods decreased energy consumption down to 45% and increased throughput up to 100% compared to the standard practices used in scientific computing. The results are being further tested to eventually support the LHC computing of CERN.

Ari-Pekka Hameri (B)
HEC, University of Lausanne, CH-1015 Lausanne, Switzerland, e-mail: [email protected]
Tapio Niemi
Helsinki Institute of Physics, CERN, CH-1211 Geneva, Switzerland, e-mail: [email protected]
1 Introduction

Scientific computing clusters are widely used in many research disciplines, especially in experimental physics, astronomy and the bio sciences. Computing-intensive research easily deploys thousands, even hundreds of thousands, of CPUs to analyse
various data sets and models. These clusters can be viewed as production resources processing jobs consisting of numerous tasks. The tasks can be processed by different resources, and finally the jobs are assembled together to be delivered back to the customers of the cluster. Jobs wait in the backlog of the cluster and may have severity priorities and deadline constraints, and each production resource has its own capacity and utilisation levels. Operating such a cluster has its own cost structure related to the capital invested, energy consumed, maintenance work, and facility related costs. In all, a computing cluster closely resembles an industrial production unit; thus the working hypothesis for this paper is to apply operations management principles to improve computing cluster productivity, overall efficiency, and customer satisfaction. We set out to study this opportunity using real experimental physics data, computing jobs and a dedicated computing cluster. CERN, the European Organization for Nuclear Research in Geneva, provides us with a unique possibility to experiment and test the hypothesis. Once fully operational, CERN and its Large Hadron Collider (LHC) experiment will produce about 15 petabytes of data per year. One copy of this data is stored at CERN (the so-called Tier-0 site) and another copy of the data is distributed to 11 Tier-1 sites around the world. From the Tier-1 sites the data is further forwarded to Tier-2 sites, where simulations and user specific analyses based on the experimental data are performed. The overall computing infrastructure comprises numerous computing clusters of varying size, yet the total number of CPUs is well over 100,000. Efficient management of these computing resources is vital for the success of the project, which is foreseen to be active for the next 20 years. Our approach is based on operations management, and the research approach used follows that of design science, applying domain knowledge from operations management to the efficient management of the computing resources. Past research in the field of computing system efficiency has mostly focused on hardware and infrastructure aspects, e.g. the development of more efficient hardware or optimising the cooling of computer centres. The theoretical background of our research comes from production optimisation research. Production engineers have always been searching for ways to increase the throughput of a facility with limited capacity. The principles of production pose several challenges, such as the law of capacity, which states that in a steady state all plants will release work at an average rate that is strictly less than the average capacity. This means that the organization and allocation of work in a production system affects the eventual performance of the system. Lead time, the time designated for a job to traverse a designated portion of the production process, is vital for customer satisfaction and the competitiveness of the system. Throughput, lead time and cycle time, the average time between the release and completion of a job along a given routing, are all vital characteristics when assessing the efficiency of the system. Adding the recycling and energy consumption of the system, we have all the critical parameters of any production and value adding system. In IT systems, around 50% of the energy consumption comes from cooling. Therefore, reducing the electricity consumption of a computational task by n units actually reduces the total energy consumption by twice that amount.
When applying new optimising methods to large scale computing resources, the results of the
project can bring remarkable savings, especially in large computing installations like the one used by CERN, where the total energy consumption is several megawatt hours a year. The results of our earlier work have shown that optimising the configuration of the workload management system can both decrease energy consumption and increase the total efficiency. The improvements measured so far have been an increase in throughput of up to 100% and a decrease in energy consumption down to 40–50% compared to the standard practice in LHC computing. Generally, value adding production systems with high throughput and short lead times have been proven to generate benefits other than pure output performance. Statistically, these systems also produce better quality and less waste, and thus have better overall environmental efficiency. Systems which perform better operationally also have more satisfied customers and tend to be more competitive in the market. Based on this, the optimized computing system will, in addition to reduced electricity consumption, also work more reliably and offer more computing power. In the following, we briefly describe earlier research in the field, and then we detail our methodological approach and the research hypotheses together with our test setting. The results of using different loading principles to run the jobs through the test cluster are discussed and compared before final conclusions are drawn.
2 Related Work

From the operations management point of view, the mathematical link between throughput or operational speed and inventory level was originally demonstrated by Little (1961), who showed that, for a constant throughput rate, flow time is directly proportional to average inventory. This means that speed will increase as inventory reductions are obtained while the throughput rate remains constant. Scale and cost-centric manufacturing dominated operations management research until the 1970s, when the quality movement turned the focus to continuous improvement and errorless operations. This meant that ways of reducing inventory would improve the lead time and punctuality of the system. Numerous scheduling techniques were developed to make production planning easier, as the fundamental sequencing problem leads to combinatorial growth of alternative solutions. Tackling this challenge led to the simplification of production facilities and to the birth of the Just-in-Time principles, which have been widely adopted by assembly industries ever since. Further, Goldratt and Cox (1984) came out with their optimised production technology that focused on bottlenecks and the throughput of the system. Hopp and Spearman (1996) compiled the key set of mathematical principles determining lead time and its implications for the performance of the production facility. Improving computing centre efficiency has attained a fashionable label, so-called green computing. It is a wide topic incorporating issues like data centre location near cheap energy sources (Brown and Reams, 2010), minimising so-called e-waste (Hanselman and Pegah, 2007), designing optimal cooling infrastructure
and running the centre in an optimal way (Marwah et al, 2009). Generally, we can say that energy and resource optimisation has mostly focused on hardware and infrastructure issues, not so much on operational methods such as workload management, and even less on operating system or application software optimisation for energy-efficiency. We first present methods focusing on the whole data centre or at least the cluster level. For example, Lefurgy et al (2007) suggested a method to control the peak power consumption of servers. The method is based on power measurement information for each computing server. Controlling peak power makes it possible to use smaller and more cost- and energy-effective power supplies. Power-aware schedulers can be seen as belonging to the same category. For example, Rajan and Yu (2008) and Mukherjee et al (2009) have studied this topic. Scheduling is a widely studied topic, and most of the work focuses on finding optimal schedules when jobs have precedence constraints and/or strict time limitations. Usually this work has been related to high-performance computing, in which the aim is to optimise the processing time of individual computing jobs. These jobs can have strict deadlines or require massive parallelism. Our focus area is closer to so-called high-throughput computing, in which individual tasks are not very time critical and the aim is to optimise the total throughput of the system over a longer period of time. This has received less research interest so far. Actually, some studies suggest clearly opposite approaches, e.g. Koole and Righter (2008) suggest a scheduling model in which tasks are replicated to several computers. There are some studies on energy-aware scheduling, like Bunde (2006), who has studied power-aware scheduling methods for minimising energy consumption without reducing system performance by applying dynamic voltage scaling technologies. Goes et al (2005) have studied the scheduling of irregular I/O intensive parallel jobs. They note that CPU load alone is not enough; all other system resources (memory, network, storage) must be taken into account in scheduling decisions. Santos-Neto et al (2004) studied scheduling in the case of data-intensive data mining applications. Wang et al (2009) have studied optimal scheduling methods in the case of identical jobs and different computers. They aim to maximise the throughput and minimise the total load. They give an on-line algorithm to solve the problem. The second group of studies focuses on the server level. Venkatachalam and Franz (2005) gave a detailed overview of techniques that can be used to reduce the energy consumption of computer systems. Li et al (2005) studied performance-guaranteed control algorithms for the energy management of disk and main memory. Ge et al (2005) studied methods based on the dynamic voltage scaling technology of microprocessors and created a software framework to implement and evaluate their method.
3 Methodology and Research Problem

Our research approach follows that of design science, which according to Hevner et al (2004) must produce a viable artifact in the form of a construct, a model, a method, or an instantiation. We follow Simon's (1973) definition of design science
that emphasises the process of exploration through design: design science is research that seeks (i) to explore new solution alternatives to solve problems, (ii) to explain this explorative process, and (iii) to improve the problem-solving process. Following Holmstrom et al (2009), this research aims to complete all four steps of the design science approach.
1. Solution incubation: development of an initial solution design. Our earlier research has shown that better allocation of computing tasks produces better throughput and energy-efficiency. By using existing theory from a different domain, namely production and operations management, the initial working solution was established and tested. This needs to be refined and tested on a larger scale, with a much larger load and more computing resources.
2. Solution refinement: solving the problem. This step forms the main part of this research project. The intended consequences of the solution need to be confirmed in a real production environment. The obtained results need to be verified and tested in different configurations.
3. Explanation I: development of substantive theory and establishment of theoretical relevance. The theoretical implications of the refined solution design are analysed and documented. Their impact on the existing body of knowledge is to be discussed.
4. Explanation II: development of formal theory, i.e. strengthening theoretical and statistical generalisability. This will be covered in our future work. We aim to complete a theoretical and empirical examination of relevant contingencies and to develop a formal representation of the solution design. If possible, we also aim to refine the solution design in multiple contexts, e.g. for other computational problems.
The research problem of our study is how to improve throughput and energy-efficiency by scheduling jobs based on their estimated properties and the loads of all components (CPU, memory and network) of the computing nodes. Furthermore, we seek ways to schedule tasks and jobs in order to maximise throughput and minimise energy consumption in computing clusters (see Figure 1). The main difference of this setting compared to manufacturing systems is that in a computer it is possible to run several tasks in parallel, and that tasks are not dedicated to any specific resources. Reasonable overloading can improve throughput, but at some point it causes task processing to slow down. The total number of tasks is also limited by the amount of memory. Another difference from manufacturing is that it is difficult to see what is going on inside the computer; thus all measures are based on the input and output of the system, as the actual work in progress takes place literally in a black box. Our work concentrates on studying, developing, and finally testing different scheduling methods for high energy physics jobs in a computer cluster. Further development of the traditional methods, or the configuration and tuning of existing ones, will be based on the analysis of log data from workload management systems, the existing literature, interviews with operators, and our own test results. Statistical methods are used to analyse existing data from computing clusters to find out typical
Fig. 1 Scheduling system
use scenarios and properties of computing jobs such as memory requirements and running times. We used the log files of a Finnish LHC computing facility forming part of the international grid computing infrastructure. The data contain all relevant information on the computing jobs submitted during January 2010. This data was used both for estimating the properties of future jobs and for developing realistic test scenarios. The data sets used were mainly related to the CMS and ATLAS experiments at CERN. In all, the data set contained data on around 50,000 jobs. The cluster used for collecting data contained 128 nodes, each having two dual-core processors, making the total number of cores 512. Before moving on to the actual tests and results we briefly analyse 1) the properties of average high energy physics (HEP) analysis jobs, 2) the utilisation rate of the resources, and 3) whether previously run jobs can be used to predict the properties of future jobs. First we measured the memory and I/O utilisation of an individual HEP analysis job. These data intensive analysis jobs read the data from a disk or from a file server through the network in order to perform the analysis on the data. The I/O traffic is relatively high all the time, but the memory usage stays low until enough data have been retrieved (see Figures 2 and 3). For scheduling this means that the memory demand of a computing node can increase dramatically even though no new tasks have been started.
Fig. 2 I/O usage of a HEP analysis job
Fig. 3 Memory usage of a HEP analysis job
When analysing the log data, we noticed that the lead times of jobs are 15 to 50% longer than the actual CPU times. Since the cluster configuration was set to process one job per CPU core, this means that there is a bottleneck slowing down the processing. The most obvious bottleneck is I/O access to the disk and network. When estimating the memory utilisation of jobs compared to their CPU utilisation, the memory utilisation rate was about half of the CPU utilisation rate. One reason for this is the irregular memory utilisation curve of physics jobs. We assume that the other reason is related to I/O waiting times. Based on the characteristics of CPU and memory utilisation, we analysed how many jobs were run in parallel in the cluster. The maximum number of jobs running was 481 during the test period (Figure 4). The y axis of the histogram indicates the number of jobs run, with the number of simultaneous jobs indicated on the x axis. Most jobs ran simultaneously with 100–150 other running jobs. Since the utilisation rate of the memory is less than half of the utilisation of the CPU, we could process more jobs in parallel. Figure 5 shows how many jobs, based on our data, could have been processed in parallel in a computing node with 8 gigabytes of memory.
Fig. 4 Simultaneously running jobs in the cluster during the time of our sample
Fig. 5 Hypothetical maximum number of possible jobs in a 8GB node
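The estimate behind Figure 5 can be sketched as follows. The per-job memory values and the operating-system reserve used here are invented for illustration and are not the measured values from the log data.

```python
# Hypothetical sketch of the estimate behind Figure 5: how many jobs fit into a node's
# memory budget. The per-job memory samples and the OS reserve are invented values.
node_memory_gb = 8.0
os_reserve_gb = 0.5
job_memory_gb = [0.6, 0.9, 0.4, 1.1, 0.7, 0.5, 0.8, 0.6, 1.0, 0.5]  # sampled jobs

avg_job_memory = sum(job_memory_gb) / len(job_memory_gb)
max_parallel_jobs = int((node_memory_gb - os_reserve_gb) // avg_job_memory)
print(f"average job memory {avg_job_memory:.2f} GB -> ~{max_parallel_jobs} jobs per 8 GB node")
```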
The processing times and memory utilisation of the jobs are illustrated in Figures 6 and 7. The variation in these is significant. Since the data does not show any clear pattern, we tested the running mean as an estimate of these values for the next job. We used the average squared error to evaluate the different models. In our case small errors are not essential, but it is important to avoid large errors. The best estimate for both CPU time and total memory usage was given by the running mean of the past 10 values, while for the maximum memory usage the best was the average of the 14 previous values. For comparison, we computed the squared errors using a random guess (white noise with the same mean and deviation as the real data), the mean of the data, and the previous value. The mean squared errors for these were clearly larger than those obtained using the running mean.
Fig. 6 Maximum memory usage
Fig. 7 CPU usage
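A small sketch of the running-mean estimator described above is given below. The window sizes follow the reported values (10 past values for CPU time and total memory, 14 for maximum memory), but the job history itself is synthetic and serves only to show the mechanics.

```python
# Sketch of the running-mean estimator used to predict properties of the next job.
# Window sizes follow the reported values; the job history below is synthetic.
from collections import deque

def running_mean_predictor(window: int):
    history = deque(maxlen=window)
    def update_and_predict(observed: float) -> float:
        history.append(observed)
        return sum(history) / len(history)   # prediction for the next job
    return update_and_predict

predict_cpu = running_mean_predictor(window=10)   # 14 would be used for max memory
squared_errors = []
prediction = None
for cpu_time in [120, 95, 210, 180, 40, 160, 150, 90, 300, 110, 170]:  # synthetic data
    if prediction is not None:
        squared_errors.append((prediction - cpu_time) ** 2)
    prediction = predict_cpu(cpu_time)

print(f"mean squared prediction error: {sum(squared_errors) / len(squared_errors):.0f}")
```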
4 Tests and Test Results

Our test method was to run similar sets of test applications using different scheduling methods. We used three different test jobs: an I/O intensive data analysis job, a CPU intensive physics simulation job, and a mixed job that contained both I/O and CPU intensive tasks. Each test run contained 50 runs of the same job to reduce variance. We measured the run time and electricity consumption of each test run and calculated the throughput and electricity consumption per job based on the measured values. Finally, each of these 50-run groups was processed five times, and the final numbers are averages of these results. The test computer was similar to the ones used in production clusters. It had two Intel Xeon E5520 quad-core 2.27 GHz processors, 16 GB of memory, and the Linux operating system. The workload management system was Sun Grid Engine (SGE) (Sun Microsystems, 2008). In scientific computing clusters, the so-called "single job (task) per core" scheduling method is practically the default setting. It is also the default setting of the SGE workload management system. It simply means that each CPU core can run at most one job. Usually the jobs are distributed equally to all computing nodes of the cluster. This inflexible scheduling method is at its best when the jobs are pure CPU or memory jobs, but with I/O intensive applications, I/O traffic becomes a bottleneck that leaves the processor waiting for I/O operations, causing CPU utilisation to drop below 100%. To improve efficiency we tested three different methods to remove the bottleneck of I/O traffic. The first of the methods is to run more than one task per CPU core, i.e. applying a multi-tasking approach. The improvements depend heavily on the application. With CPU intensive simulations or the mixed workload, the 2 tasks per core setting improved throughput by 40% and energy-efficiency by 18% compared to the 1 task per core setting. In the case of data analysis jobs, the improvements were 9% in throughput and 8% in energy-efficiency. Running more tasks per core does not much affect the numbers for the mixed or CPU intensive workload but remarkably improves
the efficiency of the I/O workload: with 4 tasks per core, throughput increased by 41% and energy consumption decreased by 25%. This validates our assumption that I/O access is a bottleneck. A challenge with fixed scheduling is to determine the number of tasks to be run per core, since we have no information on how many resources, e.g. memory, the jobs need. In high-throughput computing, CPU power is not usually the limiting factor, and running (too) many simultaneous jobs just slows down the processing time of an individual job but does not have a dramatic effect on throughput. Instead, memory or I/O access are usually bottlenecks, and overloading them has a negative effect on performance. For example, allocating too much memory dramatically slows down the system because of the need to use swap memory, or can even cause a system crash. Since flexible resource allocation has been shown to be more efficient and our earlier work shows that increasing memory utilisation improves efficiency (Niemi et al, 2009), we developed a scheduling method based on the amount of free memory in a computing node. The method is more complex than scheduling a fixed number of tasks per CPU core, but it is also much more flexible, since the memory consumption of jobs does not need to be known in advance. In memory-based scheduling, a new job is submitted to the computing node if the memory utilisation level is lower than a given memory threshold. We determined the memory threshold as follows: memory threshold = total amount of memory − operating system memory requirement − estimated memory usage of the next task. We used a moving average method to estimate the memory requirement of the next task. Implementing the memory-based scheduling method would be easy if each job allocated all of its memory as its first action after starting. Since this is not the case, workload management systems use the load adjustment method to deal with the problem: the scheduler reserves some fixed amount of memory for a fixed period of time to compensate for the amount of memory that the just-started job will allocate later. If no adjustments were used, the scheduler would schedule an almost infinite number of tasks to start immediately, causing the system to crash. Assuming that all jobs are similar and their memory usage is known in advance, this system works well. However, it can easily reserve too much memory at the beginning of processing, and it cannot take into account complex memory profiles. In our tests, the fixed memory-based scheduling gave similar results to running 4 jobs per core in the cases of mixed and CPU intensive workloads, but with I/O intensive data analysis it was clearly better: throughput was 27% better and energy consumption 17% lower. As mentioned above, it is difficult to define exactly the right value for the load adjustment. If the load adjustment value is too big, some amount of memory remains unused all the time because of the cumulative effect of the adjustments. In the opposite case, the system can become overloaded. The fixed load threshold can also make the CPU load and memory utilisation fluctuate, heavily reducing efficiency. To alleviate this problem, we developed an additional mechanism based on fuzzy logic: a fuzzy controller tunes the load threshold value based on the actual memory utilisation and its change. The fuzzy control also stabilises the memory usage level, as can be seen in Figure 8. The fuzzy method is based on a set
of simple fuzzy rules that are given in the form IF x AND y THEN z. For example: IF Memory is Positive AND Memory change is Positive THEN Control is Negative.

Fig. 8 Memory usage of fixed load threshold and fuzzy scheduling

When comparing fuzzy scheduling to fixed memory-based scheduling, fuzzy scheduling gave similar results to fixed memory-based scheduling in both throughput and energy-efficiency in the case of the mixed workload, and it was around 1% better for the CPU intensive workload. In the case of the I/O intensive data analysis, fuzzy scheduling worked better. It outperformed the fixed memory-based scheduling method by 17% in throughput and 11% in energy-efficiency. A summary of the test results is given in Figure 9, which shows the percentage improvements compared to the 1 task per CPU core case. The first group of columns comes from the mixed jobs containing both I/O intensive data analysis and CPU intensive simulation jobs. As can be seen, all three scheduling methods gave similar improvements, around 40% in throughput and 20% in energy-efficiency, compared to the 1 task per core setting. We assume that adding CPU intensive tasks stabilises resource utilisation, since CPU power can be focused on them while I/O tasks must wait for new data from the disk. For the middle columns, the I/O intensive data analysis, the memory-based scheduling methods, especially the fuzzy scheduling method, gave clearly better results than the 2 or 4 tasks per core scheduling. One reason is that the fuzzy method processed even more than 4 parallel tasks per core, but we also assume that more stable resource utilisation helped reduce the I/O bottleneck. The last columns represent the pure CPU intensive workload. The results are very similar to the mixed workload. This follows from two issues: 1) the processing time of an I/O intensive job is only around 25% of the processing time of a CPU job, and 2) in the CPU intensive workload there is not a clear bottleneck to remove as there is in the case of I/O intensive tasks.
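A rough sketch of the memory-threshold rule and a fuzzy-style adjustment in the spirit described above is given below. The rule base, constants and node parameters are simplified placeholders for illustration; they are not the controller actually implemented on top of SGE.

```python
# Rough sketch (not the deployed controller) of memory-based scheduling with a
# fuzzy-style adjustment of the load threshold. All constants are illustrative.

def memory_threshold(total_gb: float, os_gb: float, est_next_job_gb: float) -> float:
    # threshold = total memory - OS memory requirement - estimated memory of the next task
    return total_gb - os_gb - est_next_job_gb

def fuzzy_adjustment(mem_util: float, mem_util_change: float) -> float:
    """Tiny rule base in the spirit of 'IF Memory is Positive AND Memory change is
    Positive THEN Control is Negative': lower the threshold when memory use is high
    and rising, raise it when use is low and falling."""
    if mem_util > 0.8 and mem_util_change > 0:
        return -0.5          # back off: memory use is high and rising (GB)
    if mem_util < 0.5 and mem_util_change < 0:
        return +0.5          # room to schedule more: low and falling (GB)
    return 0.0

def can_start_next_job(used_gb, total_gb=16.0, os_gb=1.0, est_next_gb=1.2,
                       mem_util_change=0.0):
    threshold = memory_threshold(total_gb, os_gb, est_next_gb)
    threshold += fuzzy_adjustment(used_gb / total_gb, mem_util_change)
    return used_gb < threshold

print(can_start_next_job(used_gb=10.0, mem_util_change=+0.05))   # True
print(can_start_next_job(used_gb=13.5, mem_util_change=+0.05))   # False
```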
Fig. 9 Improvements of 2, 4, fixed memory, and fuzzy memory threshold scheduling methods compared to the 1 task / CPU setting
5 Conclusions

Since the conventional one task per processor core scheduling practice seems to under-utilise memory, we studied different scheduling methods and noticed that memory usage based scheduling gives better results. However, it is difficult to find a fixed memory threshold giving the optimum results. Therefore, we developed a fuzzy logic based algorithm that measures memory consumption and its changes. By applying operations management principles of capacity utilisation and bottleneck planning, we devised simple fuzzy rules that manage to keep the system load more stable and also achieve better efficiency than using a fixed threshold or a fixed number of tasks per CPU core. Our tests showed that optimising the configuration of the workload management system can both decrease energy consumption down to 45% and increase throughput up to 100% compared to the current standard practices in scientific computing. The results are also in line with Schmenner (2010), who proposed that companies that have emphasised flow, implying a focus on speed and variability reduction, would outperform companies that emphasise other goals. The principle emphasises focusing on value adding tasks and removing non-value adding tasks, while at the same time trying to eliminate bottlenecks in order to introduce even flow and short lead times. The main application area of these research results is scientific computing, especially grid computing related to the LHC experiment of CERN. Later, the results can also be applied to other areas such as computer clusters running web services. It has been estimated that the electricity cost can easily exceed the hardware cost during the lifetime of a computing system. In computing-intensive science, energy-efficient solutions are important since these computing systems process millions of jobs, each of which consists of thousands of individual computing tasks. The results of our research are implemented as software tools which can be applied directly to LHC computing clusters.
References

Brown D, Reams C (2010) Toward energy-efficient computing. Queue 8(2):30–43
Bunde D (2006) Power-aware scheduling for makespan and flow. In: SPAA'06: Proceedings of the 18th annual ACM symposium on Parallelism in algorithms and architectures, ACM, New York, NY, USA, pp 190–196
Ge R, Feng X, Cameron K (2005) Performance-constrained distributed DVS scheduling for scientific applications on power-aware clusters. In: SC '05: Proceedings of the 2005 ACM/IEEE conference on Supercomputing, IEEE Computer Society, Washington, DC, USA, p 34
Goes L, Guerra P, Coutinho B, Rocha L, Meira W, Ferreira R, Guedes D, Cirne W (2005) AnthillSched: A scheduling strategy for irregular and iterative I/O-intensive parallel jobs. In: Job Scheduling Strategies for Parallel Processing, Springer, pp 108–122
Goldratt EM, Cox J (1984) The Goal. North River Press, Croton-on-Hudson
Hanselman S, Pegah M (2007) The wild wild waste: e-waste. In: Proceedings of the 35th annual ACM SIGUCCS conference on user services, ACM, New York, NY, USA, pp 157–162
Hevner A, March S, Park J, Ram S (2004) Design science in information systems research. MIS Quarterly 28(1):75–105
Holmstrom J, Ketokivi M, Hameri A (2009) Operations management as a problem-solving discipline: A design science approach. Decision Sciences 40(1):65–87
Hopp W, Spearman M (1996) Factory physics. Irwin, Chicago
Koole G, Righter R (2008) Resource allocation in grid computing. Journal of Scheduling 11(3):163–173
Lefurgy C, Wang X, Ware M (2007) Server-level power control. In: ICAC'07: Proceedings of the Fourth International Conference on Autonomic Computing, IEEE Computer Society, Washington, DC, USA, pp 4–4
Li X, Li Z, Zhou Y, Adve S (2005) Performance directed energy management for main memory and disks. Transactions on Storage 1(3):346–380
Little J (1961) A proof for the queuing formula: L = λW. Operations Research 9(3):383–387
Marwah M, Sharma R, Shih R, Patel C, Bhatia V, Mekanapurath M, Velumani R, Velayudhan S (2009) Data analysis, visualization and knowledge discovery in sustainable data centers. In: Proceedings of the 2nd Bangalore Annual Compute Conference, ACM, New York, NY, USA, pp 1–8
Mukherjee T, Banerjee A, Varsamopoulos G, Gupta S, Rungta S (2009) Spatio-temporal thermal-aware job scheduling to minimize energy consumption in virtualized heterogeneous data centers. Computer Networks 53(17):2888–2904
Niemi T, Kommeri J, Hameri AP (2009) Energy-efficient scheduling of grid computing clusters. In: Proceedings of the 17th Annual International Conference on Advanced Computing and Communications (ADCOM 2009), Bengaluru, India
Rajan D, Yu P (2008) Temperature-aware scheduling: When is system-throttling good enough? In: WAIM '08: Proceedings of the 2008 Ninth International Conference on Web-Age Information Management, IEEE Computer Society, Washington, DC, USA, pp 397–404
Santos-Neto E, Cirne W, Brasileiro F, Lima A (2004) Exploiting replication and data reuse to efficiently schedule data-intensive applications on grids. In: The 10th Workshop on Job Scheduling Strategies for Parallel Processing, Springer, pp 210–232
Schmenner R (2010) Looking ahead by looking back: Swift, even flow in the history of manufacturing. Production and Operations Management 10(1):87–96
Simon H (1973) Does scientific discovery have a logic? Philosophy of Science 40(4):471–480
Sun Microsystems (2008) Beginner's Guide To Sun Grid Engine 6.2 Installation And Configuration. Tech. rep., Sun Microsystems
Venkatachalam V, Franz M (2005) Power reduction techniques for microprocessor systems. ACM Computing Surveys 37(3):195–237
Wang C, Huang X, Hsu C (2009) Bi-objective optimization: An online algorithm for job assignment. GPC 2009, Geneva, Switzerland
Increasing Customer Satisfaction in Queuing Systems with Rapid Modelling Noémi Kalló and Tamás Koltai
Abstract Companies have to increase their customers' satisfaction to keep their competitiveness. In services, waiting has a great impact on service level and customer satisfaction. Consequently, in time-based competition, one of the main objectives of service companies is to minimize customer waiting. Waiting can be defined in several ways; however, the ultimate management objective should be the maximization of customer satisfaction. The paper shows how customer satisfaction can be approximated with utility functions and establishes a theoretical background for the utility transformation of waiting time. The case study of the checkout system of a real do-it-yourself superstore is used to illustrate the application of the suggested method. The results show that a utility-related objective function may justify queuing system changes even if the average waiting time does not improve.
1 Introduction Queuing theory is frequently criticised for not being appropriate to solve practical problems because of its applied simplifications (Bhat, 1969). This is a widespread opinion, although the first results of queuing theory were all answers to practical problems (Bhat, 2008). These results were related to the operation of telephone switching systems. The work of Engset at the Telegrafverket and of Erlang at the
Noémi Kalló (B) and Tamás Koltai Department of Management and Corporate Economics, Budapest University of Technology and Economics, 1111 Budapest, Muegyetem rkp. 9. T. ép. IV. em., Hungary, e-mail:
[email protected] Tamás Koltai e-mail:
[email protected]
Copenhagen Telephone Company dealt only with practical, operational problems (Stordahl, 2007). Nowadays, managers generally use simulation to analyze the operation of production systems (Suri, 2009). In spite of this, the benefits of queuing theory cannot be denied. The application of the analytical models needs only a little data and provides quick results. Unfortunately, in most cases these results are only approximations – as the analytical models cannot take into consideration several aspects of real operation. Moreover, only a few measures can be calculated by queuing theory. These measures are generally parameters of descriptive statistics: means and standard deviations. In the different fields of economics and management science, these values, however, are sufficient for the approximate models or heuristics which provide practically relevant information. One of these models is the mean-variance (or two-moment) decision model, which can be used as a simplification of the expected utility model. With the help of this model, utility maximization can be simplified by approximating expected utilities with functions of the means and variances of the decision variables. As time can be considered as a resource and it can be spent waiting, some (negative) utility can be assigned to waits. This decrease of utility caused by waiting corresponds to the effect of waits on satisfaction. Using the two-moment decision model, to approximate customer dissatisfaction caused by waiting, only the means and variances of waiting times are needed. Therefore, rapid modelling based on the formulae of queuing theory gives sufficient information to approximate customer satisfaction in different queuing systems. The structure of this paper is as follows. Section 2 shows how customer satisfaction can be approximated with utility functions and establishes a theoretical framework for utility transformation of waiting time. In Section 3, a case study of the checkout system of a do-it-yourself superstore is presented to illustrate the application of the suggested method. Finally, Section 4 summarizes the main conclusions.
2 Customer Satisfaction as a Function of Waiting Time In time-based competition, one of the main objectives of service companies is to minimize customer waiting. Waiting, however, can be defined in several ways. Different customer groups can have different average waiting times. Besides the mean, the standard deviation or the distribution of waiting times are also important characteristics of the waiting process. Moreover, the actual and perceived length of waits can also be differentiated. Independently of how waiting is defined and measured, the main objective of operation optimization is to maximize customer satisfaction, that is, to minimize customer dissatisfaction related to waiting times. Time can be considered as a resource. As such a resource, like money, time can be gained and lost, that is, saved and wasted. In this
sense, waiting time is a kind of loss. The loss represented by waiting time (that is, the decrease of satisfaction) is a subjective value. A formal technique to quantify subjective factors is the application of utility functions (Keeney and Raiffa, 1993). A utility function describing the relationship between waiting and satisfaction has to possess the following characteristics:
• Utility functions are generally exponential (Keeney and Raiffa, 1993).
• People are generally risk averse in terms of time (Leclerc et al, 1995). Therefore, to describe satisfaction, negative exponential utility functions should be used.
• Satisfaction is a function of the difference between the actual and expected performance (Grönroos, 2001). Consequently, the negative exponential utility function should determine satisfaction as the difference of expected and actual waiting times.
Based on these assumptions and the results in the related scientific literature (see for example Kumar et al, 1997), the following function can be used to determine the relationship between waiting time and customer satisfaction,

E[U(W, T_0)] = E[-A T_0^{\gamma} e^{-r(T_0 - W)}],   (1)

where A, γ and r are positive values. A expresses the assumed worth of customers' time. Parameter γ describes the direct effect of the expected waiting time (T0) on satisfaction. Parameter r denotes the extent of customers' risk averseness in terms of time. W is a stochastic variable describing actual waiting time of a customer. Notations used in the paper are summarised in Table 1.

Table 1 Notations
Parameter   Explanation
A           assumed worth of customers' time
γ           direct effect of the expected waiting time on satisfaction
T0          expected waiting time
r           extent of customers' risk averseness in terms of time
W           actual waiting time (stochastic variable)
λe          parameter of the exponential distribution
t           average waiting time
S           average satisfaction
S0          initial satisfaction
S1          variable satisfaction
C           number of all checkouts
E           number of express checkouts
R           number of regular checkouts
L           limit value
s0          functional satisfaction level
i           the number of items bought
pi          the probability of buying i items
s0E         functional satisfaction level in the express lines
rE          risk averseness in the express lines
s0R         functional satisfaction level in the regular lines
rR          risk averseness in the regular lines
Equation (1) can be used to transform a certain waiting time to the corresponding satisfaction level. Knowing these values, the average satisfaction can be calculated. That is, the waiting time of each customer has to be known (measured or simulated), which makes the calculation of customer satisfaction difficult. The calculation of the average utility (average satisfaction), however, can be simplified if the distribution function of the waiting time is known. Equation (1) contains only one stochastic variable, the waiting time (W). Consequently, the calculation of the mean can be simplified as follows,

E[U(W, T_0)] = -A T_0^{\gamma} E[e^{-r(T_0 - W)}] = -A T_0^{\gamma} E[e^{-rT_0} e^{rW}] = -A T_0^{\gamma} e^{-rT_0} E[e^{rW}].   (2)

To get the average satisfaction level for a stochastic variable (W), an exponential function (e^{rW}) has to be determined and the mean of this function has to be calculated. The calculation of this mean for the most frequently used distribution functions can be found in the literature (e.g. Keeney and Raiffa, 1993). In many queuing systems, the arrival process of the customers can be described with a Poisson distribution – as, according to the Palm–Khintchine theorem, the superposition of many independent and properly normalized renewal processes forms a Poisson process (Kleinrock, 1975). If the arrival process can be described with a Poisson distribution, and there is high traffic, the distribution of waiting times is approximately exponential (Kimura, 1983). The mean of the exponential transformation (e^{-rX}) of a stochastic variable with exponential distribution (\lambda_e e^{-\lambda_e x}) can be calculated according to the following formula (Keeney and Raiffa, 1993),

E[e^{-rX}] = \frac{\lambda_e}{\lambda_e + r}.   (3)

Equation (3) contains two parameters. Parameter r expresses the extent of customers' risk averseness in terms of time. λe is the parameter of the exponential distribution of the stochastic variable (that is, the reciprocal of its mean). Consequently, (3) can be written as follows,

E[e^{rW}] = E\left[\frac{1}{e^{-rW}}\right] = \frac{1}{E[e^{-rW}]} = \frac{1}{\frac{1/t}{1/t + r}} = 1 + rt.   (4)

Knowing the mean of the exponential transformation of the waiting time, a function to determine the average customer satisfaction (the mean utility) as a function of waiting time can be formulated,

S = E[U(W, T_0)] = -A T_0^{\gamma} e^{-rT_0} (1 + rt) = -A T_0^{\gamma} e^{-rT_0} - A T_0^{\gamma} e^{-rT_0} rt.   (5)
According to (5), the satisfaction level has two parts. The first part is independent of the waiting time. It can be interpreted as the initial satisfaction level (S0 ) and mainly depends on the expectations of customers. The second part depends on the waiting
time (S1) – longer waits cause lower satisfaction. From (5), it can also be seen that the extent of risk averseness connects the initial and the variable satisfaction levels (S1 = rS0). With these two parts of satisfaction, the satisfaction function can be simplified in the following way,

S = E[U(W, T_0)] = S_0 + S_1 t.   (6)
Equation (6) can be used to determine the average satisfaction of different customer groups. The mean satisfaction level of all customers can easily be calculated as a weighted average of the mean satisfaction level of customers in the express and in the regular lines. Equation (6) shows that to determine the average satisfaction of customers only the descriptive statistics of the waiting time are needed. These values can be calculated with analytical models. That is, to analyze customer satisfaction, the analytical models give sufficient information; there is no need for complex simulation models. The next section shows how customer satisfaction may change as a consequence of the application of express checkouts in a do-it-yourself superstore.
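As a small numerical illustration of (5) and (6), the sketch below evaluates the average satisfaction of a customer group from its mean waiting time and then combines the groups into the weighted average described above. All parameter values (A, γ, r, T0 and the mean waiting times) are arbitrary placeholders rather than values taken from the case study.

```python
import math

def average_satisfaction(t_mean, T0, A=1.0, gamma=1.0, r=1.0):
    """Average satisfaction of one customer group, following Eqs. (5)-(6):
    S = -A * T0**gamma * exp(-r*T0) * (1 + r*t_mean) = S0 + S1 * t_mean,
    where t_mean is the group's mean waiting time."""
    S0 = -A * T0**gamma * math.exp(-r * T0)   # initial satisfaction
    S1 = r * S0                               # variable part, S1 = r*S0
    return S0 + S1 * t_mean

# Placeholder example: two customer groups with different mean waits (hours)
# and the shares of all customers they serve.
groups = [
    {"name": "express", "t": 0.05, "share": 0.6},
    {"name": "regular", "t": 0.03, "share": 0.4},
]
overall = sum(g["share"] * average_satisfaction(g["t"], T0=0.1) for g in groups)
print(round(overall, 4))
```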
3 Case Study to Illustrate the Maximization of Customer Satisfaction Related to Waiting 3.1 Description of the System and Preliminary Results The application of express checkouts is a frequently used tool for waiting time reduction. When express lines are applied, two customer groups are created. Customers buying more items than a certain amount have to use the regular checkouts. Only people buying no more than this quantity can join the express lines. The number of items that controls line-type selection is called the limit value. Our former analyses revealed that one of the main parameters which influences waiting time and which can be easily controlled by the management is the limit value (Koltai et al, 2008). In this section, the real data of a do-it-yourself superstore has been used (Koltai et al, 2008). In this store, generally five checkouts operate. Using the data provided by the checkout information system, the arrival rates for the different days and for the different parts of the days have been estimated. For all periods, the Poisson arrival process has been found acceptable according to Kolmogorov–Smirnov tests. Based on Rényi's limiting distribution theorem and its generalizations, the distribution function of the time interval between two consecutive events remains the same after a rarefaction and coordinate transformation (Rényi, 1956; Szántai, 1971a,b). That is, the arrival processes of the two customer groups can also be approximated with Poisson processes, and the distribution of the interarrival times can be considered exponential. The density function of the number of items bought by customers has also been provided by the checkout information system, and for describing it a truncated
geometric distribution with a mean of 3.089 has been found acceptable by a chi-square test. The service time of customers, as it could not be obtained from any information systems, was measured manually. The linear relationship between the number of items bought and the service time was tested with regression analysis. The assumption of linearity was accepted based on a 0.777 correlation coefficient. According to the results of linear regression, service time has two parts. The first part (constant) is independent of the number of items bought and is equal to 0.5463 minute. The second part (slope) is proportional to the number of items bought and its unit value is equal to 0.1622 minute. Linear regression can be applied to determine the standard deviation of these parameters and the service times of customers buying different amounts as well (Koltai et al, 2008). Results presented in this paper are valid for a midday traffic intensity with an arrival rate most characteristic for the store (λ = 95 customers/hour). According to the geometric distribution, customers generally buy only a few items. Therefore, two of the 5 working checkouts were considered to be express servers (C = 5, E = 2). For analyzing the waiting times when different limit values are applied, a numerical Excel model has been developed (Koltai et al, 2008). This model calculates the special characteristics of the express line system (of the express and regular checkouts) with different limit values based on the main parameters of the original queuing system (system without express checkouts). With these parameters, using the formulae of analytical models, an upper and a lower estimation can be given for the average waiting times. Based on the estimations of the waiting times of customers in express and in regular waiting lines, the average waiting time of all customers can be calculated. In Table 2, the average waiting times in the different queue types can be seen. These results are in accordance with the basic assumptions: small limit values reduce waiting in the express lines and increase waiting in the regular lines; large limit values have inverse effects. Based on these different effects, the average waiting times in all lines as a function of the limit value can be described with a U-shaped curve.

Table 2 Average waiting times in queue
Average waiting time in queue   L=1      L=2      L=3      L=4
Express lines                   0.0121   0.0487   0.1102   0.1957
Regular lines                   0.0663   0.0330   0.0160   0.0075
All lines                       0.0488   0.0415   0.0811   0.1563
With different limit values, different waiting times can be achieved. An important management objective related to express line systems is to determine the limit value which optimizes the operation, minimizes waiting times, and maximizes customer satisfaction. The minimal average waiting time in all lines, that is, the highest service
level can be achieved by using 2 items as the limit value (L = 2) – as can be seen in Table 2. Service level, even if it is denoted as a function of waiting, can be defined in several ways. However, independently of how waiting is defined and measured, the main objective of operation optimization is to maximize customer satisfaction, that is, to minimize customer dissatisfaction related to waiting.
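The effect of the limit value on the two customer streams can be sketched numerically from the data above. The snippet below assumes, for simplicity, an untruncated geometric distribution of the number of items with mean 3.089 (the paper uses a truncated one) and the linear service-time regression (constant 0.5463 min, slope 0.1622 min per item); it splits the overall arrival rate of 95 customers/hour into express and regular streams for a given limit value and reports the mean service time of each stream. It is only an approximation of the numerical model described above, not a reproduction of it.

```python
# Split of the arrival stream and mean service times for a given limit value L.
# Assumption: items bought ~ geometric on {1, 2, ...} with mean 3.089
# (the paper uses a truncated geometric); service time = 0.5463 + 0.1622*i min.

MEAN_ITEMS = 3.089
P = 1.0 / MEAN_ITEMS           # success probability of the geometric distribution
LAMBDA = 95.0                  # customers per hour
CONST, SLOPE = 0.5463, 0.1622  # service time regression (minutes)

def pmf(i):
    return P * (1.0 - P) ** (i - 1)

def split(limit):
    p_express = sum(pmf(i) for i in range(1, limit + 1))       # P(i <= L)
    lam_e, lam_r = LAMBDA * p_express, LAMBDA * (1 - p_express)
    mean_items_e = sum(i * pmf(i) for i in range(1, limit + 1)) / p_express
    mean_items_r = (MEAN_ITEMS - p_express * mean_items_e) / (1 - p_express)
    service = lambda items: CONST + SLOPE * items               # linear in items
    return lam_e, lam_r, service(mean_items_e), service(mean_items_r)

for L in (1, 2, 3, 4):
    lam_e, lam_r, s_e, s_r = split(L)
    print(f"L={L}: express {lam_e:5.1f}/h, {s_e:.2f} min; "
          f"regular {lam_r:5.1f}/h, {s_r:.2f} min")
```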
3.2 Analysis of Customer Satisfaction Related to Waiting Time Based on the method presented in Section 2, the average satisfaction level of the different customer groups (using express or regular checkouts) can be quantified. If the average waiting times of the different customer groups are known, the customer satisfaction can be calculated according to (6) for both customer groups. Equation (6) uses only common customer characteristics which do not make distinctions between customers buying small and large amounts. Customers using the express and the regular checkouts, however, have different attitudes toward waiting. People's tolerance for waiting depends highly on the perceived value of the service for which they wait (Maister, 1985). These differences can be taken into consideration in the parameter values used for the different customer groups. Customer satisfaction has a strong relationship with quality. In the case of services, two forms of quality can be distinguished: technical quality (the quality of what is provided) and functional quality (the quality of how it is provided) (Grönroos, 2001). Technical quality, or the value of the service, can be defined as a function of the number of items bought. In this way, the initial satisfaction level (S0) can be expressed as a function of the average number of items bought,

S_0 = s_0 \times f\left(\sum_i i \, p_i\right),   (7)
where s0 is the functional satisfaction level (independent of the amount bought), i is the number of items bought and pi is the probability of buying i items. The average satisfaction level of a customer group is determined by three factors. Functional satisfaction is based on the service process (s0). Technical satisfaction is determined by the number of items bought (i). These two values define initial satisfaction (S0). From this value, the variable satisfaction level (S1) can be determined by using the average waiting time (t) and the parameter denoting the extent of the customers' risk averseness (r). The values of these parameters, however, are not the same in the different customer groups:
• In the express lines, customers buying only a few items can be found. The application of express checkouts favours them; however, they are not getting a really valuable service. Their functional satisfaction is high, while their technical satisfaction is low. Customers buying only a few items and waiting for shorter
periods are more responsive to the variation in waiting times. Consequently, they are more risk averse than people in regular lines. If the limit value increases, the services become more valuable at these checkouts. At the same time, as the average waiting time increases, the benefit from using express checkouts becomes lower. Technical satisfaction increases, while functional satisfaction and risk averseness decrease.
• In the regular lines, customers are waiting to buy large amounts of goods. The service claimed is valuable, but their waiting will be long. Their technical satisfaction is high, while their functional satisfaction is low. Customers buying large amounts and waiting for longer periods are less risk averse than people in express lines. If the limit value increases, the services become more valuable at these checkouts, and the average waiting time decreases. Technical and functional satisfaction, and risk averseness, increase.
• In a queuing system without express checkouts, customers buying small and large amounts are waiting in common lines. As the application of express checkouts expresses the managers' commitment to waiting reduction, the functional satisfaction level in a queuing system without express lines is lower than in a queuing system with express lines. The average number of items bought is higher in a system without special servers than the average number of items bought in the express lines and lower than the average number of items bought in the regular lines. Accordingly, the technical satisfaction and risk averseness are between the values valid for the two customer groups.
Based on these findings, different parameter values (s0, r) have been assigned to each customer group. Using the results of the analytical formulae (with the help of the numerical model), the average satisfaction level with different parameter values has been calculated. The results presented in this paper were obtained using a functional satisfaction value of 10 for a queuing system without express checkouts (s0). As the application of express checkouts is used for reducing waiting times, the functional satisfaction of customers in express lines is assumed to be 8 (s0E) and the functional satisfaction of customers in regular lines is assumed to be 9 (s0R). The risk averseness of all customers (r) is assumed to be 1. People in express lines are less tolerant toward waiting; therefore their risk averseness (rE) is assumed to be 1.2. The risk averseness of the more tolerant customers in regular lines (rR) is assumed to be 0.8. Although there are several parameter values which are in accordance with the considerations above, similar results have been obtained independently of the actual parameter values. That is, the main conclusion is not sensitive to the change of parameter values. The average satisfaction level as a function of the limit value can be seen in Fig. 1. Satisfaction in a queuing system with express lines is influenced by the limit parameter, while in a queuing system without special servers the value of average satisfaction is constant. Figure 1 shows that the average satisfaction level in an express line system has a distinct maximum. The limit value inducing maximal satisfaction can be used to
optimize operation. Based on Fig. 1, it can also be concluded that using a limit value which is near to the optimal value causes no significant decrease in satisfaction. Comparing the satisfaction levels in queuing systems with and without express lines, it can be concluded that, with an appropriate limit value, the application of express lines may increase average satisfaction. Moreover, if not the optimal but a near-optimal limit value is used, the satisfaction will be higher than in the original system. That is, if the optimal limit value cannot be applied (e.g. for some operational reasons), it will not have serious consequences. It can also be seen in Fig. 1 that if a limit value significantly different from the optimal one is applied, the average satisfaction will be lower than in the original system. That is, managers have to know at least roughly the optimal limit value if they do not want to decrease customer satisfaction to a great extent.

Fig. 1 Average waiting time and average satisfaction as functions of the limit parameter (s0 = 10, r = 1; s0E = 8, rE = 1.2; s0R = 9, rR = 0.8). The plot shows average satisfaction against the limit value for a queuing system with express lines and for a queuing system without express lines.

The analyses of the average service levels have proved that, in terms of satisfaction, a higher service level is offered in queuing systems with express checkouts than in systems without express checkouts. In our former analyses, we have found that, based on different waiting time measures (the mean and standard deviation of the actual and perceived waiting times), the application of express checkouts cannot decrease customer waiting significantly (Kalló and Koltai, 2009). These waiting measures are lower in queuing systems without express lines than in express line systems – even if they are operating with optimal limit values. Therefore, the application of express checkouts cannot be justified by the decrease of average waiting time. Despite this finding, the analysis of satisfaction levels has proved that customer satisfaction can be improved with express checkouts. These results are illustrated with Fig. 2. Figure 2 describes the difference between the performance of queuing systems with and without express checkouts. In terms of waiting times (gray solid curve and line), a queuing system without express checkouts performs better than an express line system – using any limit value. In terms of satisfaction (black dashed curve and line), an express line system – using an optimal or near-optimal limit value – performs better than a queuing system without special servers.
Fig. 2 Average waiting time and average satisfaction as functions of the limit parameter (s0 = 10, r = 1; s0E = 8, rE = 1.2; s0R = 9, rR = 0.8). The plot shows, against the limit value, the average waiting time and the average satisfaction for queuing systems with and without express checkouts.
Based on the results expressed by Fig. 2, it can also be concluded that using an objective function maximizing customer satisfaction does not refine the optimum obtained by average waiting time minimization. Independently of the applied objective function (minimization of average waiting time or maximization of average satisfaction), the optimal limit value is equal to two (L = 2). Consequently, there is no need to apply difficult objective functions and complex tools to determine the optimal operation of an express line system. The minimization of average waiting time determines the optimal operation. Optimal operation, however, does not necessarily lead to the decrease of the average waiting time, but it definitely leads to the improvement of customer satisfaction.
4 Conclusions To keep their competitiveness, companies have to increase customers' satisfaction. In the era of time-based competition, customer satisfaction should be defined as a function of waiting. Waiting times can be analyzed with the help of queuing theory. This paper shows that the results of rapid modelling can be used to analyze customer satisfaction. Our analysis has proved that customer satisfaction can be approximated with utility functions and has established a theoretical framework for the utility transformation of waiting times. Our results underline that express line systems using an appropriate limit value can increase customers' satisfaction. That is, companies can gain competitive advantage by applying express checkouts. Comparing the effects of the application of express checkouts on average waiting time and on average satisfaction, it can be concluded that the main benefit of express checkouts is not the decrease of average waiting time but the better allocation of short and long waiting times among the different customer groups. That is, even in time-based competition, overall waiting time minimization is not necessarily the main objective. In some cases, the better allocation
of short and long waiting times among the different customer groups can have more favourable effects. Theoretically, the objective is to increase customer satisfaction. Technically, however, the optimization (minimization) of the average waiting time leads to the maximization of customer satisfaction. Therefore, (6) is necessary only for justifying the usefulness of express checkouts. During daily operation, however, the queuing formulae developed for the calculation of the average waiting time can be used. The results of the analysis of express line systems presented in this paper can be used in other areas as well. Several other line structuring rules can be analyzed in service systems (lines for business class passengers in airports, privileged customers in special services, etc.), and the application of line structuring rules can be extended to production systems as well. These are topics for further research.
References
Bhat U (1969) Sixty years of queueing theory. Management Science 15(6):280–294
Bhat U (2008) An introduction to queueing theory. Birkhäuser
Grönroos C (2001) Service management and marketing. John Wiley & Sons, Inc.
Kalló N, Koltai T (2009) Rapid modeling of express line systems for improving waiting processes. In: Reiner G (ed) Rapid Modelling for increasing competitiveness, Springer
Keeney R, Raiffa H (1993) Decisions with multiple objectives. Cambridge University Press
Kimura T (1983) Diffusion approximation for an M/G/m queue. Operations Research 31(2):304–321
Kleinrock L (1975) Queueing systems, volume I. John Wiley & Sons, Inc.
Koltai T, Kalló N, Lakatos L (2008) Optimization of express line performance: numerical examination and management considerations. Optimization and Engineering 10(3):377–396
Kumar P, Kalwani M, Dada M (1997) The impact of waiting time guarantees on customers' waiting experiences. Marketing Science 16(4):295–314
Leclerc F, Schmitt B, Dube L (1995) Waiting time and decision making: is time like money? Journal of Consumer Research 22(1):110
Maister D (1985) The psychology of waiting lines. In: Cziepel J, Solomon M, Surprenant C (eds) The service encounter, Lexington Books
Rényi A (1956) A Poisson folyamat egy jellemzése (A possible characterization of the Poisson process). MTA Mat Kut Int Közl 1:519–527
Stordahl K (2007) The history behind the probability theory and the queuing theory. Telektronikk 2:123–140
Suri R (2009) A perspective on two decades of Rapid Modeling (foreword). In: Reiner G (ed) Rapid Modelling for increasing competitiveness, Springer
Szántai T (1971a) On an invariance problem related to different rarefactions of recurrent processes. Studia Sci Math Hungarica 6:453–456
Szántai T (1971b) On limiting distributions for the sums of random number of random variables concerning the rarefaction of recurrent processes. Studia Sci Math Hungarica 6:443–452
Rapid Modelling of Patient Flow in a Health Care Setting: Integrating Simulation with Lean Claire Worthington, Stewart Robinson, Nicola Burgess and Zoe Radnor
Abstract This paper provides an evaluation of an experiment in using discrete event simulation modelling at a Rapid Improvement Event in a large hospital trust. It presents empirical findings about the challenges of building a model rapidly within a time constrained event. The aim was also to help introduce Lean principles so the learning and understanding acquired by participants, their interaction with the model, and the experimentation promoted and facilitated by the model are all considered. Our learning from this action research about rapid modelling and the modifications made to our approach as a result of the experiment are described.
1 Purpose This paper explores and reports the use of discrete event simulation (‘simulation’) modelling as a part of a Rapid Improvement Event (RIE) in a hospital trust. The work is part of a bigger project that seeks to investigate how simulation modelling can support and sustain Lean improvements in health care. Our research posits three
Claire Worthington (B), Stewart Robinson, Nicola Burgess and Zoe Radnor Warwick Business School, University of Warwick, Coventry, CV4 7AL, UK, e-mail:
[email protected] Stewart Robinson e-mail:
[email protected] Nicola Burgess e-mail:
[email protected] Zoe Radnor e-mail:
[email protected]
potential applications of simulation modelling in relation to Lean implementation as illustrated in Fig. 1:
• Educate: simulation provides a basis for training in Lean ideas, e.g. process flow.
• Engage/facilitate: simulation is used as part of a Lean improvement initiative to better understand the issues and to help identify Lean improvements.
• Experiment/evaluate: simulation is used to determine the effectiveness of Lean improvements that emerge from a Lean initiative.
The nature of these activities suggests an ordering of, respectively, before, during and after the Lean initiative.
Fig. 1 Simulation modelling and lean initiatives (Educate – before; Engage/Facilitate – during; Experiment/Evaluate – after the Lean initiative)
There is much published work on discrete event simulation in health applications (as reviewed by Jacobson et al 2006) but this mainly concentrates on the evaluation/experimental stage, with the simulation usually being built for the practitioners by external consultants. Although a lot of models have been built, barriers exist to the implementation of their findings or their sustained use (Mahachek 2002 and Lowery 1996). Our approach is to use discrete event simulation modelling during a Lean initiative (Engage/Facilitate) where the model is built with the practitioners. We present the findings of our action research case study on the importance of this approach in relation to the learning and understanding acquired by participants, their interaction with the model, and the experimentation promoted and facilitated by the model. We also present our findings in relation to the challenges of building a model rapidly within a time constrained event.
2 Approach The first phase of our research has sought to characterise the nature of complexities within healthcare in relation to Lean methodologies and explore if and how simulation modelling can support Lean implementation. This has been done by means of interviews (approximately 50) with a cross section of staff in two case study hospital trusts. Our empirical findings suggest that simulation modelling can play a
potentially key role in supporting and sustaining Lean implementation in a hospital trust (Burgess and Radnor, 2010). The process mapping activity, which forms the basis of a majority of Lean led RIEs (and other Lean interventions), is a pivotal experience in which participants can see the whole process and appreciate their role within it, sometimes for the first time. This activity is often conducted using brown paper and post-it notes in a workshop setting; typically over 2 days. We use simulation modelling at this point to elevate (but not replace) the brown paper exercise to a new level in which key metrics such as queuing or utilisation are recorded together with cycle and throughput times (McDonald et al, 2002). The simulation needs to be developed rapidly as the RIE is time constrained. The fidelity of the simulation must be sufficient for the practitioners’ credibility and for evaluation of suggested changes to the existing process, but low enough for speedy model development. This situation lent itself to an action research study (Coughlan and Coghlan, 2002) as we were actively involved in contributing to improving the process. We had the opportunity to try this approach by attending an RIE at a 700 bed hospital in NW England serving a population of 265 000. This opportunity occurred at very short notice. The RIE lasted 4.5 days (from 08.30 to 16.30) with the final morning being used to report back to senior management at an outbrief session. Fifteen members of hospital staff and one Governor attended the RIE. Consultants, ward clerks, nurses (community and ward) and managers (nursing, operational and clinical) were all represented. The RIE was facilitated by two of the hospital’s Lean facilitators and two researchers attended as observers, participants and modeller. The issue of concern was their paediatric observation and assessment unit (OAU) where it was felt that too often, children (and carers) had to wait for unacceptably long periods of time for tests, treatments and decisions. The problem was exacerbated because there was no separate room for waiting so the children’s playroom had to be used. The layout is displayed in Fig. 2.
Fig. 2 OAU layout (play room and wait area, triage room, ward clerk, treatment room)
During the first morning of the RIE participants were divided into groups and visits were made to the paediatric OAU. One group spoke to patients and their carers to gather the opinions of the patients. We were asked to identify the paths followed by patients through the unit as a sequence of tasks and for each task to record esti-
mates of the ‘touch’ times, resource requirements and their availabilities and waiting times. This information was then used to build up a post-it process map of OAU as displayed in Fig. 3.
Fig. 3 OAU post-it process map
Children arriving at the Unit checked in with the Ward Clerk, waited in the play room until they could be triaged in the triage room and then waited again in the playroom until one of the 6 beds was free for them to be assessed. Following this they returned to the playroom to await the results of tests, or the availability of a doctor and a bed for a decision on admission, discharge or further tests. As in Ntungo (2007), building the process map revealed features of the process that had not been known or appreciated by all; it was a useful focus for discussion, clarification and agreement on the 'as is' process for those who worked within it and, for us, a useful accepted current-state process map of OAU. While the RIE participants considered where there was waste in the current process, the post-it diagram was typed into Microsoft Visio as a flowchart; the display was chosen to resemble the post-its. This diagram was then imported into Simul8 (the package used for the discrete event simulation: www.simul8.com). Some Visio shapes, such as processes, were recognized by Simul8 and treated as 'Work Centres' while others, such as storage icons, had to be manually altered within Simul8. The information collected for the process map included estimates of touch times (service times) for each task, so these provided parameters for the service time distributions in the simulation. Triangular distributions were used for bounded processes and Erlang-3 for other process times. The number of children arriving each day was also available, and in this initial model an exponential distribution with a constant daily mean was used to represent arrivals to OAU.
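For readers who want to experiment with a comparable model outside Simul8, the following is a deliberately simplified sketch in Python using the SimPy package. It reduces the process to triage in the single triage room followed by assessment on one of the six beds; all numeric parameters (arrival rate, triage and assessment times) are invented placeholders rather than the values gathered at the RIE.

```python
import random
import simpy

# Simplified, illustrative OAU model (not the Simul8 model built at the event).
# All numeric parameters below are invented placeholders.

ARRIVALS_PER_HOUR = 3.0
TRIAGE_LOW, TRIAGE_MODE, TRIAGE_HIGH = 3.0, 5.0, 10.0   # minutes
ASSESS_K, ASSESS_MEAN = 3, 45.0                          # Erlang-3, mean 45 min

def erlang(k, mean):
    return random.gammavariate(k, mean / k)

def child(env, triage_room, beds, times_in_unit):
    arrive = env.now
    with triage_room.request() as req:       # wait in the playroom for triage
        yield req
        yield env.timeout(random.triangular(TRIAGE_LOW, TRIAGE_HIGH, TRIAGE_MODE))
    with beds.request() as req:              # wait in the playroom for a bed
        yield req
        yield env.timeout(erlang(ASSESS_K, ASSESS_MEAN))
    times_in_unit.append(env.now - arrive)

def arrivals(env, triage_room, beds, times_in_unit):
    while True:
        yield env.timeout(random.expovariate(ARRIVALS_PER_HOUR / 60.0))
        env.process(child(env, triage_room, beds, times_in_unit))

random.seed(1)
env = simpy.Environment()
triage_room = simpy.Resource(env, capacity=1)
beds = simpy.Resource(env, capacity=6)
times = []
env.process(arrivals(env, triage_room, beds, times))
env.run(until=12 * 60)                       # one 12-hour day, in minutes

print(f"{len(times)} children completed; mean time in unit "
      f"{sum(times) / len(times):.0f} min; share over 4 h: "
      f"{sum(t > 240 for t in times) / len(times):.0%}")
```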
how the model corresponded to the post-its before it was run, and then examples of modifications that could be easily made (e.g. altering the number of beds) were demonstrated. Assumptions that had been necessary were explained and agreed or modified. Data, essential for the simulation that had not been provided, were discussed and values agreed. These missing data were mainly the proportions going along the different routes in the process map, as indicated by question marks in Fig. 4.

Fig. 4 Simulation model of OAU (flowchart of patient routes from referral by GP, midwife or A&E through triage, allocation to a bed or cubicle, treatment and blood tests, radiology and review by a junior doctor/ANP or senior doctor, with the resources required at each step and the unknown route proportions marked by question marks)
Face validation was then undertaken and it included consideration of the routes that children followed through the model, the percentage of patients taking at least 4 h or 6 h to get through OAU, and inspection of the charts of utilization of beds and numbers in the playroom. Agreement was reached on modifications that were to be incorporated into the simulation for consideration the next day (day 3). These fell in the following four areas and included:
• input values – refined to reflect the varying hourly arrival rates;
• result collection – included the no. in the playroom and times in OAU by hour;
• alternative scenarios – triage performed on a bed; senior decision maker absent; one bed reserved for admissions from ED;
• simulation display – queue to display people.
It is worth noting that our approach followed the usual protocol of an action research study with day 1 providing the pre-set up where the aim was to understand the context and purpose of the study. Information was gathered, analysed and fed back on day 2 with an iterative process following, in which options for improvement and implementation were evaluated.
3 Findings Our findings from this action research are considered from the perspectives of the hospital, simulation modelling and our contribution to the Rapid Improvement Event.
3.1 Hospital Findings When the simulation model was displayed for the first time, all participants at the RIE engaged with it, to the extent that all were impressed by its complexity, i.e. the complexity of the process they worked with every day. It was the managers, including the consultants, however, who quickly appreciated the simulation's usefulness in evaluating alternatives and also its dependence on the data provided as input. This was also apparent in the engagement with the results presented on the 3rd day, when understandably those who had suggested alterations were more engaged than those who had not. It is worth pointing out that the main aspects that the RIE had identified for consideration were applying 5S to the room where triage took place (i.e. sorting it out so that work could be done, according to agreed procedures, in well-ordered surroundings), providing visual management boards to monitor the progress of patients through the OAU and thinking about whether and what treatment 'tollgates' could be established. Thus flow improvement was not the explicit focus of this event. Ward-level staff were therefore engrossed in the practicalities of improving their everyday circumstances, so it was primarily the managers who showed pleasing understanding, interaction and experimentation with the simulation model. As a result of this, two rapid experiments were run based on the simulation. In the first, the patient goes straight to a bed for triage and stays there until the decision is taken that leads to departure from OAU. In the second experiment, one bed is reserved for the exclusive use of patients coming from ED, as these are usually the more ill children in OAU. The simulation was described as a 'myth buster' as it showed that these alterations had much less impact on performance than expected. Additionally, a more detailed model of part of the process is to be built and is an example of post-initiative evaluation/experimentation.
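Continuing the illustrative Python/SimPy sketch given earlier, the second experiment – reserving one of the six beds for children arriving from ED – could be encoded roughly as below. This is a hypothetical fragment, not the Simul8 configuration used at the event, and the arrival rate, ED share and assessment times are invented.

```python
import random
import simpy

# Hypothetical encoding of "one bed reserved for ED arrivals":
# ED children may use the reserved bed when it is free, others only the 5 shared beds.

def assess(env, beds_shared, bed_reserved, from_ed, log):
    arrive = env.now
    if from_ed and bed_reserved.count == 0 and len(bed_reserved.queue) == 0:
        bed = bed_reserved                   # reserved bed is free: ED child takes it
    else:
        bed = beds_shared
    with bed.request() as req:
        yield req
        yield env.timeout(random.gammavariate(3, 45.0 / 3))   # Erlang-3, mean 45 min
    log.append((from_ed, env.now - arrive))

def arrivals(env, beds_shared, bed_reserved, log):
    while True:
        yield env.timeout(random.expovariate(1 / 20.0))        # ~3 arrivals per hour
        env.process(assess(env, beds_shared, bed_reserved,
                           from_ed=(random.random() < 0.3), log=log))

random.seed(2)
env = simpy.Environment()
beds_shared = simpy.Resource(env, capacity=5)
bed_reserved = simpy.Resource(env, capacity=1)
log = []
env.process(arrivals(env, beds_shared, bed_reserved, log))
env.run(until=720)

ed = [t for f, t in log if f]
other = [t for f, t in log if not f]
print(f"ED children: {len(ed)}, mean {sum(ed) / len(ed):.0f} min; "
      f"others: {len(other)}, mean {sum(other) / len(other):.0f} min")
```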
3.2 Simulation Modelling Findings From the perspective of the simulation model our findings mainly refer to the need for speedy (and accurate) development of the post-it diagram and clear communication of results. Although each situation will differ, there are some standard features that will be required. The development of templates for input data
(arrivals, tasks, resources etc.) and results calculation (e.g. time in system) is seen as necessary. We found that the feature linking Visio to Simul8 did not provide any particular advantage. Indeed, it slowed the process of model creation down. As a result, in future events we will directly convert the post-it process map to a Simul8 model. The speed of model development adopted in this RIE allowed little time for checking, or thinking; thus the approach required an experienced, careful modeller so that the simulation was 'good enough', but it denied the modeller (and the model) the chance to contribute all that they might. The way that results are presented can affect their availability and usefulness to Lean intervention participants. The choice of information for feedback and how best to display these results therefore needs careful consideration, and by doing it quickly opportunities may have been lost.
3.3 Event Findings Our initial ideas for simulation modelling contributing to Lean initiatives are illustrated in Fig. 1. It has separate pre, during and post roles for simulation. In our experiments we have found that these roles are not discrete; for instance, it is possible to educate during a Lean Initiative. So, for instance, education about flow behaviour that is pertinent for Lean improvement could be undertaken with simulation models and presented during a Lean initiative. Consequently, we have removed the time dimension from our initial diagram (Fig. 5). The arrows now denote the flow of information between the simulation and the Lean initiative (inputs).
Fig. 5 Simulation and Lean initiatives – revised (Educate – via training or an interlude, SimLean basic; Engage/Facilitate – SimLean full; Experiment/Evaluate; the legend marks arrows as inputs of information into the Lean initiative)
In terms of the educational role of simulation, we still consider that this could entail training events prior to a Lean initiative, but it could also involve brief ‘educational interludes’ during a Lean initiative. The idea would be to use simulation to convey particularly challenging concepts during a Lean intervention using predesigned working models. For instance, a general model of a hospital ward could be
used to demonstrate the need to balance the output rate with the input rate, something that hospital staff do not always grasp as being essential. These educational interludes could be adopted at any point during a Lean intervention when the need to understand an issue demonstrated by a model arose. As a result of this, we have identified two styles of simulation use during a Lean intervention (referred to as SimLean):
(a) SimLean (basic): for a basic event where the concentration is on visual management, 5 or 6S and working towards standardized working practices. In these events the post-it map and discussion will be used as triggers for educational episodes to show, for instance, the effect of not matching resources to input or the effect on performance of variability in demand. The post-it map may not be modelled – when it is, it would be for illustrative purposes and not for detailed consideration of generated results. Fidelity of model and data have less importance here than in style (b).
(b) SimLean (full): for an event where the focus is on the flow of patients, the post-its will be transformed into a simulation model with the close involvement of participants and their commitment to provide the necessary data for input and validation. The building of the model will be expedited by the use of pre-prepared input templates (set up in Excel as this package is readily available; a sketch of such a template is given below); pre-prepared charts clearly displaying default performance measures; and pre-designed components of commonly used Simul8 objects. The participants in the Lean intervention will be involved in the iterative process of developing and evaluating current and alternative scenarios.
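As a hedged illustration of what such a pre-prepared input template might look like in practice, the fragment below reads a small, invented CSV template (task name, distribution, parameters) and turns each row into a sampling function. The column names, tasks and parameter values are all hypothetical; a real template would be maintained in Excel as described above.

```python
import csv, io, random

# Hypothetical "input template": one row per task with its distribution and
# parameters, of the kind that could be filled in Excel and exported as CSV.
TEMPLATE = io.StringIO("""task,distribution,p1,p2,p3
triage,triangular,3,10,5
assessment,erlang,3,45,
blood_test,triangular,20,60,40
""")

def make_sampler(dist, p1, p2, p3):
    if dist == "triangular":                 # p1=low, p2=high, p3=mode
        return lambda: random.triangular(p1, p2, p3)
    if dist == "erlang":                     # p1=k, p2=mean
        return lambda: random.gammavariate(p1, p2 / p1)
    raise ValueError(f"unknown distribution: {dist}")

samplers = {}
for row in csv.DictReader(TEMPLATE):
    params = [float(x) if x else None for x in (row["p1"], row["p2"], row["p3"])]
    samplers[row["task"]] = make_sampler(row["distribution"], *params)

random.seed(0)
for task, sample in samplers.items():
    print(task, round(sample(), 1), "min")
```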
The structure of these proposed SimLean events is given in appendices 1 and 2 respectively. We see these as two extremes on a continuum of simulation use during Lean interventions. We are planning to implement SimLean later this year.
3.4 Summary Where the simulation model only adds animation to the post-it process map, it is doubtful that the effort involved is warranted in terms of the benefits that can reasonably be expected. Despite this, we have learnt from our rapid experiment and now have much greater knowledge about the 3Es (Educate, Engage, and Experiment). In particular we have better insight into:
• what to build for Educate, i.e. what flow concepts are found challenging to convey through Lean training;
• what kind of event is appropriate for simulation model Engagement;
• how to allow the participants to choose the next steps on their terms, i.e. how to refine the model for Experiment.
Developing these ideas and opportunities could be instrumental in making significant and sustainable improvements that would not have taken place if simulation modelling were not employed (Robinson, 2001).
4 Relevance/Contribution This paper provides an evaluation of our rapid experiment of using simulation modelling at a Rapid Improvement Event in a large hospital trust. By adopting an action research approach we have been able to modify our approach after an evaluation of our initial methodology. From a Lean perspective we have positive indications that it is useful to incorporate simulation modelling into the Lean improvement process. The insights that it gives into the behaviour of interacting patient flows through the care process, by evaluating comparative output measures, are seen to be valuable. Our work in developing the SimLean approach will contribute to this opportunity. From a modelling perspective we have experienced working in an environment where the participants decide on the next steps to be taken, even when they are not the "experts'" choice. The resulting ownership of outcomes and results engenders interest in the continuing success of the shared process. This approach is worthy of consideration by modellers who have developed many models of health applications which have resulted in many interventions, relatively few of which have resulted in sustained improvement (Brailsford et al, 2009). In terms of rapid modelling, we have identified two styles of simulation use during a Lean initiative: SimLean (basic) and SimLean (full). Given the need to implement these models during a Lean improvement event, we need to be able to facilitate rapid model development and use. This can be achieved through the use of pre-designed models and model and data templates related to the healthcare and Lean improvement environment. Our work to date has provided evidence of the potential importance of this approach – developing simulation models with the practitioners – in leading to the implementation of improvements in process flows which follow Lean principles. Acknowledgements The financial support of the Strategic Lean Implementation Methodology Project (SLIM) (www2.warwick.ac.uk/fac/soc/wbs/projects/slim) is acknowledged. SLIM is funded by the Warwick Innovative Manufacturing Research Centre. The helpful comments from referees are also acknowledged. This paper is adapted from Worthington, C. and Robinson, S.: Integrating simulation with Lean: Rapid low fidelity modelling in healthcare: Submitted to EUROMA 2010.
Appendix 1: SimLean (basic): Educational Lean implementation at a basic level; the focus of the RIE is likely to be in creating the foundational conditions for sustainable Lean implementation: mapping the process, identifying non-value adding steps, introduction of visual management tools and 5S. The contribution of simulation at this stage is ‘seed corn’ i.e. to introduce a simple simulation of in-out flows. Emphasis necessary on visual impact to engage (based on early experiments). The objective is to nurture a sense of what simulation could be used for and also to help embed the key principles of Lean such as pull and flow.
Flowchart: structure of a SimLean (basic) event. Day 1 – introduction and welcome to the RIE; SimLean introduction, 30 mins (e.g. illustration of a busy Assessment Unit); the organisation's process; data collection by participants (guidance); process map (post-its); Lean Thinking (Value, Flow, Pull, Perfection). Educational interlude, 30 mins, probably Day 2 – generic illustrations designed to illustrate Lean principles. Total SimLean intervention: 1 hour. The legend distinguishes SimLean interventions from behind-the-scenes SimLean work.
Appendix 2: SimLean (full): Interactive/Participatory This level of SimLean is suitable for Lean implementation where expected outputs are likely to be: changes to process, resources and facility configuration. Participants should be decision makers, e.g. a departmental manager, general manager, executive manager, consultant, nurse etc.
Flowchart: structure of a SimLean (full) event. Day 1 – SimLean introduction, 30 mins (a generic illustration, e.g. increasing the number of beds); data collection by participants (guidance); process map (post-its); Lean Thinking (Value, Flow, Pull, Perfection); simulate (behind the scenes); validate – data are critical, and if they are not robust or more are needed, go back to the data collection stage; SimLean scenario testing, approx. 1 hour (generic examples if necessary, e.g. what happens to flow if a process step is taken out); implement. Scenario testing is facilitated either during the SimLean interlude or when the model is ready the next day. Total intervention: 3 hours plus.
References
Brailsford S, Harper P, Patel B, Pitt M (2009) An analysis of the academic literature on simulation and modelling in health care. Journal of Simulation 3(3):130–140
Burgess N, Radnor Z (2010) Lean implementation in health care: Complexities and tensions. In: Euroma, Porto
Coughlan P, Coghlan D (2002) Action research for operations management. International Journal of Operations and Production Management 22(2):220–240
Jacobson S, Hall S, Swisher J (2006) Discrete-event simulation of health care systems. Patient Flow: Reducing Delay in Healthcare Delivery pp 211–252
Lowery J (1996) Introduction to simulation in health care. In: Proceedings of the 28th conference on Winter simulation, IEEE Computer Society, p 84
Mahachek A (2002) An introduction to patient flow simulation for health-care managers. Journal of the Society for Health Systems 3(3):73–81
McDonald T, Van Aken E, Rentes A (2002) Utilising simulation to enhance value stream mapping: A manufacturing case application. International Journal of Logistics Research and Applications 5(2):213–232
Ntungo C (2007) Quality culture in government: the pursuit of a quality management model. Total Quality Management & Business Excellence 18(1):135–145
Robinson S (2001) Soft with a hard centre: discrete-event simulation in facilitation. Journal of the Operational Research Society 52(8):905–915
Part IV
Rapid Modelling and Financial Performance Measurement
Evaluation of the Dynamic Impacts of Lead Time Reduction on Finance Based on Open Queueing Networks Dominik Gläßer, Boualem Rabta, Gerald Reiner and Arda Alp
Abstract The basic principles of rapid modelling based on queueing theory, which provide the theoretical foundations for lead time reduction, are well known in research. We are globally observing an underinvestment in lead time reduction at top management levels. In particular, the maximization of resource utilization is still a widespread aim for managers in many companies around the world. This is due to inappropriate performance measurement systems as well as compensation systems for managers which neglect the monetary effects of lead time reduction. Therefore, we developed a model based on open queueing networks to evaluate the financial impacts of lead time reduction. Illustrated by an empirical case from the polymer industry, we will demonstrate the impact of performance measures on financial measures. For this reason, we take into consideration efficiency performance measures (work in process, lead time, etc.) as well as effectiveness performance measures (e.g., customer satisfaction, retention rate). Based on our evaluation model, we will be able to investigate different scenarios to reduce lead time for the given case and evaluate these, based on the developed overall performance measurement model, i.e., optimization of the batch size, resource pooling, and a decrease/increase in the number of resources. In particular, we achieved a 75% lead time reduction and an 11% overall
Dominik Gläßer (B), Boualem Rabta, Gerald Reiner and Arda Alp Institut de l'entreprise, Université de Neuchâtel - Rue A.-L. Breguet 1, CH-2000 Neuchâtel, Switzerland, e-mail:
[email protected] Boualem Rabta e-mail:
[email protected] Gerald Reiner e-mail:
[email protected] Arda Alp e-mail:
[email protected]
cost reduction (resource costs, setup costs, WIP costs, penalty costs, inventory costs) without changing the whole production layout or making high investments. Key words: Queueing Networks, Rapid Modelling, Manufacturing Systems, Lead Time Reduction, Financial Evaluation
1 Introduction
It is well known that lead time reduction will increase the competitiveness of companies as well as supply chains in many industries (Suri et al, 1993; Suri, 1998; de Treville et al, 2004). Long lead times decrease customer satisfaction, cause companies to lose market share and lead to missed opportunities to tailor operations to company needs. Time-based dimensions are critical determining factors for companies in assessing their strategic position and maintaining their competitive position, especially if they have to operate in an agile supply chain environment (Naylor et al, 1999; Christopher and Towill, 2000; Mason-Jones et al, 2000; Lee, 2002). Regarding time-based competition (Stalk and Hout, 1990; Askenazy et al, 2006), lead time reduction is one of the leading and most effective mechanisms. Furthermore, supply chains are filled with useless inventory when lead times are long. Unfortunately, despite this demonstrated importance of lead time reduction, we are globally observing an underinvestment in lead time reduction at top management levels. Nevertheless, the basic principles of rapid modelling based on queueing theory that provide the theoretical foundations for lead time reduction are well known in research (see recent developments in Reiner, 2009), but the transfer of knowledge has to be facilitated since, e.g., the maximization of resource utilization is still a wide-spread aim for managers in many companies around the world (de Treville and van Ackere, 2006). Cutting lead time is understood by experienced managers but is rarely given sufficient importance. For many key managers, lead time reduction still means working faster, harder and longer in order to complete the job in less time instead of fully understanding the functional dependency between lead times, capacity utilization and variability (Suri, 1998). Many managers believe that machines and employees have to be kept busy to accelerate production, which is wrong. This leads to the widely spread aim of maximizing capacity utilization (de Treville and van Ackere, 2006). This behaviour is contrary to real operations management requirements in many circumstances. For example, cutting lead time becomes even more important if a manufacturer has to act in an agile supply chain environment, since responsiveness to customer needs plays a decisive role (Christopher and Towill, 2000). Based on the classic formula for calculating safety stock that is used as part of a reorder point replenishment policy, it is evident that delivery time mean and variance directly affect the safety stock level (Silver et al, 1998; Nahmias, 2005). For this reason, long lead times will fill the supply chain with endless inventory
that increases the cash-to-cash cycle time (Hammel et al, 2002). On the other hand, reducing the variance as well as the average of the lead time results in reduced safety stock, which is reflected by lower stock keeping costs without worsening the service level (i.e., the number of stock outs, etc.). Another problem in this context is that some managers consider huge batch sizes to be a possibility to increase capacity and to reduce setup costs. Very often, the sophisticated extensions of the economic order quantity (EOQ) (Silver et al, 1998; Nahmias, 2005) are taken to calculate the optimal batch size. However, this fails to cover the "real" costs of huge batch sizes and neglects the importance of responsiveness (Suri, 1998). In general, based on Little's Law (Little, 1961), it can be demonstrated that cutting cycle times will reduce work in process as well as working capital and finally increase capital turnover and therefore also the ROI (Hopp and Spearman, 2000). These dependencies also have to be integrated into accounting processes (Maskell and Kennedy, 2007). Unfortunately, traditional accounting and reward systems are based on managing scale and costs and disregard the importance of lead time reduction and the interrelations with the overall process performance and financial impacts (Suri, 1998). Traditional cost accounting systems motivate mass-production measurements (e.g., increased labor efficiency, maximized machine utilization). Contrarily, this leads to higher inventories, longer lead times and finally waste in the form of overcapacity and inventory, which is opposed to lean thinking, etc. However, companies applying modern operations management approaches should use value stream costing rather than traditional (e.g., activity-based) costing (Maskell and Kennedy, 2007; Van der Merwe, 2008). This also provides better decision making, i.e., it avoids the simplistic, short-term, cost-focused decisions that traditional systems support. For instance, as we have mentioned above, classical machine utilization maximization motivates people to reduce excess capacity and, in parallel, to keep machines busy, resulting in excess inventories. Contrarily, modern key operations management principles place emphasis on creating and keeping capacity and on using this capacity for growth or for hedging uncertainties. Product costs should be highly related to process flow and thus one of the key factors of success (controlling speed and improving efficiency) is to control the flow (Johnson, 2006; Maskell and Kennedy, 2007; Maynard, 2008; Van der Merwe, 2008). As we have discussed above, the underlying system dynamics and the relations of utilization, lot size, variability as well as layout to lead time and finally to revenue as well as costs are not straightforward and are difficult to comprehend (de Treville et al, 2009). In our paper we will provide an evaluation framework which takes into consideration these dependencies and the financial implications. The investigations will be carried out on the basis of quantitative models illustrated by empirical data. With this open queueing model it is possible to reproduce causal correlations between the production and the financial performance measures. In order to give a precise evaluation of the financial effect of lead time reduction, we determine the side effects of changes in lead time.
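To make the two relations invoked above concrete, the following textbook forms (of the kind found in Silver et al, 1998, and Hopp and Spearman, 2000; they are not derived from the case data) show how lead time enters both the safety stock and the work in process:
$$SS = z \sqrt{E[L]\,\sigma_D^2 + (E[D])^2\,\sigma_L^2}, \qquad WIP = \lambda \times CT \quad \text{(Little's Law)}.$$
Here $z$ is the safety factor for the target service level, $E[L]$ and $\sigma_L^2$ are the mean and variance of the replenishment lead time, $E[D]$ and $\sigma_D^2$ are the mean and variance of demand per period, $\lambda$ is the throughput rate and $CT$ the cycle time. Both the mean and the variance of the lead time enter the safety stock term, so reducing either one lowers the stock that has to be carried for the same service level, while Little's Law ties every reduction in cycle time directly to a reduction in WIP.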
Compared to previous studies, where the benefits of lead time reduction are only shown through a few cost measures, our approach provides a more general framework by considering all cost components that are directly affected
by lead time reduction. Furthermore, our framework considers the feedback of system changes, i.e., the reciprocal effect of the input (e.g., capacity) and the output (lead time, cost, service level, etc.) on each other. We describe the importance of and the interactions between performance measures in Section 2. Section 3 explains the evaluation framework as well as the underlying open queueing model. In particular, the links between financial and production performance measures are described. Section 4 shows a numerical illustration from the polymer industry. Additionally, a guideline on how lead time can be reduced within a production facility is highlighted. We will also provide some techniques to reduce lead time in a fast and efficient way. We will optimize the overall system performance by reducing the overall lead time without changing the complete production design. Afterwards, the results are presented. In Section 5, a conclusion and an outlook for further research are given.
2 Performance Measurement
Traditionally, businesses have used financially orientated measures of business performance. Dissatisfaction with these cost-accounting-based performance measures is increasing, because they are somewhat obsolete: their focus is too narrow, and they fail to provide insight into the real drivers of business performance, i.e., information on what customers want and how competitors perform (Ghalayini and Noble, 1996; Beamon, 1999; Neely, 1999; Li et al, 2007). Many indicators, such as knowledge, competencies, employee satisfaction and customer loyalty, are intangible and, hence, difficult to measure. Financial performance measures like DuPont's return on investment (ROI) (Bodie et al, 2003) can only be a starting point for the development of extended performance measurement models (Kaplan and Norton, 1997). This is crucial because several companies still believe in the superiority of low-cost-production strategies based on cost-accounting-based performance measures, but they should know more about 'time' and time-based performance metrics as well as their dependencies with financial performance measures, customer satisfaction, etc. Neely (1999) also highlighted that performance measures are not standalone; they are interrelated with each other. Therefore, the interactions among various supply chain dimensions and characteristics such as financial (e.g., cost) and non-financial ones (e.g., quality, speed, dependability, flexibility) should be considered (Beamon, 1999; Slack and Lewis, 2007). Reiner and Hofmann (2006) pointed out that there are dependencies between performance measures, such as operational performance and financial success, that have to be considered. Also, Hill et al (2002) mentioned that matching customer demand (marketing view) and capacity (operational view) has an effect on financial performance. In this sense, further benefits are possible only when companies reach the reconciliation of market requirements (i.e., customer demand, customer satisfaction) with operations resources. Successful managers have to consider these interactions and dependencies (Grünberg, 2004), but it is not always entirely possible to predict these relation-
ships and their effects on the company's performance with classical approaches. For instance, imprecise changes to 'utilization' (e.g., unreflected downsizing, overproduction) driven by cost-reduction programs may end up with a negative effect on lead time. As Li et al (2007) stated, modern time-based performance measurement systems need to go beyond this. It is necessary that enhanced performance measurement systems take into account the time frame and the integration of continuous improvements (Ghalayini and Noble, 1996; Neely, 1999; Neely et al, 2000). Thus, performance measurement using system dynamics is necessary (Santos et al, 2002; Zheng and Lai, 2008).
3 Evaluation Framework
The evaluation framework describes the dependencies between cost drivers, costs, revenue drivers, revenue and lead time. We take into consideration efficiency performance measures (utilization, costs, etc.) as well as effectiveness performance measures (e.g., customer satisfaction, lead time). Figure 1 depicts the evaluation framework, which is built on the aforementioned dynamic dependencies between performance measures for a make-to-stock strategy. We selected a make-to-stock strategy because it is still the dominating strategy for the majority of product supply chains (Hofmann and Reiner, 2006). The framework also comprises cost factors that are highly interlinked with each other, whereas classical accounting methods offer only limited approaches to take these phenomena into consideration. In particular, the right part of the evaluation framework (lead time) is often completely neglected. Nevertheless, effectiveness performance measures influence the revenue and are affected by the efficiency performance measures and vice versa. In particular, there is a tight relationship between lead time and work-in-process inventory (Little's law) and hence inventory costs. Also, we observe that reducing lead time in a make-to-stock environment will induce a reduction of the safety stock levels of finished products without reducing the customer service level. Again, there is a reduction in inventory costs but also an increase in the level of service, customer satisfaction and customer retention, and a reduction of penalty costs. Our study will focus on the overall cost and the lead time of the above mentioned framework (Fig. 1). The results calculated will not represent all potential financial performance improvements, i.e., the overall impact will be underestimated in terms of revenue increase. However, the presented framework is the first step towards a comprehensive evaluation. We are able to demonstrate how performance improvements can be achieved by focusing on a set of objectives and exploiting the above mentioned trade-offs between performance metrics. This provides a consistent basis for developing performance measures and for analyzing the effects of changes on company performance. We select open queueing network models to provide, at least partially, more insight into this complex behaviour. These models of production processes are able to estimate the relevant output performance measures (e.g., utilization, lead time, WIP,
Fig. 1 Evaluation framework for a financial evaluation of lead time reduction for a make-to-stock environment
operating expenses, etc.), performance metrics trade-off characteristics and tradeoff effects (cause-and-effect relationships). Furthermore, open queueing networks allow us to consider variability of the process parameters. Variability is an important source for delays in the process (Hopp and Spearman, 2000). The role of queueing theory in the analysis of manufacturing systems is well-established (see for instance Suri et al, 1993; Govil and Fu, 1999; Bolch et al, 2006; Shanthikumar et al, 2007, for detailed reviews of analytical models and the use of queueing theory in manufacturing). In detail, the analysis methodology is based on queueing networks decomposition methods with several steps of aggregation to deal with typical manufacturing features. Each station is represented by a G/G/m queueing system. Products pass through the production process according to predefined routes. This approach has been developed by Kuehn et al (1979) and Whitt (1983) amongst others (see Rabta, 2009, for a review of decomposition methods for open queueing networks). Based on our evaluation framework, we will be able to investigate different scenarios to reduce the lead time (optimization of the batch size, resource pooling, de/increase in the number of resources) and evaluate the overall performance. In particular, our analysis on interaction between ‘time’-based performance metrics and non-time-based ones as well as financial performance metrics will provide a more comprehensive overview of trade-off characteristics and the impact of those
trade-off effects as well as lead time reduction on company processes and performance. Computations are done using the Rapid Modeler software which was developed at the University of Neuchâtel (http://www.unine.ch/iene-kje). This software provides a user-friendly interface and a standardized way of modelling manufacturing systems. Its core algorithm uses queueing networks to model the system and provide the results. The user is not required to have deep knowledge of queueing theory. For the evolution of queueing network software for manufacturing, see Rabta et al (2009). The software provides estimations for performance measures such as utilization, WIP inventory, lead times, etc., which can serve as a basis for the computation of the overall costs.
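As a rough illustration of the kind of building block such a decomposition rests on, the sketch below implements the standard Sakasegawa-style G/G/m waiting-time approximation popularised in Hopp and Spearman (2000). It is not the actual algorithm of the Rapid Modeler, and the station parameters in the example are made up:

```python
from math import sqrt

def ggm_queue_delay(ca2, cs2, te, m, arrival_rate):
    """Approximate mean queueing delay at a G/G/m station.

    ca2, cs2     -- squared coefficients of variation of inter-arrival and
                    effective process times
    te           -- mean effective process time per job
    m            -- number of parallel machines at the station
    arrival_rate -- job arrival rate at the station
    """
    u = arrival_rate * te / m                      # utilization
    if u >= 1.0:
        raise ValueError("station is overloaded (utilization >= 1)")
    variability = (ca2 + cs2) / 2.0
    utilization_term = u ** (sqrt(2.0 * (m + 1)) - 1.0) / (m * (1.0 - u))
    return variability * utilization_term * te     # mean wait in queue

# Hypothetical station: one machine, 82% utilized, moderately variable flows
wq = ggm_queue_delay(ca2=1.0, cs2=0.5, te=0.8, m=1, arrival_rate=1.025)
print(f"queueing delay ~ {wq:.2f} h, station lead time ~ {wq + 0.8:.2f} h")
```

In a full decomposition, the departure variability of each station would then be propagated to its successors along the routing; that step is omitted here.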
The Cost Structure
The total system cost is evaluated by incorporating the following components.

Setup cost:
$$C_1 = \sum_k \sum_m N_k \times C_{km}. \qquad (1)$$

Inventory cost (WIP):
$$C_2 = \sum_k WIP_k \times h_k \times T. \qquad (2)$$

Machine and labor cost:
$$C_3 = \sum_l LC_l \times T + \sum_m \left[ U_m \times RC_m + (1 - U_m) \times IC_m \right] \times T. \qquad (3)$$

Penalty cost:
$$C_4 = B \times p. \qquad (4)$$

Inventory cost:
$$C_5 = \sum_k Inv_k \times h_k \times T. \qquad (5)$$

Total cost:
$$C = C_1 + C_2 + C_3 + C_4 + C_5. \qquad (6)$$

The nomenclature for the equations is as follows, whereas all costs are per time unit:
T: production period
C_km: setup cost for a batch of product k on machine m
N_k: number of setups for product k during the period (total demand/batch size)
WIP_k: total WIP for product k
h_k: holding cost for product k
LC_l: labor l cost
U_m: machine m utilization
RC_m: machine m running cost
IC_m: machine m idle cost
B: number of backorders during the period
p: penalty cost per backorder
Inv_k: mean inventory level of product k
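A direct transcription of Eqs. (1)-(6) may help to see how the cost drivers produced by the queueing model feed the overall cost. The dictionary-based layout below is our own illustrative choice, not part of the original implementation:

```python
def total_cost(T, N, C_setup, WIP, Inv, h, LC, U, RC, IC, B, p):
    """Period cost according to Eqs. (1)-(6); all cost rates are per time unit.

    N[k]              number of setups of product k during the period
    C_setup[k][m]     setup cost of a batch of product k on machine m
    WIP[k], Inv[k]    mean WIP and mean finished-goods inventory of product k
    h[k]              holding cost of product k
    LC[l]             cost of labor l
    U[m], RC[m], IC[m] utilization, running cost and idle cost of machine m
    B, p              number of backorders and penalty cost per backorder
    """
    c1 = sum(N[k] * C_setup[k][m] for k in N for m in C_setup[k])          # Eq. (1)
    c2 = sum(WIP[k] * h[k] * T for k in WIP)                               # Eq. (2)
    c3 = (sum(LC[l] * T for l in LC)
          + sum((U[m] * RC[m] + (1 - U[m]) * IC[m]) * T for m in U))       # Eq. (3)
    c4 = B * p                                                             # Eq. (4)
    c5 = sum(Inv[k] * h[k] * T for k in Inv)                               # Eq. (5)
    return c1 + c2 + c3 + c4 + c5                                          # Eq. (6)
```

In the case study, the quantities WIP[k], U[m] and the lead-time-dependent terms would come from the queueing network evaluation, while the cost rates come from the accounting system.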
4 Empirical Illustration We illustrate the application of the aforementioned evaluation framework with a worldwide operating polymer processing company. This company has to act in an agile supply chain environment. According to Mason-Jones et al (2000), the market winner is specified by the customer service level, whereas quality, costs and lead time are the market qualifiers. Data collection was carried out for a whole division and an entire year. As a result we received 103 different articles for the illustration.
4.1 Data Validation
An empirical model is largely dependent on the data upon which it is based as well as on its process design. These are necessary for making sure that the way the model works comes as close as possible to actual observations and processes. Valid data are of crucial importance when setting up a quantitative model. The initial variables, the production parameters and the costs (fixed and variable) have a major influence on the result of the analysis. In order to obtain these data free of organizational barriers, the data triangulation approach was chosen (Croom, 2009). We used data from existing IT systems, put the same questions about various processes as well as about the validation of the data downloads to several employees, and conducted unstructured interviews with key users.
4.2 Production Process
The process has to be discretized to be able to model complex resources, especially if these are able to handle several production steps at once. This is necessary to assign resources to the right manufacturing process. In doing so, it is important to avoid queues in the model which do not exist in reality. Otherwise, the results will not reflect the real process. The resource "extruder" extrudes and packages the product in one step, whereas painting and embossing are part of the extrusion process. The whole process is depicted in Fig. 2. Four operators are responsible for the pre-work and handling of 4 extruders. They have to prepare the right tools and mix the colours for the extrusion process. The
Fig. 2 Production process (stations shown: Dock, Pre-work, Extrusion, Packaging, Transportation, Stock; with Scrap as a separate outflow)
quality of the extruded product is inspected by the operators who adjust the extruder, if necessary. The packaging of the finished article is done by 3 packers. Subsequently, 3 transfer operators take the products to the inventory.
4.3 Process Improvements For process improvements, we focus on resource utilization and the optimal batch size according to lead time, because these components have the largest influence on the overall lead time. We optimize the total system performance by reducing the lead time without changing the complete production design.
4.3.1 Resource Utilization
One of the possible directions for improvement is the identification of good resource utilization in terms of lead time. In particular, we study the effect of the following decisions on the overall lead time by means of the previously described evaluation framework:
• Number of employees: we aim to determine the smallest number of workers for each station that is necessary to achieve lead time reduction.
• Pooling/specialization of employees: the question is whether to engage a few highly skilled workers who are able to work on several stations or more workers with only a suitable level of specialization. Highly skilled employees cost more and therefore this step has to be taken by considering the trade-off between the extra labor cost and the gain from lead time reduction.
Machines in the system require the service of employees for setups, loading and unloading. This problem is known as operator-machine interference (also, the repairman problem, Haque and Armstrong, 2006). The subsystem may be modelled as a closed queueing network and analyzed by one of the exact or approximate algorithms (MVA, convolution, AMVA, summation, etc.; see for instance Bolch et al, 2006). The aim is to determine the mean time that a given machine in the subsystem waits for labor service. This "waiting-for-labor" time is an important component of the lead time and it may be high if the number of workers in the subsystem is not sufficient.
Fig. 3 Pooling: two highly skilled workers responsible for four machines
Fig. 4 Specialization: more but less skilled workers with specialization
The computation of the waiting-for-labor time is integrated in the overall evaluation procedure by adding it to the operations time (waiting-for-labor time + setup time + loading time + run time + unloading time).
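The simplest instance of this interference calculation, a single operator shared by identical machines, can be evaluated with the exact mean value analysis (MVA) recursion for a two-station closed network. The sketch below is only meant to illustrate the idea; the numbers are hypothetical, and the case with several operators, as in the scenarios below, would require a multi-server or approximate MVA variant:

```python
def waiting_for_labor(n_machines, run_time, service_time):
    """Mean time a machine waits for a single shared operator (excluding the
    service itself), via exact MVA for a closed two-station network:
    machines alternate between running (mean run_time between requests)
    and requesting the operator (mean service_time per intervention).
    """
    q = 0.0                                    # mean queue length at the operator
    w_op = service_time                        # response time, updated in the loop
    for n in range(1, n_machines + 1):
        w_op = service_time * (1.0 + q)        # mean response time with n machines
        throughput = n / (run_time + w_op)     # interventions per time unit
        q = throughput * w_op
    return w_op - service_time                 # pure waiting-for-labor component

# Hypothetical subsystem: 4 machines, 10 h of running per operator request,
# 1 h of operator work per request
print(f"waiting-for-labor ~ {waiting_for_labor(4, 10.0, 1.0):.2f} h per request")
```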
4.3.2 Batch Size Optimization Approach
Lead time reduction may be achieved by optimizing batch sizes (Vaughan, 2004). The idea behind this is that large batch sizes may induce long waiting times, whereas small batch sizes increase the frequency of setups and the total time spent on setups. Batch size optimization tries to make a trade-off between two conflicting goals: reducing waiting times and reducing setup times. Most of the previous works deal with the batch sizing problem in a static environment, therefore ignoring the natural variability in the process. Those static models may lead to solutions which are difficult to implement in practice. There are only a few references on batch optimization in stochastic manufacturing systems. Karmarkar (1987) examined the impact of batch sizes and setup times on levels of WIP and lead times by using a queueing model of a single machine. Zipkin (1986) proposed a similar queueing model and used it to model the aggregate behaviour of a batch production facility. The multi-item/multi-machine case was discussed in Kar-
markar et al (1985a) and Karmarkar et al (1985b) where a procedure for obtaining optimal batch sizes is also described. Koo et al (2007) proposed a linear search algorithm to find the optimal batch size at the bottleneck station of a manufacturing system. Previous studies describe models simpler than ours and their analysis method is limited and not fit for the purpose of our study. We use the approach described in Rabta and Reiner (2010) where the batch size optimization is done by means of a genetic algorithm in which the evaluation of candidate solutions is performed using queueing network decomposition as explained above. This approach allows us to obtain nearly optimal solutions within reasonable time and computation efforts. The algorithm will determine a near to optimal combination of batch sizes for a high number of products.
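As a sketch of how such an evolutionary search can be wired to a queueing evaluation, the toy version below shows the structure only; it is deliberately simplified and is not the algorithm of Rabta and Reiner (2010). The evaluation function is injected as a callable and the operator choices are arbitrary:

```python
import random

def optimise_batch_sizes(n_products, evaluate_lead_time,
                         pop_size=30, generations=50, lower=1, upper=200):
    """Search for a batch-size vector (one size per product) that minimises
    the overall lead time returned by evaluate_lead_time(batch_sizes);
    in the study that callable would be the queueing network decomposition.
    Assumes n_products >= 2.
    """
    def random_solution():
        return [random.randint(lower, upper) for _ in range(n_products)]

    def crossover(a, b):
        cut = random.randrange(1, n_products)
        return a[:cut] + b[cut:]

    def mutate(sol):
        child = sol[:]
        i = random.randrange(n_products)
        child[i] = min(upper, max(lower, child[i] + random.randint(-10, 10)))
        return child

    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate_lead_time)        # shorter lead time = fitter
        parents = population[: pop_size // 2]
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return min(population, key=evaluate_lead_time)
```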
4.4 Inventory Situation As we can see from the following figure, the mean inventory level (Q1/2, Q2/2) can be reduced by shortening the replenishment period (R1, R2) (Silver et al, 1998; Nahmias, 2005).
Fig. 5 Stock movement
The mean inventory level for calculating the inventory cost is evaluated by
$$Inv_k = Q_k / 2, \qquad (7)$$
where $Q_k$ is the production order quantity of product k.
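Combining Eq. (7) with a constant demand rate $D$, so that $Q_k = D \times R_k$ for a replenishment period $R_k$, makes the statement above explicit; this is a textbook identity rather than case-specific data:
$$Inv_k = \frac{Q_k}{2} = \frac{D \times R_k}{2},$$
so shortening the replenishment period $R_k$ proportionally reduces the mean inventory level and the corresponding part of the inventory cost $C_5$.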
4.5 Scenarios
We do not try to mathematically optimize every single part of the evaluation framework; rather, we aim at improving the overall system performance. This gives a broader perspective, because feasible solutions to "real" problems are provided instead of an optimum for a mathematical abstraction (Silver, 2004). We develop six different
scenarios to test and evaluate the effect of lead time reduction activities on the total costs of the system under study. The scenarios describe the actual setting, one batch size optimization approach, two resource pooling approaches and two combinations of resource pooling and optimization of the batch size (see also Table 1).

Table 1 Applied scenarios to test and evaluate the impact of lead time reduction on costs

| Scenario Number | Scenario Description |
|---|---|
| Scenario 1 | It represents the actual setting. The results are the basis to evaluate the improvements of the other scenarios. |
| Scenario 1.1 | We perform a batch size optimization according to Sect. 4 on the initial model (Scenario 1). |
| Scenario 2 | Instead of 3 packers and 3 transfer operators, we run the model with 4 workers responsible for packing and transfer as well. Packers and transfer operators have identical skills. That is why they have the same cost structure. |
| Scenario 2.1 | We perform a batch size optimization according to Sect. 4 on Scenario 2. |
| Scenario 3 | Instead of 4 operators and 3 packers, we run the model with 5 operators (responsible for pre-work, extrusion and packing). Operators are highly skilled and therefore the most expensive labor at the plant. |
| Scenario 3.1 | We perform a batch size optimization according to Sect. 4 on Scenario 3. |
Based on the unstructured interviews with reliable supply chain managers, we were able to estimate costs which represent the penalty cost of our evaluation framework. Currently, a customer service level of 80 percent is reached. In the case of a stock-out, much work has to be done to keep the customer satisfied, but it cannot be avoided that some of the prospective customer orders will be lost. According to the interviews, we assume that a reduction in lead time will result in a proportional reduction of the penalty cost. Moreover, it is not possible to achieve significant distinguishing characteristics for these products compared to the competitors. Therefore, the customer service level is also a strong indicator of customer satisfaction and customer retention. The variable and fixed components of the involved resource costs (extruder, operator, packer, and transfer operator), the setup costs, the WIP costs (tied capital) as well as the inventory costs were assessed by studying the underlying cost accounting system. The overall costs are calculated based on the cost drivers provided by our evaluation framework. We run the model for a production period of T = one year (see nomenclature).
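Written out, the proportionality assumption stated above can be read as scaling the penalty cost with the relative lead time, i.e., $C_4^{\text{new}} \approx C_4^{\text{base}} \times LT^{\text{new}}/LT^{\text{base}}$ (our notation, chosen for illustration; the exact translation into the scenario figures of Sect. 4.6 may involve further components such as the service level reached).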
4.6 Results
The results obtained, including the utilization of the bottleneck resource (extruder), are represented in Table 2. All scenarios (Sc) are compared with Scenario 1. Scenario 1 provides the basic values, which are set to 100% for costs and lead time.
Table 2 Achieved results

| | Sc 1 | Sc 1.1 | Sc 2 | Sc 2.1 | Sc 3 | Sc 3.1 |
|---|---|---|---|---|---|---|
| Idle time extruder | 17.32% | 8.58% | 17.32% | 7.33% | 21% | 10.44% |
| Setup time extruder | 8.07% | 8.68% | 8.07% | 10.02% | 7.93% | 9.97% |
| Run time extruder | 62.82% | 67.57% | 62.82% | 67.57% | 62.63% | 62.63% |
| Wait-for-labor extruder | 10.69% | 13.96% | 10.69% | 14.24% | 7.4% | 11.41% |
| Repair time extruder | 1.09% | 1.2% | 1.09% | 1.22% | 1.04% | 1.18% |
| Total utilization extruder | 82.68% | 91.42% | 82.68% | 92.67% | 79% | 89.56% |
| Lead time | 100% | 29.70% | 94.79% | 29.19% | 91.61% | 24.93% |
| Resource cost (including setup cost) | 100% | 101.4% | 89.55% | 91.74% | 90.26% | 90.52% |
| WIP cost | 100% | 29.70% | 94.79% | 29.19% | 91.61% | 24.93% |
| Inventory cost | 100% | 35.71% | 100% | 35.71% | 92.86% | 28.57% |
| Penalty cost | 100% | 35.92% | 95.84% | 35.76% | 88.13% | 28.66% |
| Total cost | 100% | 99.28% | 89.77% | 89.87% | 90.22% | 88.47% |
For all scenarios with a reduced lead time, it can be assumed that the underlying scrap rate will not change, since it is the waiting times that are optimized. The processing speed is defined by a chemical process and can therefore not be changed. That is also the reason why the batch size does not have an influence on the scrap rate. It is possible to shorten the lead time by over 70% in Scenario 1.1. Inventory, WIP and penalty costs can be reduced as well. Only the resource costs are slightly higher, because of the higher frequency of setups and the underlying cost structure (variable and fixed costs) of the extruder. Nevertheless, the total costs are marginally lower. In Scenarios 2 and 2.1 it is possible to achieve lead time reductions and WIP cost savings of 5% and 70%, respectively. Based on this, it is also possible to decrease the penalty costs by 4% and 65%, respectively. The total costs were also lowered. In Scenarios 3 and 3.1 we are able to lower the lead time and the WIP cost by 8% and 75%. As a consequence, the penalty costs are reduced by 12% and 71%. The total costs were lowered by 10% and 11%, respectively.
5 Conclusion
We were able to show the dynamic dependencies between effectiveness performance measures and efficiency performance measures. Based on an appropriate evaluation of lead time reduction, it is not only possible to reduce the WIP and the overall system costs but also to increase the service level (a cost driver of the penalty cost) and therefore also customer satisfaction as well as customer retention. In particular, we were able to provide, based on open queueing network models, more insight into the complex relationship between lead time reduction and financial results. Additionally, we also provided some techniques to reduce lead time in a fast and efficient way, i.e., resource pooling as well as batch size optimization.
We demonstrated that it was possible to reduce the lead time within a specific empirical setting by 75% without changing the whole production layout or making high investments. In parallel, we were able to reduce the overall costs by about 11.5%. Such a significant reduction was available because the current cost accounting approach calculates the optimal batch size according to resource costs (Scenario 1) without paying any attention to the lead time impacts. For companies, this approach will develop the capability to reduce their costs on the one hand and to increase customer satisfaction as well as revenue on the other hand. Nevertheless, some further research is necessary. A first step is to enhance the evaluation framework to a closed-loop dynamic model that also takes into consideration customer satisfaction, customer retention, revenue, profit, investment into resources and impact on demand. The input of the system (capacity, demand, etc.) is sensitive to the output (performance measures) and vice versa. For instance, reduced lead times and high service levels can increase customer retention and have a positive (direct or indirect) effect on the future demand (Reiner, 2005). Such effects can also be considered in building more accurate demand forecasting models where demand is influenced by lead time (e.g., Yang and Geunes, 2007). We believe these results to be interesting for both academics (theoreticians of the OM domain investigating the applicability and contribution of their theories – in parallel to our research interests – in real life problems) and practitioners (managers seeking applicable solutions from the literature) because an efficient evaluation framework is given.
Acknowledgements This work is supported by the SEVENTH FRAMEWORK PROGRAMME – THE PEOPLE PROGRAMME – Marie Curie Industry-Academia Partnerships and Pathways Project (No. 217891) "Keeping jobs in Europe".
References
Askenazy P, Thesmar D, Thoenig M (2006) On the relation between organisational practices and new technologies: the role of (time-based) competition. Economic Journal 116(508):128–154
Beamon B (1999) Measuring supply chain performance. International Journal of Operations and Production Management 19(3):275–292
Bodie Z, Kane A, Marcus A (2003) Essentials of investments, 5th edn. McGraw-Hill Irwin, New York
Bolch G, Greiner S, Meer H, Trivedi K (2006) Queueing networks and Markov chains: modeling and performance evaluation with computer science applications. John Wiley and Sons, New Jersey
Christopher M, Towill D (2000) Supply chain migration from lean and functional to agile and customised. Supply Chain Management: An International Journal 5(4):206–213
Croom S (2009) Introduction to research methodology in operations. In: Karlsson C (ed) Researching operations management, 1st edn, Routledge, New York
Ghalayini A, Noble J (1996) The changing basis of performance measurement. International Journal of Operations and Production Management 16(8):63–80
Govil M, Fu M (1999) Queueing theory in manufacturing: a survey. Journal of Manufacturing Systems 18(3):214–240
Grünberg T (2004) Towards a method for finding and prioritising potential performance improvement areas in manufacturing operations. International Journal of Productivity and Performance Management 53(1):52–71
Hammel T, Phelps T, Kuettner D (2002) The re-engineering of Hewlett-Packard's CD-RW supply chain. Supply Chain Management: An International Journal 7(3):113–118
Haque L, Armstrong M (2006) A survey of the machine interference problem. European Journal of Operational Research 179(2):469–482
Hill A, Collier D, Froehle C, Goodale J, Metters R, Verma R (2002) Research opportunities in service process design. Journal of Operations Management 20(2):189–202
Hofmann P, Reiner G (2006) Drivers for improving supply chain performance: an empirical study. International Journal of Integrated Supply Management 2(3):214–230
Hopp W, Spearman M (2000) Factory physics: foundations of manufacturing management. McGraw-Hill Irwin, New York
Johnson H (2006) Lean accounting: To become lean, shed accounting. Journal of Cost Management 20(1):6–17
Kaplan R, Norton D (1997) The balanced scorecard: translating strategy into action, 4th edn. Harvard Business School Press, Boston
Karmarkar U (1987) Lot sizes, lead times and in-process inventories. Management Science 33(3):409–418
Karmarkar U, Kekre S, Kekre S (1985a) Lotsizing in multi-item multi-machine job shops. IIE Transactions 17(3):290–298
Karmarkar U, Kekre S, Kekre S, Freeman S (1985b) Lot-sizing and lead-time performance in a manufacturing cell. Interfaces 15(2):1–9
Koo PH, Bulfin R, Koh S (2007) Determination of batch size at a bottleneck machine in manufacturing systems. International Journal of Production Research 45(5):1215–1231
Kuehn P (1979) Approximate analysis of general queuing networks by decomposition. IEEE Transactions on Communications 27(1):113–126
Lee H (2002) Aligning supply chain strategies with product uncertainties. California Management Review 44(3):105–119
Li Z, Xu X, Kumar A (2007) Supply chain performance evaluation from structural and operational levels. Emerging Technologies and Factory Automation pp 1131–1140
Little J (1961) A proof for the queuing formula: L = λW. Operations Research 9(3):383–387
Maskell B, Kennedy F (2007) Why do we need lean accounting and how does it work? Journal of Corporate Accounting & Finance 18(3):59–73
Mason-Jones R, Naylor B, Towill D (2000) Lean, agile or leagile? Matching your supply chain to the marketplace. International Journal of Production Research 38(17):4061–4070
Maynard R (2008) Lean accounting. Financial Management pp 43–46
Van der Merwe A (2008) Debating the principles: Asking questions of lean accounting. Cost Accounting pp 29–36
Nahmias S (2005) Production and operations analysis. McGraw-Hill Irwin, Boston
Naylor B, Naim M, Berry D (1999) Leagility: integrating the lean and agile manufacturing paradigms in the total supply chain. International Journal of Production Economics 62(1-2):107–118
Neely A (1999) The performance measurement revolution: why now and what next? International Journal of Operations and Production Management 19:205–228
Neely A, Mills J, Platts K, Richards H, Gregory M, Bourne M, Kennerley M (2000) Performance measurement system design: developing and testing a process-based approach. International Journal of Operations and Production Management 20(10):1119–1145
Rabta B (2009) A review of decomposition methods for open queueing networks. In: Reiner G (ed) Rapid modelling for increasing competitiveness: tools and mindset, Springer, London
Rabta B, Reiner G (2010) Batch size optimization by means of evolutionary algorithms and queuing network analysis. University of Neuchâtel, working paper
Rabta B, Alp A, Reiner G (2009) Queueing networks modelling software for manufacturing. In: Reiner G (ed) Rapid modelling for increasing competitiveness: tools and mindset, Springer, London
Reiner G (2005) Customer-oriented improvement and evaluation of supply chain processes supported by simulation models. International Journal of Production Economics 96(3):381–395
Reiner G (2009) Rapid modelling for increasing competitiveness: tools and mindset. Springer, London
Reiner G, Hofmann P (2006) Efficiency analysis of supply chain processes. International Journal of Production Research 44(23):5065–5087
Santos S, Belton V, Howick S (2002) Adding value to performance measurement by using system dynamics and multicriteria analysis. International Journal of Operations and Production Management 22(11):1246–1272
Shanthikumar J, Ding S, Zhang M (2007) Queueing theory for semiconductor manufacturing systems: A survey and open problems. IEEE Transactions on Automation Science and Engineering 4(4):513–522
Silver E (2004) Process management instead of operations management. Manufacturing & Service Operations Management 6(4):273–279
Silver E, Pyke D, Peterson R (1998) Inventory management and production planning and scheduling. Wiley, New York
Slack N, Lewis M (2007) Operations strategy, 2nd edn. Prentice Hall International, Harlow
Stalk J, Hout T (1990) Competing against time: how time-based competition is reshaping global markets. Free Press, New York
Suri R (1998) Quick response manufacturing: a companywide approach to reducing lead times. Productivity Press
Suri R, Sanders J, Kamath M (1993) Performance evaluation of production networks. In: Kan S, Zipkin P (eds) Logistics and production inventory (Handbooks in operations research and management science), vol 4, Elsevier Science Publishers B.V., Amsterdam
de Treville S, van Ackere A (2006) Equipping students to reduce lead times: The role of queuing-theory-based modeling. Interfaces 36(2):165
de Treville S, Shapiro R, Hameri A (2004) From supply chain to demand chain: the role of lead time reduction in improving demand chain performance. Journal of Operations Management 21(6):613–627
de Treville S, Hoffrage U, Petty J (2009) Managerial decision making and lead times: the impact of cognitive illusions. In: Reiner G (ed) Rapid modelling for increasing competitiveness: tools and mindset, Springer, London
Vaughan T (2004) Lot size effects on process lead time, lead time demand, and safety stock. International Journal of Production Economics 100(1):1–9
Whitt W (1983) The queueing network analyzer. Bell System Technical Journal 62(9):2779–2815
Yang B, Geunes J (2007) Inventory and lead time planning with lead-time-sensitive demand. IIE Transactions 39(5):439–452
Zheng P, Lai K (2008) A rough set approach on supply chain dynamic performance measurement. Springer-Verlag, Berlin
Zipkin P (1986) Models for design and control of stochastic, multi-item batch production systems. Operations Research 34(1):91–104
The Financial Impact of a Rapid Modeling Issue: the Case of Lot Sizing Lien G. Perdu and Nico J. Vandaele
Abstract The purpose of this paper is to convince the reader of the usefulness of an integrated financial-operational model and simultaneously to help understand the complex relationships between operational decisions and their influence on the bottom line of the company. The problem is that many operational models optimize operational performance measures instead of financial ones. Even when they optimize a financial objective function, the cost of capital, interest or taxes are mostly not taken into account. That is why we build an integrated model that takes into account the cost of all capital. Key words: integrated operational-financial model, lot sizing, queueing, stochastic optimization
1 Introduction and Literature Overview
This paper focuses on integrated operational-financial models. We choose to focus on one specific operational model, which is the stochastic lot sizing problem. This problem is typically used for midterm decision making. Decision variables of interest are lot size and overtime. For now we are working with the single-product single-server case in order to gain insight. Later on, the multi-server multi-product case should give additional insights. Setup times, process times and interarrival times are stochastic variables and individual arrivals are considered instead of batch arrivals.
Lien G. Perdu (B) Dept of Business and Economics, K.U. Leuven, Naamsestraat 69, BE-3000 Leuven, Belgium, e-mail:
[email protected] Nico J. Vandaele Dept of Business and Economics, K.U. Leuven Campus Kortrijk, E. Sabbelaan 53, BE-8500 Kortrijk, Belgium, e-mail:
[email protected]
The choice for our financial objective function should be clarified. Nowadays, the priority for firms is to maximize shareholder value and to have an effective communication tool for contacts with the capital market (Guillén et al, 2006; Young and O'Byrne, 2001). It is in times of economic downturn that the effects of shorter lead times and increasing customer service on the bottom line of the company gain importance. Different financial measures exist for measuring shareholder value. We make a distinction between flow measures and stock measures. Flow measures measure the creation of shareholder value during a certain period in time. Stock measures measure the value of the company at a certain moment in time; they are in fact a snapshot. A well-known measure is net present value, which measures the present value of future cash flows minus the initial investments. Unfortunately, it is a stock measure and it is not useful to measure the value over the period of one year. EVA (economic value added) seems to be the most applicable measure for our purposes (Perdu and Vandaele, 2010). This performance measure implements the idea of revenues covering not only all operating costs but also all capital costs (including the cost of equity finance). EVA is midterm oriented and therefore is able to capture the effects of lot size changes or changes in overtime, which are the two tactical level decisions we wish to study. Large capacity extensions are long term oriented and will not be considered for the moment. Furthermore, for this paper, a single period model is considered, which means that average monthly demand does not change. As far as we know, little is available in the literature on a joint combination of queueing models and the maximization of shareholder wealth. For this literature overview we focus on the one hand on operational queueing models with pure operational objectives as well as cost-related objectives. On the other hand, some papers are presented that include shareholder wealth as an objective function on top of an operational model (other than a queueing model). This overview ends with some relevant literature concerning lead time reduction. The first stream of papers deals with operational queueing models combined with batching. Tactical-level decisions such as lot size decisions and decisions concerning overtime (Govil and Fu, 1999; Gunasekaran et al, 2001) are interesting in such models because they determine operational (and financial) performance and it is possible to implement changes in the mid-term. Pioneering work can be found in Karmarkar et al (1983, 1985a,b, 1986). Other examples including extensions or relaxations of the arrival process can be found in Lambrecht et al (1996); Lambrecht and Vandaele (1996); Tielemans and Kuik (1996); Enns and Li (2004); Lambrecht et al (1998); Zipkin (1986). A second stream of papers on operational queueing models includes costs or profits. The importance of lead time related costs, which are often ignored in traditional lot-sizing models, is emphasized in Karmarkar (1987), who studies the relationships between lot sizes, lead times and work-in-process for batch manufacturing shops with queues. Operational queueing models that include cost and profit parameters can be found in Bertrand (1985); Zipkin (1986); Missbauer (2002); Hopp et al (2002); Williams (1984); Enns and Choi (2002); Enns and Li (2004).
The following three papers discuss operational models combined with the maximization of shareholder value. Guillen et al (2007) derive an integrated model for supply chain management which includes budgetary constraints as well as process operations. The authors choose to optimize the change in equity in the integrated model, because maximizing the change in equity directly enhances shareholder value. Yi and Reklaitis (2004) present a production planning model that simultaneously takes into account production and financing constraints for a batch-storage network. A key assumption is that cash availability is limited. The objective was to minimize the opportunity costs of annualized capital investment and cash/material inventory minus stockholder benefits. Taking into account the financial constraints decreased the optimal lot and storage sizes. Another suitable example can be found in Hahn and Kuhn (2009). Their working paper focuses on sales and operations planning. Importantly, they include the financial flows and the impact on shareholder value creation, which is often omitted. The authors optimize the Economic Value Added on top of a planning model. Lead time reduction is key in our research. A short literature overview of the effects of a shorter lead time is thus in order. Short lead times are a major source of competitive advantage and impact customer satisfaction (Kenyon et al, 2005; Kuik and Tielemans, 1998, 2004). The consequences of the way in which lead time reduction is realized, the external consequences and the internal consequences can be found in Wouters (1991) and Kenyon (1997). Ray and Jewkes (2004) stress the interdependence between demand, price and delivery time. The paper is organized as follows. We will first discuss Figs. 1 and 2, which illustrate the problem we are working on. Next, a case study will provide some numerical examples on the implications of the integrated model. Finally, we will highlight the key findings of our research.
2 The Problem Setting The integrated operational-financial model is shown in Fig. 1. The figure is a summary of the relationships that exist between some operational parameters and financial objectives. The full lines present a positive correlation. The dotted lines present a reverse effect: if one side is increasing, the other side is decreasing. The upper part of the figure indicates the key components of the EVA performance measure. These are net sales on the one hand and capital charge, fixed costs and variable costs on the other hand. The two operational decision variables, lot size and overtime, can be found in the two dark-colored rectangles of the figure. Some operational relationships that can be found in the lower left corner of the figure, need further attention. Cycle time, throughput and work-in-process inventory are related to each other through Little’s Law. The decisions on lot size and overtime will determine cycle time, throughput and work-in-process. Lot size does not just increase waiting time and therefore increases cycle time, but it also decreases setup time and thus decreases cycle time. The effect of lot size on cycle time has to be
determined depending on the parameters. Overtime influences bottleneck capacity and thus cycle time and work-in-process. We try to show how the operational parameters influence the EVA, starting with the variable "lot size". Lot size influences the holding cost through inventory and the setup cost, which in their turn influence the variable cost. The higher the variable cost, the lower the margin and the EVA. But if we go in the opposite direction, one can imagine that lot size also influences lead time, which in turn influences demand or the market price. Depending on the industry, customers can be willing to pay a little bit more for a product that has a relatively shorter lead time compared to the average lead time in the industry. Or, demand can increase as a consequence of shorter lead times because these may increase customer satisfaction (Wouters, 1991). This influences the margin and the economic value added in a positive way.
Fig. 1 Relationship diagram
Overtime equally influences some financial parameters. Overtime is directly linked to labor cost which increases the variable cost. Overtime also influences the lot size: as overtime is increasing, the lot size can increase because more “capacity” is available. Because we are working in a midterm time horizon, we assume capacity to stay fixed. This means that the fixed costs stay fixed because these fixed costs relate to depreciation. The main conclusion from this diagram is that from a financial point of view, lot size and overtime can be seen as two important cost drivers. Other interesting variables could be subcontracting and small capacity extensions. In the future we
Fig. 2 The influence of operational and financial decisions on the profit and loss and balance sheet elements
would like to integrate them into our model. These operational parameters influence each other in a complex way which makes it difficult to find the optimal values. We stress the importance of including the relevant elements of the balance sheet and the profit and loss accounts in the model. It is important to take into account the cost of all capital. This means that we should also ask for a return on the working capital because it is part of the invested capital. Working capital equals cash plus inventories plus accounts receivable minus accounts payable. This is closely linked to the cash-to-cash cycle of the company. This is the time between the moment that raw material is paid to the supplier and the moment that the customer pays for the product. The longer the lead time, the longer the cash-to-cash cycle and the more money gets stuck in working capital requirements. A second figure shows the influences of operational decisions as well as financial decisions on the elements of the balance sheet on the one hand and the elements of the profit and loss accounts on the other hand. On the right hand side, the different mid-term operational decisions are shown. The arrows show what elements of the balance sheet and the profit & loss accounts are influenced by these decisions. Next to the operational decisions, there are some financial decisions that can be taken. The financial decisions are decisions on factoring, early payment, loan and investment. These are not yet integrated in our model but we would like to integrate them in the future. The only thing that is included is that excess cash (this is the cash that exceeds a certain minimum amount) will be
invested for the period of one month. Decisions concerning loans also influence the cost of capital of the firm through the Weighted Average Cost of Capital (WACC). WACC is calculated using the following formula:
$$\text{WACC} = \frac{\text{debt}}{\text{total financing}} \times \text{cost of debt} \times (1 - \text{tax rate}) + \frac{\text{equity}}{\text{total financing}} \times \text{cost of equity}.$$
There is also an arrow pointing at net sales. This is due to the fact that the operational decisions influence the lead time, which in turn influences the price of the product. This was already explained in the previous section. A last thing to notice is that we are working in the midterm, which means that we will not make any large capacity additions. However, in the future this may be one of the relevant decisions to make, and consequently this will influence depreciation and fixed assets.
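A small numerical check with made-up financing figures (not the case data, where the WACC is simply set to 0.06) shows how the formula is applied:
$$\text{WACC} = 0.4 \times 0.05 \times (1 - 0.4) + 0.6 \times 0.10 = 0.012 + 0.060 = 0.072,$$
i.e., with 40% debt at a 5% pre-tax cost, 60% equity at a 10% cost and a 40% tax rate, each unit of invested capital has to earn 7.2% per year before any value is created.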
3 The Integrated Operational-Financial Model The elaboration of the theoretical model can be found in Perdu and Vandaele (2010). A summary of the most important issues is given in this section. For the derivation of the formulas, we refer to the paper. The starting point of the model is the operational model of Lambrecht et al (1996). The authors derive a general approximation for the single product lot-sizing model with queuing delays in a make-to-order environment. It is a stochastic environment which means that setup times, process times and interarrival times are stochastic. The lead time consists of the collecting of the individual arrivals in a batch, the waiting time of the complete batches in queue, the setup time of a batch and the processing time of the individual products. Once a product is ready, it can leave the system. The expected waiting time in the GI/G/1 queue is approximated by means of existing approximate results of Whitt (1983). The decision variable in this model is the lot size. The model finds the optimal lot size that minimizes the average expected lead time. This model is extended in Perdu and Vandaele (2010) with a new decision variable: overtime. An extension allows us to capture a broader effect of operational decision making on the financial bottom line. Next, we added a new objective function to this elaborated operational model. The goal is to maximize the EVA over the period of one year with time lags of one month. EVA is best suited for our purposes. The advantages of this performance measure are that it takes into account the cost of all capital and that it compels management to generate returns on working capital. EVA can be used for mid-term performance measuring, which fits well with the tactical level decisions we wish to study. EVA equals net operating profit after taxes minus the invested capital times the cost of capital (WACC). The operational and financial constraints can be found in the paper. We should however elaborate on how we modeled price in our model.
We model price as a function of the lead time. Following Kenyon, we assume that customers may be willing to pay more for the same product if the lead time is shorter. In practice, this is done by comparing the lead time of the company with the industry lead time. If the company can offer the product in a time that is shorter than the industry lead time, the company can ask a higher price for the product. The sensitiveness of the customers is obviously dependent on the industry. That is why we introduce a sensitiveness parameter to indicate the sensitiveness of the customers to short lead times. This sensitiveness parameter has a value between zero and one. If it is zero, the customers are not at all sensitive to lead times; it does not matter whether or not the lead time is short. The other extreme is a sensitiveness parameter equal to one, which means that having short lead times strongly influences customer behavior. The incorporation of lead time into the objective function allows us to account for cost reductions due to lead time reduction. The integrated model explicitly takes into account the costs of work-in-process inventory. This is in accordance to Bertrand (1985) who states that the cost for a higher work-in-process should be accounted for. If not, substantial errors in both batch size and cost can occur. Kuik and Tielemans (2004) also stress the importance of including work-in-process inventory in cost functions. A last remark about the model is that we assume that the setup cost only includes true costs such as cleaning of the machines, consumables, tests on product quality, . . . Karmarkar (1987) argues that the fixed cost component of setups should be distinguished from the opportunity cost of lost production capacity. That is why we explicitly do not include a time component within the setup cost. The opportunity cost is indirectly modeled because an increase in setup time induces an increase in lead time which results in a lower price of the product. This all influences the EVA in a negative way. This model can be solved in C++ using a steepest ascent method. If the EVA turns out to be positive, the company has earned more after-tax operating income than the cost of invested capital.
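To make the mechanics described in this section tangible, the following sketch strings together the pieces discussed above for a single station: a lead time that depends on the lot size (batch collection, a Kingman-type queueing delay, setup and processing) and a lead-time-dependent price premium governed by a sensitiveness parameter. It is a simplified stand-in, not the model of Lambrecht et al (1996) or Perdu and Vandaele (2010), and all parameter names and values are illustrative; it will not reproduce the paper's numbers.

```python
def expected_lead_time(Q, lam, t_setup, t_unit, ca2=1.0, v_setup=0.0, v_unit=0.0):
    """Rough lead time for lot size Q at a single station: time to collect a
    batch, a Kingman-type queueing delay for batches, then setup and
    processing of the whole batch.

    lam     -- arrival rate of individual units
    t_setup -- mean setup time per batch   (variance v_setup)
    t_unit  -- mean process time per unit  (variance v_unit)
    ca2     -- SCV of the unit inter-arrival times
    """
    collect = (Q - 1) / (2.0 * lam)            # mean wait while the batch forms
    lam_b = lam / Q                            # batch arrival rate
    t_b = t_setup + Q * t_unit                 # mean batch service time
    v_b = v_setup + Q * v_unit                 # variance of batch service time
    rho = lam_b * t_b
    if rho >= 1.0:
        return float("inf")                    # station overloaded at this lot size
    ca2_b = ca2 / Q                            # batching smooths the arrival stream
    cs2_b = v_b / t_b ** 2
    wait = ((ca2_b + cs2_b) / 2.0) * (rho / (1.0 - rho)) * t_b
    return collect + wait + t_b

def quoted_price(lead_time, base_price, industry_lt, beta):
    """Price premium/discount proportional to the relative lead time advantage."""
    return base_price * (1.0 + beta * (industry_lt - lead_time) / industry_lt)

# Illustrative scan (generic numbers, not the paper's data): the lead time first
# falls and then rises in Q, and the attainable price moves the other way.
for Q in (17, 20, 30, 60):
    lt = expected_lead_time(Q, lam=1.0, t_setup=8.0, t_unit=0.5,
                            ca2=1.0, v_setup=4.0, v_unit=0.05)
    p = quoted_price(lt, base_price=100.0, industry_lt=60.0, beta=0.2)
    print(f"Q={Q:4d}  lead time={lt:7.1f} h  price={p:7.2f} m.u.")
```

Plugging such a lead-time-dependent price into the revenue side, and the batching-dependent setup, holding and overtime costs into the cost side, is what allows the EVA objective to trade the two effects off against each other.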
4 Case Study

For the case study, an analysis will be made using artificial data. The objective is to find relevant relationships between the parameters and the EVA. The operational data are based on Lambrecht et al (1996) and are set as follows.

Table 1 Operational data (times are expressed in hours)
                    Average   Variance   SCV
Interarrival time   1.0       0.5        0.5
Setup time          10        10         0.1
Processing time     0.5       0.0625     0.25
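As a rough numerical illustration of how an expected lead time can be evaluated as a function of the lot size from data such as these, the following sketch uses a simple Kingman-type GI/G/1 waiting-time approximation. It is not the Whitt (1983) approximation used in the paper, and the batch-forming and processing terms are simplified, so it will not reproduce the optimum reported below exactly; it only illustrates the shape of the computation.

```python
import math

# Data from Table 1 (hours). The waiting-time formula below is a simplified
# Kingman-type approximation, not the Whitt (1983) approximation of the paper.
lam = 1.0                     # arrival rate = 1 / mean interarrival time
scv_arrival = 0.5             # SCV of interarrival times
setup_mean, setup_scv = 10.0, 0.1
proc_mean, proc_scv = 0.5, 0.25

def approx_lead_time(q):
    """Approximate expected lead time for lot size q:
    batch-forming time + batch queueing time + setup + batch processing."""
    service_mean = setup_mean + q * proc_mean
    service_var = setup_scv * setup_mean ** 2 + q * proc_scv * proc_mean ** 2
    rho = (lam / q) * service_mean            # traffic intensity
    if rho >= 1.0:
        return math.inf                       # unstable: lot size too small
    scv_batch_arrivals = scv_arrival / q      # SCV of batch interarrival times
    scv_service = service_var / service_mean ** 2
    wq = 0.5 * (scv_batch_arrivals + scv_service) * rho / (1.0 - rho) * service_mean
    batch_forming = (q - 1) / (2.0 * lam)     # average wait to fill a batch
    return batch_forming + wq + service_mean

best_q = min(range(21, 80), key=approx_lead_time)
print(best_q, round(approx_lead_time(best_q), 2))
```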
Based on these purely operational data, the average lead time can be minimized using the steepest descent method. This results in an optimal batch size of 24.23 and a corresponding minimized average lead time of 29.22 hours. The adapted traffic intensity at the optimum equals 91.2%. The financial data are given in Table 2.

Table 2 Financial data

Cost parameters
Price (for an average lead time equal to β) = 85 m.u.
Unit raw material cost = 35 m.u.
Unit setup cost = 100 m.u.
Inventory holding cost per unit per time unit = 12 m.u.
Monthly labor cost (for φ = 1) = 22 000 m.u.
Monthly depreciation = 1 000 m.u.
Fixed assets at time zero = 60 000 m.u.
Cash at time zero = 2 000 m.u.
Minimum cash in stock = 2 000 m.u.
Cash in stock at time zero = 2 000 m.u.

Other parameters
Weighted average cost of capital (WACC) = 0.06
Tax rate = 0.4
Average lead time in industry = 0.05 month
Sensitivity of the industry = 0.1
If we use the batch size of 24.23 as input for the integrated model, together with φ equal to one, we get an EVA of 68 047 m.u. Maximizing the EVA as a function of batch size and overtime gives the results shown in Table 3, where they are compared with the results of the pure operational model of Lambrecht et al (1996).

Table 3 Results of the optimization
                                              Optimizing lead time   Optimizing EVA
EVA                                           68 047 m.u.            99 424 m.u.
Average lead time                             29.22 hours            48.39 hours
Average waiting time for the batch in queue   1.28 hours             1.67 hours
Occupation rate                               91.2%                  95.4%
Batch size                                    24.23                  42.82
Overtime (φ)                                  1                      0.69
When the EVA is maximized, it increases to 99 424 m.u. The average lead time, however, is larger (48 hours) than the average lead time obtained when lead time is minimized (29 hours). This means that, in this industry, it is more important to have less overtime (0.69 instead of 1) than to have a short lead time. As overtime is reduced, the batch size should be larger, and it has indeed increased to 42.82 in this specific example.
We can conclude that it is relatively cheaper to have a larger lead time combined with less overtime than the other way around. A second numerical example deals with the sensitivity parameter β. This parameter indicates the sensitivity of the industry to short lead times: the larger β, the more sensitive the customers in the industry are towards fast deliveries. Table 4 presents the optimal results for three different values of β.

Table 4 EVA as a function of β
                     β = 0.02        β = 0.1        β = 0.9
EVA                  120 243 m.u.    99 424 m.u.    149 058 m.u.
Lot size             73.78           42.82          21
Average lead time    80.51 hours     48.39 hours    25.50 hours
Overtime             0.59            0.69           1.21
As β increases, the average lead time decreases, which is achieved by increasing overtime and decreasing the lot size. This is in line with our expectations concerning the model, namely that a larger sensitivity leads to a lower lead time and vice versa. The holding cost for the work-in-process is 12 m.u. per product per time unit. Table 5 shows the results if we let this holding cost vary between 1 and 100. We expect that a higher unit holding cost for work-in-process will lead to less work-in-process, which can be achieved by a smaller batch size.

Table 5 EVA as a function of Ch
                 EVA             Lot size   Average lead time   Overtime
Ch = 1 m.u.      103 100 m.u.    43.91      49.52 hours         0.68
Ch = 12 m.u.     99 424 m.u.     42.82      48.39 hours         0.69
Ch = 100 m.u.    72 450 m.u.     36.76      42.08 hours         0.74
As the work-in-process holding cost increases, the average lead time decreases. This follows directly from Little's law: average work-in-process equals average input rate times average lead time. To reduce the lead time, and with it the work-in-process, the lot size should be reduced and, consequently, overtime should be increased to allow smaller batch sizes. The weighted average cost of capital is currently set at 6%. The WACC is a consequence of how the company is financed and can be seen as an opportunity cost. If we set the WACC equal to 12%, the cost of capital increases and the EVA directly decreases, to 87 851 m.u.
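To make the Little's law relationship above concrete with the case data (a back-of-the-envelope illustration, not a figure reported in the paper): the arrival rate is one unit per hour (Table 1), so at the optimum for Ch = 12 m.u. the average work-in-process is roughly 1 × 48.39 ≈ 48 units, while for Ch = 100 m.u. it falls to roughly 1 × 42.08 ≈ 42 units; the hundred-fold increase in the unit holding cost thus removes about six units of average work-in-process.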
5 Conclusion

This paper shows the usefulness of an integrated operational-financial model using a case study. The optimal values of the operational parameters can only be found with an integrated model, not with a purely operational or a purely financial model: the lot sizes that minimize lead time differ from the lot sizes that maximize EVA. Furthermore, the optimal lot sizes depend on the parameters of the company and the industry. We choose to maximize the EVA rather than profits because the final goal of the firm is to maximize shareholder value. This work focuses on the single-product, single-server case. We intend to extend this research to the multi-product, multi-server case to find additional insights.
References

Bertrand J (1985) Multiproduct optimal batch sizes with in-process inventories and multi work centers. IIE Transactions 17(2):157–163
Enns S, Choi S (2002) Use of GI/G/1 queuing approximations to set tactical parameters for the simulation of MRP systems. In: Proceedings of the 34th conference on Winter simulation: exploring new frontiers, Winter Simulation Conference, pp 1123–1129
Enns ST, Li L (2004) Optimal lot-sizing with capacity constraints and autocorrelated interarrival times. In: WSC '04: Proceedings of the 36th conference on Winter simulation, Winter Simulation Conference, pp 1073–1078
Govil M, Fu M (1999) Queueing theory in manufacturing: A survey. Journal of Manufacturing Systems 18(3):214–240
Guillén G, Badell M, Espuna A, Puigjaner L (2006) Simultaneous optimization of process operations and financial decisions to enhance the integrated planning/scheduling of chemical supply chains. Computers and Chemical Engineering 30(3):421–436
Guillén G, Badell M, Puigjaner L (2007) A holistic framework for short-term supply chain management integrating production and corporate financial planning. International Journal of Production Economics 106(1):288–306
Gunasekaran A, Patel C, Tirtiroglu E (2001) Performance measures and metrics in a supply chain environment. International Journal of Operations and Production Management 21(1/2):71–87
Hahn G, Kuhn H (2009) Value-based supply chain planning: optimizing for superior financial performance. Working paper (Private Communication)
Hopp W, Spearman M, Chayet S, Donohue K, Gel E (2002) Using an optimized queueing network model to support wafer fab design. IIE Transactions 34(2):119–130
Karmarkar U (1987) Lot sizes, lead times and in-process inventories. Management Science 33(3):409–418
Karmarkar U, Kekre S, Kekre S (1983) Multi-item lot sizing and manufacturing leadtimes. Graduate School of Management, Working paper, University of Rochester, Rochester, New York (QM8325)
Karmarkar U, Kekre S, Kekre S (1985a) Lotsizing in multi-item multi-machine job shops. IIE Transactions 17(3):290–298
Karmarkar U, Kekre S, Kekre S, Freeman S (1985b) Lot-sizing and lead-time performance in a manufacturing cell. Interfaces 15(2):1–9
Karmarkar U, Kekre S, Kekre S (1986) Multi-item batching and minimization of queueing delays (QM8325)
Kenyon G (1997) A profit-based lot-sizing model for the n-job, m-machine job shop: Incorporating quality, capacity, and cycle time. PhD thesis, Texas Tech University
Kenyon G, Canel C, Neureuther B (2005) The impact of lot-sizing on net profits and cycle times in the n-job, m-machine job shop with both discrete and batch processing. International Journal of Production Economics 97(3):263–278
Kuik R, Tielemans P (1998) Analysis of expected queueing delays for decision making in production planning. European Journal of Operational Research 110(3):658–681
Kuik R, Tielemans P (2004) Expected time in system analysis of a single-machine multi-item processing center. European Journal of Operational Research 156(2):287–304
Lambrecht M, Vandaele N (1996) A general approximation for the single product lot sizing model with queueing delays. European Journal of Operational Research 95(1):73–88
Lambrecht M, Chen S, Vandaele N (1996) A lot sizing model with queueing delays: The issue of safety time. European Journal of Operational Research 89(2):269–276
Lambrecht M, Ivens P, Vandaele N (1998) A capacity and lead time integrated procedure for scheduling. Management Science 44(11):1548–1561
Missbauer H (2002) Lot sizing in workload control systems. Production Planning & Control 13(7):649–664
Perdu L, Vandaele N (2010) The stochastic lot sizing problem from a financial perspective
Ray S, Jewkes E (2004) Customer lead time management when both demand and price are lead time sensitive. European Journal of Operational Research 153(3):769–781
Tielemans P, Kuik R (1996) An exploration of models that minimize leadtime through batching of arrived orders. European Journal of Operational Research 95(2):374–389
Whitt W (1983) The queueing network analyzer. Bell System Technical Journal 62(9):2779–2815
Williams T (1984) Special products and uncertainty in production/inventory systems. European Journal of Operational Research 15(1):46–54
Wouters M (1991) Economic evaluation of leadtime reduction. International Journal of Production Economics 22(2):111–120
Yi G, Reklaitis G (2004) Optimal design of batch-storage network with financial transactions and cash flows. AIChE Journal 50(11):2849–2865
Young S, O'Byrne S (2001) EVA and value based management: a practical guide to implementation. McGraw-Hill
Zipkin P (1986) Models for design and control of stochastic, multi-item batch production systems. Operations Research 34(1):91–104
Part V
Product and Process Development
A Flexibility Based Rapid Response Model in Ready to Wear Sector, in Turkey

Müjde Erol Genevois and Deniz Yensarfati
Abstract In this study, a decision making model is developed by combining fuzzy analytic hierarchy process (FAHP) and quality function deployment (QFD). The purpose of this article is to provide a quick solution for dealing with the uncertainties and risks of the Turkish ready-to-wear textile sector. To accomplish this purpose, customer needs are detailed and ranked using FAHP, and then a two-staged QFD is applied: a house of flexibility is constructed to relate customer needs to management and manufacturing flexibility levers, and a second house is then built to break the flexibility levers down into system factors. The application of this combined approach is investigated in an international women's ready-to-wear firm based in Turkey which targets the high consumer segment. Key words: flexibility management, QFD, house of flexibility
Müjde Erol Genevois and Deniz Yensarfati, Industrial Engineering Department, Galatasaray University, Ciragan Cad. No: 36 Ortakoy, Istanbul, Turkey

1 Introduction

In today's fast-changing business environment, manufacturing enterprises face severe competition. While global markets enable the purchase of any supply or end product from any distant producer, customers demand faster response to requests, shorter lead times, and higher product and service quality. Customers have thus become less predictable in their purchasing behavior, and all of these issues lead to higher uncertainty and variability for manufacturers. In order to handle or mitigate the effects of these problems, firms need to be flexible in their processes and should develop
abilities to adapt to internal and external changes quickly. Flexibility is a necessity for them to remain competitive and profitable. The textile sector is a good example of such an environment, where internal and external changes occur frequently, uncertainty and risks always exist, and flexibility can be proposed as a solution to handle all of these issues. When the Turkish textile sector is investigated, total export volume and the ready-to-wear industry's share in this volume have increased significantly in recent years. Many Turkish manufacturers aim to be present in several markets with their own brands. In global terms, China, Taiwan, India, Pakistan and Turkey constitute 15% of the world's textile industry (Kanoglu and Öngüt, 2003). For these reasons, both the academic and the business world are interested in this industry. In order to sustain the growth of the sector in global markets, several approaches and strategies have been proposed, such as market entrance, existence and growth strategies. The purpose of this study is to aid the accurate evaluation of flexibility and to make sure that the best flexibility portfolio choice is made to respond to customer needs. A literature review shows that both qualitative and quantitative methods have been applied for flexibility valuation. In this article, the fuzzy analytic hierarchy process, a widely used multi-criteria decision-making method that integrates group decision making and fuzziness, and the quality function deployment method are combined. The application of this combined approach is investigated in an international women's ready-to-wear firm based in Turkey which targets the high consumer segment. The first section of this article is related to the concept of flexibility: a literature review is conducted regarding flexibility management, and various flexibility definitions and types are provided. Then the flexibility levers used in the application are defined; these levers are grouped under two categories, management flexibilities and manufacturing flexibilities. The second part of the study presents the QFD approach, a structured method for defining customer needs and transforming them into strategic plans; this methodology is investigated for both its advantages and its disadvantages. In the application, FAHP is applied to rank the importance of consumer needs; then a two-phased QFD approach is used, where a house of flexibility matches these customer needs with the flexibility capabilities the firm can acquire. Lastly, the results of the application are presented and discussed.
2 The Concept of Flexibility

In the literature, the concept of flexibility has been investigated extensively and defined in several studies. Mascarenhas (1981) is noted as one of the oldest studies on this topic; there, flexibility is defined as the ability of a manufacturing system to cope with environmental variations. Gerwin (1987) states it as the ability to respond effectively to changing circumstances. Later, Cox (1989) includes the time aspect in the flexibility concept and describes it as the quickness and ease with which plants can
respond to changes in market conditions. In the short term, flexibility means the ability to adapt to changing conditions using the existing set and amount of resources; in the long term, it measures the ability to introduce new products, new resources and production methods, and to integrate these into the existing production system (Olhager, 1993). Sethi and Sethi (1990) explain flexibility as the adaptability of a system to a wide range of possible environments. Ramasesh and Jayakumar (1991) include the financial aspect, defining flexibility as the ability of a manufacturing system to generate high net revenues consistently across all conceivable states of nature in which it may be called to function. Nagarur (1992) expresses flexibility as the ability of the manufacturing system to cope with changes such as product, process, load, and machine breakdown. Gupta and Goyal (1992) define it as the ability to cope with changing circumstances or instability caused by the environment. Newman et al (1993) characterize flexibility as a response to external uncertainty. Hyun and Ahn (1992) propose to divide the external dimension into proactive and reactive strategies. An adjustment or a response is proactive when the firm uses its knowledge to impose changes on the environment, for example responding to customer requests efficiently by incorporating its supplier's new technology to add value to the product portfolio. On the other hand, an adjustment or a response is reactive when the firm copes with changes imposed on it by external forces, for example incorporating a new feature into its product soon after a competitor does (Bernardes and Hanna, 2009). From another perspective, Upton (1994) splits flexibility into internal flexibility, defined as what the firm can do (competencies), and external flexibility, defined as what the customer sees (capabilities). Gerwin (1993) represents flexibility in four strategies: "adaptive" (defensive or reactive use to accommodate unknown uncertainty); "redefinition" (proactive use to raise customer expectations and gain competitive edge); "banking" (defensive use to accommodate known types of uncertainty); and "reduction" (the use of long-term contracts, total quality management). Wiendahl and Hernandez (2002) partition the flexibility concept into modifiability and versatility: modifiability requires the adaptation of production systems to changing environmental needs by changing the structure, character and number of resources of the production system, while versatility describes only the adaptation of production systems within the given available resources and organizational structure. Bernardes and Hanna (2009) state that flexibility is a change-management issue which seeks proactive solutions to expected situations. Genevois and Gürbüz (2009) view flexibility as the capability of adaptation to change; in their terms, it is a firm's strategic asset, not only to adapt to changes in the environment but also to leverage the environment for better performance. There are several studies on differentiating related concepts from flexibility. Backhouse and Burns (1999) and Wadhwa and Rao (2003) conclude that agility deals with unknown situations whereas flexibility corresponds to managing known issues. On the other hand, Stanev et al (2008) state that elasticity, agility, adaptability and sensitivity are synonyms for flexibility.
In light of this literature, a general definition of flexibility is adopted here: the system's capability of adaptation to change in a wide range of possible environments.
2.1 Necessity for Flexibility

Today, the global business environment is shifting rapidly; firms cannot survive in the market unless they respond to internal and external changes quickly. Intense foreign competition, rapid technological developments, mass-production capabilities, and shorter product life cycles and lead times force manufacturing firms to be flexible in all their processes. Furthermore, customers are less predictable in their purchasing behavior (Chandra et al, 2005). They expect the utmost from their suppliers: low cost, high quality, low defect rate, high product variety, on-the-spot delivery and maintenance without irritants. Many researchers propose flexibility as a solution to adapt to these conditions and gain competitive advantage in the market. Viswanadham and Srinivasa Raghavan (1997) propose flexibility as a tool to cope with uncertainties such as human and machine resource variations; design and demand changes for products; technological innovation such as implementing new hardware or software; and socio-political changes like deregulation. Regarding how flexibility can increase the efficiency of the supply chain, Zhang et al (2002) assert that flexibility should be established throughout the value chain of the manufacturing firm. As a result, firms can introduce new products quickly, support rapid product customization, shorten manufacturing lead times and costs for customized products, improve supplier performance, reduce inventory levels, and deliver products in a timely manner (Zhang et al, 2003). From the perspective of customer expectations, Genevois and Gürbüz (2009) list reasons why firms should be flexible: they need to make design changes quickly when competitors introduce new models and customers start switching supply sources; they should focus on volume flexibility when large customers reduce inventories and their demand rates become volatile; more flexible product mixes should be applied when importers or domestic competitors start offering multiple quality and price levels; and companies should respond quickly and supply new products and services when customers' tastes change quickly. Flexibility is needed to satisfy customer demand with respect to on-time delivery, to the right location, in the required quantity, of the right mix of products, at the most suitable price. Regarding financial aspects and profitability (Hill, 1995, provides detailed information), effective manufacturing management is not just about technology management; it is about configuring the entire manufacturing system to increase the firm's competitiveness and net profit. Several studies, namely Swamidass and Newell (1987) and Vickery et al (1997), find significant positive relationships between manufacturing flexibility and financial performance, and Gupta and Somers (1996) find a significant positive relationship between manufacturing flexibility and growth performance, a result also reported by Vickery et al (1997).
2.2 Types of Flexibility

In the literature there are many studies on how flexibility can be categorized and broken down into many types. In this section, a brief summary of the earlier views is provided for background information; following this literature survey, the frequently mentioned flexibility types are defined from prior studies. Slack (1988) models flexibility in two layers: system and resource. System flexibility refers to the manufacturing tasks in terms of product, mix, volume and delivery flexibility; resource flexibility corresponds to the different groups of flexibility which facilitate manufacturing tasks. A combination of the studies conducted by Browne et al (1984), Gupta and Goyal (1989) and Sethi and Sethi (1990) points out eleven types of flexibility: machine, material handling, operation, process, product, routing, volume, expansion, program, production and market flexibility. In the analysis of Beach et al (2000), the first three of the eleven types are considered basic system components and the remaining eight types apply to the manufacturing system as a whole; that study provides a diagrammatic interrelationship of these flexibilities. In summary, it is stated that flexibility can be classified according to how it is perceived (internal vs. external) and over what time period it is considered (long term vs. short term). Suárez et al (1991) propose a matrix structure where four flexibility types (mix, new product, volume and delivery time) are matched with need factors (product strategy, competitor behavior, product demand characteristics, and product life cycle) and source factors (production technology, production management techniques, human resources, relationship with subcontractors, supplier and distributor relationships, product design, and accounting and information systems). Upton (1994) emphasizes three generic elements that affect flexibility type: range (scope of the flexibility dimension), mobility (ability to transit within the range) and uniformity (the indifference in performance of possible locations within the range). Benjaafar and Ramakrishnan (1996) state that manufacturing system flexibility depends on product and process flexibilities: product flexibility is split into operation, sequencing and processing flexibility, and process flexibility is broken down into processor, mix, volume, layout and component flexibility. Koste and Malhotra (1999) define four elements of flexibility: range-number (R-N), range-heterogeneity (R-H), mobility (M), and uniformity (U); ten flexibility dimensions (machine, labor, material handling, routing, operation, expansion, volume, mix, new product, modification) mainly discussed in the literature are matched with these four elements. de Treville et al (2007) define three layers of flexibility: strategic flexibility (how organizations perceive and interpret their environment), tactical flexibility (defining and measuring flexibility, as well as translating flexibility at the strategic level into the technologies, systems, and structures required to realize such flexibility) and operational flexibility (being technically or theoretically capable of varying the process is only the first step toward achieving flexibility). Besides these categorizations, there are various studies mentioning several flexibility types; Table 1 provides a brief list to demonstrate the frequency of flexibility types in the literature.
Table 1 Flexibility types used in literature

Flexibility Type       Mentioned in:
Routing                Browne et al (1984), Parker and Wirth (1999), Fogliatto et al (2003), Zhang et al (2003), Shuiabi et al (2005), Gong and Hu (2008)
Machine                Browne et al (1984), Malhotra and Ritzman (1990), Gupta and Goyal (1992), Nandkeolyar and Christy (1992), Parker and Wirth (1999), Fogliatto et al (2003), Zhang et al (2003), Shuiabi et al (2005), Gong and Hu (2008)
Process                Browne et al (1984), Chen et al (1992), Parker and Wirth (1999), Fogliatto et al (2003), Shuiabi et al (2005)
Product                Browne et al (1984), Chen et al (1992), Parker and Wirth (1999), Fogliatto et al (2003), Shuiabi et al (2005), Gong and Hu (2008)
Volume                 Browne et al (1984), Chen et al (1992), Viswanadham and Srinivasa Raghavan (1997), Parker and Wirth (1999), Fogliatto et al (2003), Zhang et al (2003), Shuiabi et al (2005), Erol and Gürbüz (2009)
Expansion              Browne et al (1984), Parker and Wirth (1999), Fogliatto et al (2003), Shuiabi et al (2005)
Operation              Browne et al (1984), Parker and Wirth (1999), Shuiabi et al (2005)
Production             Browne et al (1984), Fogliatto et al (2003), Shuiabi et al (2005)
Labor                  Malhotra and Ritzman (1990), Gupta and Goyal (1992), Nandkeolyar and Christy (1992), Fogliatto et al (2003), Zhang et al (2003)
Material Handling      Malhotra and Ritzman (1990), Gupta and Goyal (1992), Nandkeolyar and Christy (1992), Zhang et al (2003), Shuiabi et al (2005)
Product-Mix            Chen et al (1992), Dixon (1992), Suarez et al (1995, 1996), Upton (1995), Viswanadham and Srinivasa Raghavan (1997), Fogliatto et al (2003), Zhang et al (2003), Erol and Gürbüz (2009)
New Product            Dixon (1992), Suarez et al (1995, 1996), Upton (1995), Viswanadham and Srinivasa Raghavan (1997), Erol and Gürbüz (2009)
Modification           Dixon (1992), Suarez et al (1995, 1996), Upton (1995)
Delivery Time          Viswanadham and Srinivasa Raghavan (1997)
Delivery               Fogliatto et al (2003)
Program                Fogliatto et al (2003), Shuiabi et al (2005)
Marketing              Shuiabi et al (2005)
Market                 Gong and Hu (2008)
Structural             Gong and Hu (2008)
Manufacturing System   Gong and Hu (2008)
Design                 Erol and Gürbüz (2009)
In this article, the flexibility types which influence the case company are categorized into two groups. The management flexibility category consists of strategic, marketing, R&D, spanning and design flexibility. Strategic flexibility is defined as the organization's capability to identify major changes in the external environment quickly and dedicate resources to cope with these changes. Marketing flexibility is defined as having a high global market share and the ability to sell products in a large number of international and geographic markets. R&D flexibility includes both the design of the product function and the manufacturing technology, as well as the in-
novation of the resource and the research of the basic knowledge; it can therefore be explained as the speed with which the company produces and applies new knowledge and technology. Spanning flexibility assures that different departments or groups (internal and external) coordinate product design, production, and delivery to boost the value of the products for the customers. Design flexibility is the company's ability to change the design of a product very economically and quickly. Product-mix, new product, volume, machine and labor flexibilities are grouped under manufacturing flexibilities. Product-mix flexibility is the ability of the company to produce a different mixture of products in a cost-effective and efficient way with a certain capacity; in today's mass-production environment, it is difficult both to produce in mass numbers and to be product-mix flexible, producing different products during the same planning period. New product flexibility is the ability of a system to add or substitute new products in the product mix; it is a necessity in technology-intensive markets to rapidly design and market several new products simultaneously. Volume flexibility is the ability to change the level of output of a manufacturing process; it shows the competitive potential of the firm to increase production volume to meet rising demand and to keep inventory low as demand falls. Machine flexibility refers to the company's capacity to switch between operations with minimal setups and delays. Labor flexibility is the ability of the workforce to conduct a broad range of manufacturing tasks.
3 Quality Function Deployment

The voice of the customer and the customer requirements are usually neglected in traditional production environments. QFD, introduced by Yoji Akao in 1966, is a structured approach for defining customer needs and transforming them into strategic plans. In short, this methodology answers the questions of which product qualities are required to meet customer desires, which functions the product will serve, and how customer needs can be satisfied. Quality Function Deployment begins with product planning and continues with product design and process design; it then finishes with process control, quality control, testing, equipment maintenance, and training. In order to go through all of these stages successfully, multiple functional disciplines are required, and all the functions of the company should be carefully coordinated for good communication, decision making and production planning. Quality Function Deployment stands out for several strengths: the method looks for both spoken and unspoken customer requirements to increase customer satisfaction; it focuses all product development activities on customer needs; invisible requirements and strategic advantages are made visible; and reduced time to market and fewer design changes result in decreased design and manufacturing costs. Contrary to these advantages, the method also has a few disadvantages. Similar to other Japanese management techniques, some problems can occur when QFD is applied in a western business environment and culture. Customer needs are found
by market survey. If the survey is performed poorly, the whole analysis might be misleading: in applying QFD, it is assumed that the market survey results are accurate and that customer needs, which stay stable during the whole evaluation process, can be documented and captured. In this method, a "house of quality" is developed to set the required design and production capabilities and stages (i.e., marketing strategies, planning, product design and engineering, prototype evaluation, production process development, production, sales) against customer requirements. The house of quality is developed in several steps. First, the product or service attributes demanded by the customer are listed. Then the relative importance of these attributes is weighed, to lean more on the important ones. On one side, customer perceptions are stated in order to benchmark the company's abilities against its competitors. At the top, the engineering capabilities that the firm needs to satisfy customer requirements are listed. In the middle of the house lies the relationship matrix, which matches the customer requirements with the engineering capabilities, and this matrix is filled in. On top of the engineering capabilities another matrix, the correlation matrix, is formed to correlate the engineering capabilities with each other. Objective measures are listed to provide the level of each engineering capability of the competitors. Lastly, technical targets that the company should aim at are filled in for successful production and process planning. A representation of a house of quality is provided in Fig. 1.
Fig. 1 House of Quality Representation
Because this article aims to develop a practical method to leverage managerial and manufacturing flexibilities, the major study on the "house of flexibility" conducted by Olhager and West (2002) is investigated further and used as a basis in the application. In their study, the house of quality is transformed into a house of flexibility, where the steps are converted to find out abilities – competitive prior-
ities, relative importance, customer perceptions, flexibility characteristics, relationship matrix, correlation matrix, objective measures and target measures, respectively. As stated in Olhager and West's work, various quantitative methods have been combined with the QFD methodology to improve its reliability and objectiveness; AHP, multi-attribute utility theory, and linear programming are some examples. In the following section of the article the application and its results are presented. Contrary to the four-stage application of Olhager and West, a simpler two-stage model is preferred. Based on the literature review, the model proposed in this article consists of abilities (stage I) and output flexibility (stage II) in the first house, and output flexibility and the management and manufacturing system (stage III) in the second house. The proposed model is demonstrated in Fig. 2.
Fig. 2 Proposed Model
4 The Application

As mentioned previously, the textile sector contains uncertainty and risk factors. In order to pursue activities in several markets and make large profits, flexibility can be proposed as a solution, and the application is constructed upon this thesis. The Turkish women's ready-to-wear company considered in the application manufactures its products in Turkey and exports to several countries, namely France, Russia, the USA and Poland. The main objective of the firm is to increase its market share and brand awareness in the current markets and to penetrate brand new geographies. The firm targets the high customer segment and aims to respond to all of the customer needs on time. It does not focus on mass production but operates like a boutique; as a result, it should make the best use of its resources, facilities and capabilities. For all of the issues stated above, the firm should have a flexible structure in order to respond to customer needs effectively and quickly. The purpose of the application is to determine customer expectations, to find the flexibility types and system factors that meet these expectations, and to form the best flexibility portfolio to increase profits and customer satisfaction. In this application, at the initiation step, the customer needs are discussed with the marketing, purchasing, finance and R&D department managers and the general coordinator of the case company. The most prominent customer needs in the high consumer segment are chosen by these experts. Then a questionnaire is prepared
and the selected customer needs are evaluated. To handle both the complexity of the group decision-making process and the fuzziness of the evaluation, the fuzzy analytic hierarchy process (FAHP) is applied. This method is a further development of Saaty's Analytic Hierarchy Process (Saaty, 1980), one of the most commonly used multi-criteria decision-making methods in the literature. Although AHP is an easy approach for ranking alternatives, the experiences and judgments of the users are in fact linguistic and vague, and the method cannot fully grasp the users' evaluation patterns. FAHP represents user preferences better, turning such judgments into quantitative data through fuzzy set theory (Buckley, 1985; Chang, 1996). For these reasons, FAHP is preferred to rank the customer needs and is applied to solve the hierarchically structured problem. The hierarchy scheme is presented in Fig. 3.
Fig. 3 Hierarchy scheme
Vague data are represented by triangular fuzzy numbers (TFN) in the application. Each membership function is defined by three parameters (L, M, U), where L is the lowest possible value, M is the middle possible value and U is the upper possible value in the decision makers' interval judgments (Alias et al, 2009); the membership functions are provided in Table 2. The proposed TFN and the linguistic variables related to Saaty's scale of preference (Saaty, 1980) are shown in Table 3.

Table 2 Membership function of the fuzzy numbers
Fuzzy Number   Membership Function
1˜             (1, 1, 2)
x˜             (x − 1, x, x + 1) for x = 2, 3, 4, 5, 6, 7, 8
9˜             (8, 9, 9)
Table 3 Proposed TFN and linguistic variables
Saaty's scale of relative importance   Linguistic variable                                   TFN
1                                      Equal importance                                      (1, 1, 2)
3                                      Moderate importance                                   (2, 3, 4)
5                                      Essential importance                                  (4, 5, 6)
7                                      Demonstrated importance                               (6, 7, 8)
9                                      Extreme importance                                    (8, 9, 9)
2, 4, 6, 8                             Intermediate values between two adjacent judgments    (1, 2, 3), (3, 4, 5), (5, 6, 7) and (7, 8, 9)
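As an illustration of how such fuzzy pairwise judgments can be turned into crisp weights, the sketch below applies a Buckley-style (1985) fuzzy geometric-mean procedure to a small comparison matrix. The 3×3 matrix is invented for exposition only, and the study itself may use a different FAHP variant (for example Chang's (1996) extent analysis), so this is a sketch of the general mechanics rather than the paper's computation.

```python
import math

# Minimal FAHP sketch (Buckley-style geometric mean); the 3x3 matrix of
# triangular fuzzy numbers (l, m, u) below is invented for illustration only.
comparisons = [
    [(1, 1, 1),        (2, 3, 4),       (4, 5, 6)],
    [(1/4, 1/3, 1/2),  (1, 1, 1),       (2, 3, 4)],
    [(1/6, 1/5, 1/4),  (1/4, 1/3, 1/2), (1, 1, 1)],
]

# Fuzzy geometric mean of each row, component-wise over (l, m, u).
geo = [tuple(math.prod(row[j][k] for j in range(len(row))) ** (1 / len(row))
             for k in range(3))
       for row in comparisons]

# Fuzzy weights: divide each row's geometric mean by the column-wise sums
# (l by the sum of u's, u by the sum of l's, to preserve the fuzzy ordering).
sums = [sum(g[k] for g in geo) for k in range(3)]
fuzzy_w = [(g[0] / sums[2], g[1] / sums[1], g[2] / sums[0]) for g in geo]

# Defuzzify each weight by the centre of its triangle and normalize.
crisp = [(l + m + u) / 3 for (l, m, u) in fuzzy_w]
weights = [c / sum(crisp) for c in crisp]
print([round(w, 3) for w in weights])
```

A procedure of this kind, applied to the nine customer-need comparisons, produces normalized weights such as those reported in Table 4.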
The normalized weight vectors of the customer needs are calculated and the results are presented in Table 4.

Table 4 Normalized weight vectors of the customer needs
Criterion                            Relative Weight
High Quality                         0.159
Feasible Price                       0.094
Trendy                               0.118
Marketing Activities                 0.087
Innovative                           0.108
Convenient and Comfortable           0.094
Responsiveness to Customer Needs     0.102
Product Variety                      0.114
Good Design                          0.124
As seen from the relative weights in Table 4, in the high customer segment quality and good design are the most significant criteria, followed by trendiness and product variety. It is evident that the product's trendiness, design and similar attributes are more important for the targeted consumers than its price. Following the ranking of the customer needs, the house of flexibility model, based on the work of Olhager and West (2002), is applied to match these needs with the flexibility capabilities the firm can acquire. The relative weights calculated with FAHP are used in the first house of flexibility as the weights of the customer needs. The flexibility levers are selected by the same experts, who are asked to evaluate the relationship between the flexibility levers and the customer needs. Non-evaluated boxes indicate that there is no relationship between the need and the flexibility lever; a value of "1" indicates little correlation, "3" more correlation and "9" great correlation. The evaluation of the experts is provided in Fig. 4, and the results of the first house of flexibility are provided in the following two tables (Tables 5 and 6).
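The arithmetic behind these lever weights is a straightforward weighted sum, sketched below: each lever's raw score is the sum over customer needs of (need weight × relationship strength), and the scores are normalized across levers. The relationship values used here are invented placeholders, since the actual expert ratings appear only in Fig. 4; only the three need weights are taken from Table 4.

```python
# Illustrative house-of-flexibility computation; the 1/3/9 relationship
# values below are placeholders, not the expert ratings from Fig. 4.
need_weights = {"High Quality": 0.159, "Good Design": 0.124, "Trendy": 0.118}
relationships = {                      # need -> {flexibility lever: strength}
    "High Quality": {"Spanning": 9, "R&D": 3},
    "Good Design":  {"Spanning": 3, "R&D": 9, "Design": 9},
    "Trendy":       {"R&D": 3, "Design": 3, "New Product": 9},
}

raw = {}
for need, weight in need_weights.items():
    for lever, strength in relationships[need].items():
        raw[lever] = raw.get(lever, 0.0) + weight * strength

total = sum(raw.values())
relative = {lever: round(100 * score / total, 2) for lever, score in raw.items()}
print(relative)   # relative weights (%) of the flexibility levers
```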
Fig. 4 House of flexibility model (First House)

Table 5 Relative weights of the management flexibility levers
Flexibility Category     Column Number   Flexibility Lever       Relative Weight (%)
Management Flexibility   4               Spanning Flexibility    22.82
                         3               R&D Flexibility         17.78
                         2               Market Flexibility      7.79
                         1               Strategic Flexibility   7.82
                         5               Design Flexibility      6.13

Table 6 Relative weights of the manufacturing flexibility levers
Flexibility Category        Column Number   Flexibility Lever         Relative Weight (%)
Manufacturing Flexibility   7               New Product Flexibility   12.52
                            6               Product-Mix Flexibility   8.04
                            9               Machine Flexibility       7.41
                            8               Volume Flexibility        5.84
                            10              Labor Flexibility         3.67
In the two tables presented above, it is seen that the most significant flexibility levers for meeting the customer needs are spanning, R&D, new product and product-mix flexibility. The five experts are then asked to determine and evaluate system factors which satisfy the flexibility types listed in the first house of flexibility. Below, the selected system factors, namely capacity, product development process, production process, equipment technology, supply chain, organization structure, human resources and information systems, are described briefly. As stated before, the case company operates like a boutique and is therefore expected to manufacture approximately 500 unique models in low quantities each season. These unique models are manufactured using different cloth types and manufacturing techniques, so it can be stated that for this company 500 different production processes exist. Because of this process variety, manufacturing factors such as production capacity, production equipment technologies and the workforce should be managed effectively to achieve manufacturing flexibility. The product development process consists of all the operations between supply research and the design of the final prototype model. The procedures from the delivery of the final prototype model to the delivery of the final product to the warehouse, such as fabric cutting, sewing, quality control, production planning, etc., are defined as the production processes. The supply chain is the system comprising all of the processes between the purchasing of supplies and the delivery of the end product to the customer; the case company has a wide supply chain, with some supplies imported, production facilities located in Turkey, and the end products exported to stores located in several countries. The organization structure, another system factor, is the hierarchical structure of the departments in the firm, where different functions, processes and responsibilities are distributed among several parties. The case company has a functional organizational structure, and employing the right human resources in the appropriate departments should be one of the most important capabilities of the firm. Enterprise resource planning, customer relationship management, accounting and back-office tools are used as information systems to retrieve data, process it into information and supply this information to the end users. In today's business environment, managing accurate information on time brings competitive advantage; therefore, information systems are required and significant. The evaluation of these system factors with respect to the flexibility types is provided below (Fig. 5).

Table 7 Importance weight of system factors
1  Supply chain                   13.15%
2  Information system             8.95%
3  Organization structure         21.26%
4  Human resource                 14.67%
5  Capacity                       6.77%
6  Equipment technology           11.07%
7  Product development process    15.83%
8  Production process             8.31%
The final results of the second house of flexibility are given by the relative importance values in Table 7. The most important factor for the case company is the organization structure, followed by the product development process, human resources and the supply chain. By solving the second house of quality, the flexibility levers are broken down into system factors, and the significant elements required
for the case company to become flexible in the high consumer segment are discovered. At this point the application is concluded, although the third and fourth levels (houses), which are detailed in Olhager and West's (2002) work, could have been applied for further analysis.

Fig. 5 House of flexibility model (Second House)
5 Conclusion

Today, manufacturing enterprises face severe competition and cannot survive in the market unless they respond to internal and external changes quickly. As business conditions fluctuate and technological developments occur, customers become less predictable in their purchasing behavior; they expect the utmost from manufacturers and use their buying power to impose their demands. In order to survive the competition and to increase market share, flexibility is proposed as a solution in many academic studies. The textile sector is a good example of an environment which possesses all of the challenges stated above. It is one of the fastest growing sectors in Turkey, and the industry is very important for the country's economy due to its large share in exports. For these reasons, a Turkish textile firm is selected for the application and the best flexibility portfolio is investigated.
As the methodology, a decision-making model is developed by combining the fuzzy analytic hierarchy process (FAHP) and quality function deployment (QFD). The application is conducted for a Turkish women's ready-to-wear firm in the high consumer segment, and its purpose is to find the best flexibility lever portfolio that satisfies customer needs and provides a quick solution for dealing with the uncertainties and risks this company faces. First, the customer needs are ranked using FAHP; then the flexibility levers matching the customer needs are weighted via a two-phased QFD based on the study of Olhager and West (2002); and lastly, the management and manufacturing system factors are evaluated. The results of this evaluation point out that customers in this segment value product design, trendiness and similar product characteristics more than price. Management and manufacturing flexibilities should be incorporated as a whole for better evaluation of the customer needs. According to the first house of flexibility constructed, spanning flexibility, the ability to make sure that different departments or groups (internal and external) coordinate product design, production, and delivery to boost the value of the products for the customers, is the most significant flexibility lever. As long as the design, production, marketing and other departments work in coordination, the company becomes agile and flexible in responding to environmental changes. The relative weights of the other flexibility levers can be found in Tables 5 and 6. The second house of flexibility is constructed to evaluate the flexibility levers with respect to the system factors; according to the evaluation of the experts, organization structure is found to be the most important factor. The results of the application are shared and discussed with the experts. In future work, the results of the combined methodology in this study will be used to construct portfolios. These portfolios will be evaluated via real options, a quantitative valuation method which incorporates uncertainties such as market demand, labor supply and cost, material supply and cost, inflation, etc., and does not ignore the effect of flexibility in decision-making processes. This approach will enable the firm to choose the best portfolio and to satisfy the customer needs. Acknowledgements This research has been financially supported by the Galatasaray University Research Fund.
References

Alias M, Hashim S, Samsudin S (2009) Using fuzzy analytic hierarchy process for southern Johor River ranking. Int J Adv Soft Comp Appl 1(1):62–76
Backhouse C, Burns N (1999) Agile value chains for manufacturing – implications for performance measures. International Journal of Agile Management Systems 1(2):76–82
Beach R, Muhlemann A, Price D, Paterson A, Sharp J (2000) A review of manufacturing flexibility. European Journal of Operational Research 122(1):41–57
Benjaafar S, Ramakrishnan R (1996) Modelling, measurement and evaluation of sequencing flexibility in manufacturing systems. International Journal of Production Research 34(5):1195–1220
Bernardes E, Hanna M (2009) A theoretical review of flexibility, agility and responsiveness in the operations management literature. International Journal of Operations and Production Management 29(1):30–53
Browne J, Dubois D, Rathmill K, Sethi S, Stecke K (1984) Classification of flexible manufacturing systems. The FMS Magazine 2(2):114–117
Buckley J (1985) Fuzzy hierarchical analysis. Fuzzy Sets and Systems 17(3):233–247
Chandra C, Everson M, Grabis J (2005) Evaluation of enterprise-level benefits of manufacturing flexibility. Omega 33(1):17–31
Chang D (1996) Applications of the extent analysis method on fuzzy AHP. European Journal of Operational Research 95(3):649–655
Chen I, Calantone R, Chung C (1992) The marketing-manufacturing interface and manufacturing flexibility. Omega 20(4):431–443
Cox T (1989) Toward the measurement of manufacturing flexibility. Production and Inventory Management Journal 30(1):68–72
Dixon J (1992) Measuring manufacturing flexibility: an empirical investigation. European Journal of Operational Research 60(2):131–143
Erol GM, Gürbüz T (2009) Finding the best flexibility strategies by using an integrated method of FAHP and QFD. IFSA/EUSFLAT Conference pp 1126–1131
Fogliatto F, Silveira G, Royer R (2003) Flexibility-driven index for measuring mass customization feasibility on industrialized products. International Journal of Production Research 41(8):1811–1829
Genevois E, Gürbüz T (2009) Finding the best flexibility strategies by using an integrated method of FAHP and QFD. In: IFSA/EUSFLAT Conference, pp 1126–1131
Gerwin D (1987) An agenda for research on the flexibility of manufacturing processes. International Journal of Operations and Production Management 7(1):39–49
Gerwin D (1993) Manufacturing flexibility: a strategic perspective. Management Science 39(4):395–408
Gong Z, Hu S (2008) An economic evaluation model of product mix flexibility. Omega 36(5):852–864
Gupta Y, Goyal S (1989) Flexibility of manufacturing systems: Concepts and measurements. European Journal of Operational Research 43(2):119–135
Gupta Y, Goyal S (1992) Flexibility of manufacturing systems: Concepts and measurement. European Journal of Operational Research 60(2):166–182
Gupta Y, Somers T (1996) Business strategy, manufacturing flexibility, and organizational performance relationships: a path analysis approach. Production and Operations Management 5(3):204–233
Hill T (1995) Manufacturing strategy: text and cases. Macmillan, London
Hyun J, Ahn B (1992) A unifying framework for manufacturing flexibility. Manufacturing Review 5(4):251–259
Kanoglu S, Öngüt C (2003) Dünya'da ve Türkiye'de tekstil-hazir giyim sektörleri ve Türkiye'nin rekabet gücü. Tech. Rep. 2668, Devlet Planlama Teskilati Iktisadi Sektörler ve Koordinasyon Genel Müdürlügü
Koste L, Malhotra M (1999) A theoretical framework for analyzing the dimensions of manufacturing flexibility. Journal of Operations Management 18(1):75–93
Malhotra M, Ritzman L (1990) Resource flexibility issues in multistage manufacturing. Decision Sciences 21(4):673–690
Mascarenhas M (1981) Planning for flexibility. Long Range Planning 14(5):78–82
Nagarur N (1992) Some performance measures of flexible manufacturing systems. International Journal of Production Research 30(4):799–809
Nandkeolyar U, Christy D (1992) An investigation of the effect of machine flexibility and number of part families on system performance. International Journal of Production Research 30(3):513–526
Newman W, Hanna M, Maffei M (1993) Dealing with the uncertainties of manufacturing: flexibility, buffers and integration. International Journal of Operations and Production Management 13(1):19–34
Olhager J (1993) Manufacturing flexibility and profitability. International Journal of Production Economics 30:67–78
Olhager J, West B (2002) The house of flexibility: using the QFD approach to deploy manufacturing flexibility. International Journal of Operations and Production Management 22(1):50–79
Parker R, Wirth A (1999) Manufacturing flexibility: measures and relationships. European Journal of Operational Research 118(3):429–449
Ramasesh R, Jayakumar M (1991) Measurement of manufacturing flexibility: a value based approach. Journal of Operations Management 10(4):446–467
Saaty T (1980) The analytical hierarchy process. McGraw-Hill, New York
Sethi A, Sethi S (1990) Flexibility in manufacturing: a survey. International Journal of Flexible Manufacturing Systems 2(4):289–328
Shuiabi E, Thomson V, Bhuiyan N (2005) Entropy as a measure of operational flexibility. European Journal of Operational Research 165(3):696–707
Stanev S, Krappe H, Ola H, Georgoulias K, Papakostas N, Chryssolouris G, Ovtcharova J (2008) Efficient change management for the flexible production of the future. Journal of Manufacturing Technology Management 19(6):712–726
Suárez F, Cusumano M, Fine C (1991) Flexibility and performance: a literature critique and strategic framework. Working papers pp 50–91
Suarez F, Cusumano M, Fine C (1995) An empirical study of flexibility in manufacturing. Sloan Management Review 37(1):25–32
Suarez F, Cusumano M, Fine C (1996) An empirical study of manufacturing flexibility in printed circuit board assembly. Operations Research 44(1):223–240
Swamidass P, Newell W (1987) Manufacturing strategy, environmental uncertainty and performance: a path analytic model. Management Science 33(4):509–524
de Treville S, Bendahan S, Vanderhaeghe A (2007) Manufacturing flexibility and performance: bridging the gap between theory and practice. International Journal of Flexible Manufacturing Systems 19(4):334–357
Upton D (1994) The management of manufacturing flexibility. California Management Review 36(2):72–89
Upton D (1995) Flexibility as process mobility: the management of plant capabilities for quick response manufacturing. Journal of Operations Management 12(3–4):205–224
Vickery S, Dröge C, Markland R (1997) Dimensions of manufacturing strength in the furniture industry. Journal of Operations Management 15(4):317–330
Viswanadham N, Srinivasa Raghavan N (1997) Flexibility in manufacturing enterprises. Sadhana 22(2):135–163
Wadhwa S, Rao K (2003) Enterprise modeling of supply chains involving multiple entity flows: role of flexibility in enhancing lead time performance. Studies in Informatics and Control 12(1):5–20
Wiendahl H, Hernandez R (2002) Fabrikplanung im Blickpunkt: Herausforderung Wandlungsfähigkeit. Werkstattstechnik. URL http://www.technikwissen.de/wt
Zhang Q, Vonderembse M, Lim J (2002) Value chain flexibility: a dichotomy of competence and capability. International Journal of Production Research 40(3):561–583
Zhang Q, Vonderembse M, Lim J (2003) Manufacturing flexibility: defining and analyzing relationships among competence, capability, and customer satisfaction. Journal of Operations Management 21(2):173–191
Modular Product Architecture: The Role of Information Exchange for Customization

AHM Shamsuzzoha and Petri T. Helo
Abstract In order to comply with growing customization, firms are looking for flexible design architectures in their product development (PD) processes. Such flexible design architecture enables an easier, cheaper and faster PD process. The basic requirement for design flexibility is to ensure proper information flow within the design architecture. The role of information exchange influences the basic architecture of the product development process, which in turn affects the general theme of product customization. In today's volatile business environment, proper information management can add value for manufacturing firms. This paper deals with the importance of information flow modeling over product architecture. The basic principles of product architecture and its effect on the product customization process are reported. A case study is presented within the scope of this paper with a view to modeling an information tracking system applicable to design architecture and product customization. Key words: product architecture, modular design, design structure matrix (DSM), domain mapping matrix (DMM), customization
AHM Shamsuzzoha (B) and Petri T. Helo
Department of Production, University of Vaasa, Finland
e-mail: [email protected]; [email protected]

1 Introduction

1.1 Background and Motivation

Globalization of the business environment exerts extra pressure on the practice of product development across a wide range of firms. In this environment, customers are
choosier, seeking individualized products or services for their essential needs (Buxey, 2000; Kumar and Liu, 2005; Kleinschmidt et al, 2007; Panchal and Fathianathan, 2008). This represents a major transformation for business, where manufacturers are forced to customize their products to some degree for increasingly selective customers or to compete in niche markets (Pine, 1993; Pine and Gilmore, 2007; Kumar, 2007; Piller, 2007). However, products that are not designed well after consulting potential customers can lead to a slow and costly customization process (Dellaert and Stremersch, 2005; Piller et al, 2006; Pollard et al, 2008). This can damage the business and consequently cause revenue losses for the firms concerned. In order to achieve a higher customization level, firms need to coordinate their existing product architecture and the level of information flow within the complete production process. This coordination can be facilitated by managing valuable information exchange between designers, manufacturers and customers. Appropriate tools or methods are required to access, control and formulate this information exchange. Efficient and effective information flow makes it possible to design and fabricate custom-built products with higher customer satisfaction (Eppinger et al, 1994; Forza and Salvador, 2001). Insufficient information normally leads to designers not working with the current version of the product specifications, especially for more complex products. This lag in information processing means that the components or parts designed by one designer may not match those of other designers, owing to the lack of coordination and communication among them. It is therefore crucial to make information available at the very early stages of product development. The motivation of this research is to elaborate on how to design customized products within the scope of manufacturers' competitive strategies. We suggest offering both designers and managers up-to-date tools and technologies so that they can validate their designs against most of their customers' needs and wishes. Up-to-date methodologies and tools help designers, manufacturers and customers keep track of the information flow among product development participants (Söderquist and Nellore, 2000; Kiritsis et al, 2003; Finne, 2006). Along with participatory design, tracking information exchange and choosing an appropriate design architecture and strategy can contribute to a successful business.
1.2 Overview of Product Architecture

Product architecture can be defined as a scheme by which the function of a product is mapped to physical components (Ulrich, 1995). Its essential feature is to define the specification of the interfaces among interacting physical components. It is often established during the system-level design phase of the product development process (Mikkola, 2000). The product architecture plays a key role in the managerial decision-making process and serves as a basic performance indicator for manufacturing firms. It enables the product planner to position different issues of product
design such as costing, lead time, changeability, etc., and manages the complexity of design problems. It is therefore very important to choose an appropriate product architecture in order to remain competitive. Design theory distinguishes two types of product architecture, namely integral and modular. In an integral architecture, there is a complex (non one-to-one) mapping between the functional elements and the physical components of the product. In this case, the interfaces shared between the components are tightly coupled (Ulrich, 1995; Chmarra et al, 2008). In this architecture, it is not possible to change any part or component without changing other parts or components, which results in less flexibility in the design philosophy. Conversely, a modular product architecture includes a one-to-one mapping from functional elements to the physical components of the product. This architecture enables firms to achieve functional changes with minimum physical changes to parts or components. Modular product architecture offers an approach to architecting a product family that shares interchangeable modules. In this approach, products are developed from a set of independent components or modules connected via defined interfaces (Ulrich et al, 1995). Modular architecture gives rise to a product platform, from which derivative products are created easily through the substitution of add-on modules (Jiao and Tseng, 2000; Dahmus et al, 2001; Muffatto and Roveda, 2002). This type of architecture is predominantly found in technology-intensive industries such as telecommunications, electronics or the automobile sector (Sanchez and Mahoney, 1996; Staudenmayer et al, 2005). The concept of modular architecture is strongly aligned with technological innovation and is considered a source of product innovation (Afuah, 1998; Chesbrough and Prencipe, 2008). It enhances customization through transforming the product development process from integral to modular design.
1.3 Information Management Perspective

Information management facilitates the exchange and updating of relevant product data, which is characterized by product specifications and co-ordination among design elements. This approach is devoted to delivering the right information within a particular time frame in order to coordinate the product development participants. Insufficient and inefficient communication generally leads to the problem of designers or engineers not working with the predefined specifications, especially for complex products. In such situations, an intrinsic communication pattern among the product development participants is needed for developing quality products on time. In general, the ability to access and exchange information among various parties such as vendors, customers and manufacturers is an issue that needs to be addressed. Breaking down the barriers to information exchange is beneficial to the collaboration between development partners, which ensures quality products and services for potential customers (Joglekar and Yassine, 2001; Yassine, 2004).
Generally, product design activity proceeds without considering information from its predecessor activities, which may flaw the design performance and deteriorate the development activities (Ha and Porteus, 1995). Two types of design information are commonly distinguished in the design literature, namely static and dynamic (Yassine et al, 2008). In the static situation, design information does not change over time, whereas in the dynamic situation information evolves over time. The rest of the paper is organized as follows: in Section 2 the research scope and objectives are presented, while in Section 3 the theoretical framework of this research is stated. Section 4 gives an overview of the information management tool, the DSM. Section 5 illustrates empirical research performed in a case company, whereas Section 6 outlines the results and limitations of the case study. The overall outcomes of this research are discussed, and conclusions and future research directions are drawn, in Section 7.
2 Research Scope and Objectives

The aim of this research is to provide insights into the modeling of information exchange in product design and development and to present how this information exchange can be modeled conveniently. Along with the information perspective, the applicability and suitability of a modular design architecture are also outlined in this research. Both approaches are stated with a view to developing and delivering customized products to potential customers. We have therefore articulated two research objectives within the scope of this paper as follows:
(i) to describe the importance and usability of information exchange among product development participants;
(ii) to formulate a modular design architecture and its applicability in customized product design and development.
In order to address these study objectives, two research questions are proposed:
RQ 1: How does managing information exchange affect the implementation of product development processes?
RQ 2: How can a modular product development strategy be modeled for implementing product customization?
3 Theoretical Framework

3.1 Modularity and Product Customization

Due to shorter product life cycles and increasing global competition, firms have begun to select appropriate strategies for their business environments. The success of industrial firms depends on their products' prices, differentiation, competitive positioning, quality, etc. (Lee and Zhou, 2000; Jiao et al, 2007; Calantone and Di Benedetto, 2007). In this business environment, a firm with a higher clock speed will enhance its existing product development strategies to hedge against market competition. Adopting particular strategies such as customization, gaining market share, technology enhancement, risk management, etc., is a key option for business success (Broekhuizen and Alsem, 2002; Slater and Mohr, 2006; Amoako-Gyampah and Acquaah, 2008). Negotiable strategies can help firms deal with uncertainties and business risks in such a competitive market. In today's customization era, firms cannot focus on a mass production strategy; rather, they have to provide products that are better adapted to individual customers' aesthetic and functional preferences (Piller, 2007; Franke and Schreier, 2008). To fulfill the requirements of mass customization, manufacturing firms need to consider several development strategies such as modularity, platform-based product development, creating product families, component standardization, etc. (Karandikar and Nidamarthi, 2007; Lau Antonio et al, 2007; Zacharias and Yassine, 2008; Shamsuzzoha, 2010). All these strategies need to be implemented according to the objectives and goals of individual firms, based on their customer requirements, production complexities and volumes. Individual firms might adopt and implement a single strategy or multiple strategies according to their customers' requirements. The adopted strategy or strategies need to be focused on developing customized products. McCutcheon et al (1994) suggested that modular product design is the best way to provide product variety and production speed, which facilitates the customization process by fulfilling customer demands for variety and reducing delivery times simultaneously. Modular design can also reduce the number of interfaces and the variety of components while offering a greater range of final products (Duray et al, 2000). This design architecture allows a variety of products to be developed by providing a base architecture known as a platform. Implementing such a platform offers important family design savings and easy manufacturing (Jose and Tollenaere, 2005). Other strategic options like component commonality (Thevenot and Simpson, 2007; Shamsuzzoha, 2009), product platforms (Simpson, 2004; Huang et al, 2005), product families (Meyer et al, 1997; Jiao and Tseng, 2000), product postponement (Yang et al, 2004; Su et al, 2005), etc., contribute highly to customized product design and development with lower cost and reduced lead time. Product customization can be defined as producing a physical good or a service that is tailored to a particular customer's requirements. It is nowadays a driving force for growth and a key performance indicator for
firms. The various regulators that influence the control of customized products are displayed in Fig. 1. It is obvious that product customization needs a continuous flow of information between the customers and the manufacturers of the products or services. This information needs to be prescreened before being considered for any design changes. Offering customized products seems to be a source of competitive advantage because the ability to develop customer-tailored products can be marketed as a differentiating and distinctive capability that provides customers with superior value (Blecker et al, 2005).
Fig. 1 Product customization framework [schematic: regulators such as product information, product configuration, customer requirements, knowledge transfer and production data control the customized product delivered to the end customer]
3.2 Modularization and Module Drivers

Modularization can be defined as the opportunity for mixing and matching components in a modular product design in which the standard interfaces between components are specified to allow a range of component variations to be substituted in the product architecture (Mikkola, 2000; Fixson, 2007). In this process the module, which is a structurally independent building block of a larger system with well-defined interfaces, provides flexibility without changes to the overall design (Ericsson and Erixon, 1999; Baldwin and Clark, 2000). Through product modularization, firms can create many product variants by assembling different modules within a short product development lead time (Simon, 1962; Sanchez and Mahoney, 1996; Baldwin and Clark, 1997; Salvador et al, 2002). A modular system boosts the rate of innovation, as it reduces the time needed to respond to competitors and can spur innovation in design as firms can independently experiment with new products and ideas (Baldwin and Clark, 1994). The modularization process reduces the cost of developing product variety by combining old and new versions of modules or sub-assemblies within a firm. This process of mixing and matching can be a source of learning for firms and allows them to specify the required rules for inter-module interfacing. A modular architecture allows customers to take advantage of interchangeable components or modules rather than accepting the complete package preselected by the manufacturer. There are several drivers that convince manufacturers to adopt the modular approach in their respective firms. These drivers may vary according to the
requirements of individual firms. Generic module drivers can be identified as technology evaluation, planned product changes, styling, after-sales service, separate testing, etc. (Ericsson and Erixon, 1999). For instance, if a firm offers after-sales service to its customers, this will encourage the adoption of a modular architecture, which facilitates such operations. Another example is the module driver of separate testing: if a firm wants to test some of its important subassemblies separately, it can proceed towards adopting the modular principle.
4 Information Management Tool: Design Structure Matrix (DSM)

In order to model the information flow within the product architecture, we have used the matrix tool DSM developed by Steward (1981). This tool helps to display the interactions or information exchanges among design elements and to process them for developing the modules required for modular product development. The DSM method differs from conventional design tools such as PERT, CPM, Gantt charts, etc., in that it focuses on representing information flows rather than work flows (Yassine et al, 2004). This method is an information flow model which allows the representation of complex task or team relationships with a view to ordering or grouping the tasks or teams being modeled. It improves the planning, execution and management of complex product development processes.
Fig. 2 a Component interdependencies graph; b DSM representations of components interdependencies graph (un-clustered)
The DSM can be implemented to display a compact, matrix representation of a design architecture or a project network (Steward, 1981). Four types of DSM are commonly used: activity-based, parameter-based, team-based and product architecture (component)-based (Browning, 2001). This tool is implemented for tracking the flow of information dependencies among design elements or project tasks.
The DSM provides insights into a complex design structure and supports clustering for developing modules. The representation of the DSM tool can be explained with an example as depicted in Fig. 2 below. Figure 2a displays a simple graph of component interdependencies and its equivalent DSM representation is displayed in Fig. 2b. Figure 2a displays nine components of a product, namely A, B, C, D, E, F, G, H, I, and their interdependencies. For instance, component C needs information from component B, while components A and H need information from component C to be completed. This is presented in matrix format in Fig. 2b. All other information exchanges or interdependencies among the components are displayed in Fig. 2b. In order to reduce the iteration time, upper-diagonal marks need to be brought as close as possible to the diagonal line. This is done by clustering, where the rows and corresponding columns are rearranged to obtain clusters or modules. Figure 3 shows two overlapping clusters.
Fig. 3 DSM representations of component interdependencies (clustered)
[Clustered DSM with rows and columns reordered as A, C, B, F, I, G, H, D, E, concentrating the interdependency marks near the diagonal]
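Building and inspecting such a matrix is easy to automate. The following minimal Python sketch is an illustration only, not part of the original study or the PSM32 tool: it encodes just the dependencies named in the Fig. 2 example above, under the assumed convention that a mark in row i, column j means that component i needs information from component j.

```python
import numpy as np

# Components of the Fig. 2 example and the dependencies named in the text;
# the full figure contains further links that are not reproduced here.
components = list("ABCDEFGHI")
idx = {c: i for i, c in enumerate(components)}

dependencies = [
    ("C", "B"),  # C needs information from B
    ("A", "C"),  # A needs information from C
    ("H", "C"),  # H needs information from C
]

# dsm[i, j] = 1 means component i needs information from component j.
dsm = np.zeros((len(components), len(components)), dtype=int)
for needs_info, provides_info in dependencies:
    dsm[idx[needs_info], idx[provides_info]] = 1

# Print the un-clustered DSM with row/column labels, as in Fig. 2b.
print("   " + "  ".join(components))
for c in components:
    print(c + "  " + "  ".join("X" if v else "." for v in dsm[idx[c]]))
```

Rearranging the rows and columns of this matrix so that the marks lie close to the diagonal is exactly the clustering step illustrated in Fig. 3.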
In DSM analysis, partitioning and clustering operations are done from different perspectives. Partitioning is the sequencing (i.e. reordering) of the DSM rows and columns in such a way that the matrix does not contain any feedback marks, thus transforming the DSM into a lower triangular form (Steward, 1981; Yassine and Falkenburg, 1999). The reason behind this transformation is the significance of the upper-diagonal marks, which represent feedback information flows. Partitioning is used for time-based and parameter-based DSMs. On the other hand, when the DSM elements represent design components or teams, the goal of the matrix manipulation changes significantly from that of partitioning algorithms. The new goal becomes to find subsets of DSM elements (i.e. clusters or modules) that are mutually exclusive or minimally interacting. This process is termed clustering (Yassine et al, 2004). Clustering is done for team-based and component-based DSMs (Browning, 2001).
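For the partitioning case, a DSM without cyclic information flows can be brought into lower triangular form simply by ordering the elements topologically. The sketch below uses Python's standard-library graphlib on the same illustrative dependencies; it is a simplified stand-in for the partitioning algorithms cited above, not a reproduction of them.

```python
from graphlib import TopologicalSorter

# Map each element to the set of elements it needs information from.
needs_info_from = {
    "A": {"C"}, "B": set(), "C": {"B"}, "D": set(), "E": set(),
    "F": set(), "G": set(), "H": {"C"}, "I": set(),
}

# In this order every element appears after its information sources, so a
# DSM re-indexed by it has all marks below the diagonal (no feedback).
order = list(TopologicalSorter(needs_info_from).static_order())
print(order)
```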
5 Empirical Research: A Case Example

This empirical research was conducted in a case company with a view to addressing the two research questions and to displaying how these could be implemented for the benefit of the case company. All the research questions and the related underlying theoretical aspects of the study are addressed within the scope of this case study. Both theoretical and practical analysis and interpretation have been developed in an iterative manner within this empirical study. The studied company is a global leader in complete lifecycle power solutions for the marine and energy markets. It produces diesel engines of 6 to 18 cylinders with capacities varying from 2880 kW to 9000 kW according to customers' requirements. The study was built around a critical investigation of the proposed research issues, namely the information perspective and the modular design strategy essential for developing customer-specific products. In order to examine the research perspectives, real data were collected from the case company and modeled to test and verify the study issues. The research work presented in this paper was limited by the required level of confidentiality: the work could not be presented in detail due to possible violations of confidentiality in the case organization. From this perspective, broad coverage of the research has clearly been a challenge. This constraint limits the interpretation of the results, which may not be applicable in a generic form.
5.1 Study Method or Approach

This empirical research was conducted through active participation in the daily assembly operation of the case company. The objectives of this study were to investigate the suitability of the existing engine architecture and to suggest to designers and managers the possibility of adopting an optimum level of modular design based on component interdependencies. In order to fulfill these objectives, the existing component-to-component interfaces were studied and mapped into matrix format using the DSM tool named "PSM32". The company's existing dependency pattern among components was collected through face-to-face interviews with engineers, designers, workers and managers, and also from the company's standard register. During the study period, several meetings were organized within the case company to discuss the existing engine architecture, component interfacing, the design strategy of the developed engines, customers' preferences, the company's business strategy and current bottlenecks or problems. These discussions led to possible improvements and recommendations regarding the existing design architecture and strategic issues within the company. Two specific issues were identified within the scope of this research study: firstly, to investigate the existing information exchange structure among engine components and how it affects the overall design architecture; secondly, to assess the suitability of a modular design architecture and how it applies to producing customized engines.
5.2 Deployment Example

The information interdependencies between engine components, as collected from the case company, were modeled with the DSM tool. For analytical purposes, we have modeled all 218 components of the case company's engine; their corresponding interactions or dependencies are displayed in Fig. 4. Each of the components was placed in a row and corresponding column of the matrix. The dependencies of each component on the others were marked with three different numbers according to level: "1" representing the highest, "2" a medium and "3" the lowest dependency level. All these dependencies were collected by exchanging information with the workers, designers, engineers and managers of the company. There is a possibility of bias in the dependency levels from one person to another, which could affect the actual results of this study; however, every effort was made to make the information dependencies as consistent as possible. In order to investigate the modular principle, we clustered all the components in Fig. 4 by a clustering operation in which the components are placed as close as possible to the matrix diagonal. This clustering operation is very useful for forming the required clusters or modules within the developed product, the engine in this case. The developed modules then guide designers on how the specific components are joined together to form the target module. The formation of modules also guides the assemblers on how to interface them in order to develop
Fig. 4 Case company’s engine component dependency (un-clustered)
the end product. After the clustering operation, the resulting clustered matrix of the case company is displayed in Fig. 5. From Fig. 5, we can observe four clusters/modules, where the biggest one consists of 75 components, the second biggest consists of 7 components and the remaining two modules consist of 2 components each. It is noticeable that the biggest module, termed the "basic module", has a highly integrated architecture, which is a bottleneck in the company's design architecture. This integrated architecture arises from an important component, the "engine block", which is tightly interfaced with other components. As most of the components are directly or indirectly mounted on or integrated with the engine block, measures need to be taken to study the architecture of the engine block in more detail. Initially it is suggested to break the engine block into subsections, which can be assembled according to customers' demands. This suggested architectural change of the engine block might reduce the dependencies of the components on the engine block, but some additional problems will arise. For instance, components like the crankshaft, sump, exhaust pipe, etc., also need to be split in order to fit the decomposed engine block. It therefore demands substantial effort to analyze the trade-off between the cost and the improvement in overall performance of the proposed engine architecture. In this study, the architectures of other sub-assemblies such as the turbocharger bracket system, exhaust gas pipe system and fuel injection system were also investigated to find possible bottlenecks in their corresponding design architectures. It is observed that the architectures of these three sub-assemblies are not truly modular and standard, and so they cannot be used for any model of the engine. This study therefore suggested that the
Fig. 5 Case company’s engine component dependency (clustered)
company's management make extra efforts to make the three sub-assemblies as standard and modular as possible. Both modularization and standardization of the sub-assemblies can be achieved by developing components that are as standard as possible and avoiding non-standard components within the architectures. In such a case, there might be a need to redesign or modify some of the components to make the three sub-assemblies standard for any model of the developed engine. If the components of the sub-assemblies were more standard, it would be easy to formulate the required modular architecture. It is generally the case that standard components enhance modularity, whereas non-standard or unique components degrade the modularity level substantially. If the designs of the sub-assemblies were converted to be fully modular, it would be quite easy for the case company to meet diverse customers' needs. It would also help in maintaining the engine, as customers could order specific module(s) or sections of module(s) when certain parts of the engine need repair, without changing the complete engine or sub-assemblies.
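As an aside, the dependency levels described in Section 5.2 lend themselves to a simple numerical treatment. The sketch below is a hypothetical illustration with invented part names and levels, not the company's data or the PSM32 clustering algorithm: levels are recoded as interaction strengths and a candidate grouping is scored by the interaction it leaves between clusters, which is the quantity a clustering operation tries to keep small.

```python
import numpy as np

# (dependent part, part it depends on): level, where 1 = highest and
# 3 = lowest dependency, as in the case study. Values are invented.
levels = {
    ("crankshaft", "engine_block"): 1,
    ("sump", "engine_block"): 2,
    ("exhaust_pipe", "engine_block"): 2,
    ("turbo_bracket", "exhaust_pipe"): 3,
}
parts = sorted({p for pair in levels for p in pair})
idx = {p: i for i, p in enumerate(parts)}

# Recode level 1/2/3 into strength 3/2/1 so that larger means stronger.
strength = np.zeros((len(parts), len(parts)))
for (a, b), level in levels.items():
    strength[idx[a], idx[b]] = 4 - level

def between_cluster_interaction(clusters):
    """Total interaction strength left between parts placed in different clusters."""
    label = {p: k for k, cluster in enumerate(clusters) for p in cluster}
    return sum(strength[idx[a], idx[b]] for (a, b) in levels if label[a] != label[b])

candidate = [{"engine_block", "crankshaft", "sump", "exhaust_pipe"}, {"turbo_bracket"}]
print(between_cluster_interaction(candidate))  # lower values indicate a more modular grouping
```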
6 Study Results and Limitations

This research provides new insights and enhances knowledge in the field of managing product design for customization. The main objective has been to understand the management of information flow, combined with the modularization principle, for developing customized products. It is observed in the literature that information in firms is not properly managed or stored (Vickers, 1983, 1984; Entsua-Mensah, 1996). Firms mostly look for expert knowledge to stay competitive in the market; however, this knowledge or information is not properly stored for future use. There is a lack of proper tools or methodologies to store valuable information and to exchange information among product development participants (Browning et al, 2002; Eppinger and Salminen, 2001). The first research question tries to answer this dilemma within the boundary of product design and development. Practical problems of information exchange are considered and expected solution approaches are formulated within the scope of this research. In the present study, we have also attempted to answer the second research question, which addresses a major concern in manufacturing organizations. To cope with today's customization or individualization trends, firms need suitable product design and development in order to build relationships among customers, designers and manufacturers. From this research, we have noticed that along with the information perspective, firms are also lagging behind in terms of appropriate product design and development measures, which are the base requirements for enhancing productivity and customer satisfaction. Specific measures such as modularity or modular design are widely used by firms, although there are limited tools or methodologies to justify their usability. The measure of modularity, its effectiveness and suitability are discussed and evaluated in this paper. The detailed concept of modular product
architecture and its usability for product customization are elaborated through a case example. There were some limitations in conducting this study. First of all, in the empirical part of the study, the information dependencies are not analyzed financially but only on a theoretical basis. Any strategic decision always needs to be financially validated before its applicability is considered. This limitation can be overcome by taking the firm's financial capability into account from a strategic point of view. Secondly, the information dependencies among components were collected from various sources such as designers, engineers, managers and workers, in which case there was always a possibility of personal bias in interpreting the dependency levels that might affect the results of this research. Finally, this research has been a qualitative study based on a single case study, so the results should be interpreted cautiously and over-generalization should be avoided. More case studies might be useful to validate the matrix methodology as presented and discussed within the scope of this paper.
7 Discussion and Conclusions

The study conducted within the case company was very fruitful in terms of research perspectives. The main target of this study was to investigate the information flow within the company with respect to design architecture, customers' preferences and business strategies. In order to model this information flow, we implemented the DSM tool "PSM32" to measure the level of information dependency and to investigate how this issue affects the overall design architecture of the case company. In terms of design architecture, we studied the existing product architecture within the company and the overall information dependencies among components and sub-assemblies. These dependencies were modeled to formulate the current practice and to look for possible bottlenecks. After analyzing the company's present engine architecture, it was observed that the architecture is mostly an integral one, with limited forms of modularity. This is due to the main part of the engine, the "engine block": almost all components or modules are tightly fitted to the engine block, resulting in a highly integrated structure. This is also displayed in our analysis (Fig. 5), where the major portion of the components is tightly clustered due to strong interdependencies. This is one of the major bottlenecks in the case company. In order to overcome this bottleneck, splitting the engine block into several pieces or modules and joining them together based on power demands (number of cylinders) is suggested. This might mean rearranging or redesigning some of the components or modules to improve design flexibility in the engine assembly process. The two objectives of this study were fulfilled from both theoretical and empirical perspectives. The importance of the information flow was illustrated theoretically as well as practically with the basic dependency structure of the complete engine. It is observed that the information dependency is tightly related to the
overall performance of the engine in terms of design, manufacturing, assembly and delivery to the market. This information dependency also determines the required modules for a modular product. Such a modular product architecture is very applicable for developing a basic platform, from which a stream of product variants can be produced in minimum time and at least cost. This strategy also supports the concept of mass customization, which is a competitive means for today's business success. In order to facilitate customization, firms need to examine several strategic options, but there is a lack of methodology for selecting the optimal one. There are potential research areas where researchers could develop tools or methods for selecting the strategic option best suited to a firm. The dynamics of the product customization process need to be grounded in product design and development strategies. In today's business, modularity is widely used by many leading firms globally. There are many ways to formulate specific module functionalities, but none of them is obviously appropriate for business success. To this day, there have been no specific rules or metrics to determine the optimum number of modules for a certain product or product family. There is potential scope to extend this research in this particular area, especially focusing on the issue of modularity rules.
References

Afuah A (1998) Innovation management: Strategies, implementation, and profits. Oxford University Press, New York Amoako-Gyampah K, Acquaah M (2008) Manufacturing strategy, competitive strategy and firm performance: An empirical study in a developing economy environment. International Journal of Production Economics 111(2):575–592 Baldwin C, Clark K (1994) Modularity-in-design: An analysis based on the theory of real options. Working Paper 93-026, Harvard Business School, Boston, MA Baldwin C, Clark K (1997) Managing in an age of modularity. Harvard Business Review 75(5):84–93 Baldwin C, Clark K (2000) Design rules. Vol. 1: The power of modularity. MIT Press, Cambridge, MA Blecker T, Friedrich G, Kaluza B, Abdelkafi N (2005) Information and management systems for product customization. Springer Science + Business Media, Inc, USA Broekhuizen T, Alsem K (2002) Success factors for mass customization: A conceptual model. Journal of Market-Focused Management 5(4):309–330 Browning T (2001) Applying the design structure matrix to system decomposition and integration problems: A review and new directions. IEEE Transactions on Engineering Management 48(3):292–306 Browning T, Deyst J, Eppinger S, Whitney D (2002) Adding value in product development by creating information and reducing risk. IEEE Transactions on Engineering Management 49(4):443–458
Buxey G (2000) Strategies in an era of global competition. International Journal of Operations and Production Management 20:997–1016 Calantone R, Di Benedetto C (2007) Clustering product launches by price and launch strategy. Journal of Business Industrial Marketing 22(1):4–19 Chesbrough H, Prencipe A (2008) Networks of innovation and modularity: A dynamic perspective. International Journal of Technology Management 42(4):414– 425 Chmarra M, Arts L, Tomiyama T (2008) Towards adaptable architecture. ASME 2008 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, New York, USA Dahmus J, Gonzalez-Zugasti J, Otto K (2001) Modular product architecture. Design Studies 22(5):409–424 Dellaert B, Stremersch S (2005) Marketing mass-customized products: Striking a balance between utility and complexity. Journal of Marketing Research 42(2):219–227 Duray R, Ward P, Milligan G, Berry W (2000) Approaches to mass customization: Configurations and empirical validation. Journal of Operations Management 18(6):605–626 Entsua-Mensah C (1996) Towards effective information management: A view from Ghana. International Journal of Information Management 16(2):149–156 Eppinger S, Salminen V (2001) Patterns of product development interactions. Proceedings of International Conference on Engineering Design, ICED’01 Eppinger S, Whitney D, Smith R, Gebala D (1994) A model-based method for organizing tasks in product development. Research in Engineering Design 6(1):1–13 Ericsson A, Erixon G (1999) Controlling design variants: Modular product platforms. ASME Press, New York, NY Finne C (2006) Publishing building product information: A value net perspective. Construction Innovation 6(2):79–96 Fixson S (2007) Modularity and commonality research: Past developments and future opportunities. Concurrent Engineering Research Applications 15(2):85–111 Forza C, Salvador F (2001) Information flows for high-performance manufacturing. International Journal of Production Economics 70(1):21–36 Franke N, Schreier M (2008) Product uniqueness as a driver of customer utility in mass customization. Marketing Letters 19(2):93–107 Ha A, Porteus E (1995) Optimal timing of reviews in concurrent design for manufacturability. Management Science 41:1431–1447 Huang G, Simpson T, Pine II B (2005) The power of product platforms in mass customisation. International Journal of Mass Customisation 1:1–13 Jiao J, Tseng M (2000) Fundamentals of product family architecture. Integrated Manufacturing Systems 11:469–483 Jiao J, Simpson T, Siddique Z (2007) Product family design and platform-based product development: A state-of-the-art review. Journal of Intelligent Manufacturing 18:5–29 Joglekar N, Yassine A (2001) Management of information technology driven product development processes. In: Ganeshan, R, Boone, T (eds) New directions in
supply-chain management: technology, strategy, and implementation NY: AMACOM, New York Jose A, Tollenaere M (2005) Modular and platform methods for product family design: Literature analysis. Journal of Intelligent manufacturing 16:371–390 Karandikar H, Nidamarthi S (2007) Implementing a platform strategy for a systems business via standardization. Management 18:267–280 Kiritsis D, Bufardi A, Xirouchakis P (2003) Research issues on product lifecycle management and information tracking using smart embedded systems. Advanced Engineering Informatics 17:189–202 Kleinschmidt E, de Brentani U, Salomo S (2007) Performance of global new product development programs: A resource-based view. Journal of Product Innovation Management 24:419–441 Kumar A (2007) From mass customization to mass personalization: A strategic transformation. International Journal of Flexible Manufacturing Systems 19:533– 547 Kumar S, Liu D (2005) Impact of globalisation on entrepreneurial enterprises in the world markets. International Journal of Management and Enterprise Development 2:46–64 Lau Antonio K, Yam R, Tang E (2007) The impacts of product modularity on competitive capabilities and performance: An empirical study. International Journal of Production Economics 105(1):1–20 Lee C, Zhou X (2000) Quality management and manufacturing strategies in China. International Journal of Quality and Reliability Management 17:876–898 McCutcheon D, Raturi A, Meredith J (1994) The customization-responsiveness squeeze. Sloan Management Review 35:89–89 Meyer M, Tertzakian P, Utterback J (1997) Metrics for managing research and development in the context of the product family. Management Science 43:88–111 Mikkola J (2000) Modularization assessment of product architecture. Proceedings of DRUID’Winter Conference 2000, Hillerod, Denmark, January 2 Muffatto M, Roveda M (2002) Product architecture and platforms: Aconceptual framework. International Journal of Technology Management 24:1–16 Panchal J, Fathianathan M (2008) Product realization in the age of mass collaboration. In: Proceedings of ASME 2008 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, New York, USA Piller F (2007) Observations on the present and future of mass customization. International Journal of Flexible Manufacturing Systems 19:630–636 Piller F, Reichwald R, Tseng M (2006) Competitive advantage through customer centric enterprises. International Journal of Mass Customisation 1:157–165 Pine B (1993) Mass customization: The new frontier in business competition. Harvard Business School Press, Boston, MA Pine B, Gilmore J (2007) Authenticity what consumers really want. Harvard Business School Press, Boston, MA Pollard D, Chuo S, Lee B (2008) Strategies For Mass Customization. Journal of Business & Economics Research 6:77–86
Salvador F, Forza C, Rungtusanatham M (2002) Modularity, product variety, production volume, and component sourcing: Theorizing beyond generic prescriptions. Journal of Operations Management 20:549–575 Sanchez R, Mahoney J (1996) Modularity, flexibility, and knowledge management in product and organization design. Strategic Management Journal 17:63–76 Shamsuzzoha A (2009) Restructuring design processes for better information exchange. International Journal of Management and Enterprise Development 7:299–313 Shamsuzzoha A (2010) Modular product development for mass customization. PhD thesis, University of Vaasa, Finland Simon H (1962) The architecture of complexity. Proceedings of the American Philosophical Society 106:467–482 Simpson T (2004) Product platform design and customization: Status and promise. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 18:3– 20 Slater S, Mohr J (2006) Successful development and commercialization of technological innovation: Insights based on strategy type. Journal of Product Innovation Management 23:26–33 S¨oderquist K, Nellore R (2000) Information systems in fast cycle development: Identifying user needs in integrated automotive component development. R & D Management 30:199–212 Staudenmayer N, Tripsas M, Tucci C (2005) Interfirm modularity and its implications for product development. Journal of Product Innovation Management 22:303–321 Steward AD (1981) The design structure system: A method for managing the design of complex systems. IEEE Transactions on Software Engineering 28:71–74 Su J, Chang Y, Ferguson M (2005) Evaluation of postponement structures to accommodate mass customization. Journal of Operations Management 23:305–318 Thevenot H, Simpson T (2007) A comprehensive metric for evaluating component commonality in a product family. Journal of Engineering Design 18:577–598 Ulrich K (1995) The role of product architecture in the manufacturing firm. Research Policy 24:419–440 Ulrich K, Eppinger S, et al (1995) Product design and development. McGraw-Hill New York Vickers P (1983) Common problems of documentary information transfer, storage and retrieval in industrial organizations. Journal of Documentation 39:217–229 Vickers P (1984) Promoting the concept of information management within organisations. Journal of Information Science 9:123–127 Yang B, Burns N, Backhouse C (2004) Postponement: A review and an integrated framework. International Journal of Operations and Production Management 24:468–487 Yassine A (2004) An introduction to modeling and analyzing complex product development processes using the design structure matrix (DSM) method. Quaderni di Management (Italian Management Review) 9
Yassine A, Falkenburg D (1999) A framework for design process specifications management. Journal of Engineering Design 10:223–234 Yassine A, Kim K, Roemer T, Holweg M (2004) Investigating the role of IT in customized product design. Production Planning & Control 15:422–434 Yassine A, Sreenivas R, Zhu J (2008) Managing the exchange of information in product development. European Journal of Operational Research 184:311–326 Zacharias N, Yassine A (2008) Optimal platform investment for product family design. Journal of Intelligent Manufacturing 19:131–148
Part VI
Supply Chain Management
The Impact of Technological Change and OIPs on Lead Time Reduction Krisztina Demeter and Zsolt Matyusz
Abstract The paper examines the effect of technological change on operations improvement programs (OIPs) and operational performance. Previous studies in the field of operations management seldom took into account how the technological change rate of the industry affects the use of manufacturing programs and the resulting performance despite the importance of technological innovation in manufacturing. Our goal was to confirm empirically that plants with different level of achieved lead time reduction used certain OIPs to a different extent. We also analyzed how the technological change rate of the industry affects the use of OIPs. We used ANOVA and multiway ANOVA to confirm our hypotheses. Data were acquired through the fifth round of the International Manufacturing Strategy Survey (IMSS), which contains more than 550 companies from the ISIC sectors 28–35. We found that companies with effective lead time reduction implemented OIPs to a greater extent, and also process technology changes have a higher impact on the use of OIPs than product technology changes. Key words: lead time reduction, technological change, operations improvement programs
Krisztina Demeter (B) and Zsolt Matyusz
Corvinus University of Budapest, Department of Logistics and Supply Chain Management, Hungary
e-mail: [email protected]; [email protected]

1 Introduction

This paper examines the effect of an environmental factor, technological change, on operations management practices and operational performance. Environment has been considered an essential factor in some areas of economics – such as business policy, organizational theory or strategic management – for more than half a century now. It
is therefore quite intriguing why the field of operations management has not paid enough attention to the environment until recently, although the environment and technological changes in particular might have a great impact on how operations work and perform. The link between manufacturing programs and operational performance, and some affecting factors, e.g. the effect of manufacturing strategy, size (Cagliano and Spina, 2000; Demeter and Matyusz, 2008) are well studied in the literature, both theoretically and empirically. However, previous studies in the field of operations management seldom took into account how the technological change rate of the industry affects the use of manufacturing programs and the resulting performance despite the importance of technological innovation in manufacturing (Sousa and Voss, 2008). One has to look into adjacent fields to gain more insight into this problem (e.g. Angel and Engstrom, 1995; Raymond et al, 1996; Sirilli and Evangelista, 1998). Operational performance is a multidimensional term that usually includes issues of costs, quality, and time. In this paper we focus on the latter, as it seems to become more and more important nowadays (just see the representatives of time based competition, such as Stalk, 1988; Blackburn, 1991). According to Suri (1998) one principle of quick response manufacturing is to stick to measuring and rewarding the reduction in total lead time, which should include components of procurement, production, sales lead times, as well as the time to market. These components also reflect the fact that quick response manufacturing works at its best if it is applied to the whole organization, not just to manufacturing. We assume that the implementation of certain manufacturing improvement programs (e.g. process control, quality improvement, product development, automation) should significantly affect the achieved lead time reduction of the plant. Our goal is to confirm empirically that plants with different level of achieved lead time reduction used certain operations improvement programs (OIPs) to a different extent. We also analyze how the technological change rate of the industry affects the use of OIPs. The paper is structured as follows. First we discuss how various OIPs affect lead time reduction. Then we review the relevant literature of the role of environment and technological change in the past and nowadays and develop our research model. After presenting our international database we make the analyses. Finally the results are discussed and the conclusions are drawn.
2 Literature Review

First we have a look at the literature on the relationship between OIPs and lead time reduction. Then the impact of technological change on operations improvement, especially on lead times, is discussed.
2.1 Relationship Between OIPs and Lead Time Reduction

There are several improvement programs in operations that might affect lead times by improving resources (technologies and people) or by improving the processes of managing physical operations and the connected informational processes.
Better technologies can have a particularly important impact on lead times. Automation, robotization and flexible manufacturing systems (Mehrabi et al, 2002), for example, can make the manufacturing lead time very reliable since the machines can produce every single product at the same pace. They can also improve other typical operational measures, such as unit cost due to higher productivity, quality due to fewer mistakes and higher precision, and even flexibility, depending on the features of the applied technology (Mehrabi et al, 2002). However, irrespective of the positive impacts on performance measures, the high investment costs, high risks, and additional problems of training, maintenance, machine reliability and reconfigurability (Mehrabi et al, 2002) associated with technology investments can prevent companies, especially smaller ones, from justifying them (Hyland et al, 2004). Although more flexible machinery with computer control can support smaller lot sizes, and thus mass customization, it is still very expensive for smaller companies and needs special experts to handle it. Human resource programs can also have a positive impact. People can improve their own processes by eliminating waste, which automatically reduces lead times and increases transparency. This can be achieved by organizing team work, by training, and by giving more autonomy to people. This in itself can increase intrinsic motivation (De Treville and Antonakis, 2006), which can be further enhanced by incentives. Motivated people will work faster and gradually reduce the required operation time using continuous improvement programs. More transparent production processes reduce work in process inventories and thus the time that various materials spend in the system. Thus streamlined processes, headed by lean transformations, also impact lead times positively (Ward and Zhou, 2006). Production control can also reduce the level of inventories. If workstations are connected and succeeding stations give the signal for preceding stations to work, then inventories vary between tight limits and thus run through the system quickly. This pull system is usually organized by kanban cards between workstations and delivery milkruns between the warehouse and the various workstations. ERP and other computer-based systems can also enhance the efficiency of information flows (Cooper and Zmud, 1990), and they can further enhance the achievements of lean improvements (Ward and Zhou, 2006), for example by quick data analysis and fast feedback. Information is also an excellent substitute for inventories if we want to handle system uncertainties.
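The claim that lower work in process shortens the time materials spend in the system is an application of Little's law (lead time = WIP / throughput). The toy calculation below, written in Python with invented numbers rather than figures from the paper, shows how a kanban cap on WIP shortens lead time at an unchanged output rate.

```python
def lead_time_days(wip_units: float, throughput_units_per_day: float) -> float:
    """Little's law: average time a unit spends in the system."""
    return wip_units / throughput_units_per_day

throughput = 50.0  # units shipped per day, unchanged by the WIP cap
print(lead_time_days(400, throughput))  # 8.0 days with uncontrolled WIP
print(lead_time_days(150, throughput))  # 3.0 days once kanban caps WIP at 150 units
```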
2.2 The Rate of Technological Change

According to Bourgeois III (1980), environment has two main attributes, complexity and dynamism. Complexity refers to the number and diversity of external factors facing the organization, while dynamism shows the degree of change in these factors. These measures are similar to those used before in the business policy literature. The rate of technological change in the industry can be treated as part of environmental dynamism.
Very fast technological change is not necessarily good for companies. The main question is whether the investment will pay off financially. This depends on the utilization level of the new technology and on the period during which it is used. Enough products have to be sold to justify the investment, which can happen if large volumes are produced in a short period or smaller volumes over longer periods. The latter is riskier if the technology change is fast. It means that large mass producers with concentrated high-volume production can have an advantage: the payoff of new technology implementations is higher, and thus such investments happen more often in large companies than in smaller ones (Hyland et al, 2004). Smaller companies have to buy more flexible, reconfigurable machinery that can be used for several products in parallel or one after the other (through the years). But flexible technology is even more expensive and it can become obsolete very fast. New technology can provide advantages: it can increase productivity, quality and reliability, and can also reduce operation time. So if the company invests faster than its competitors, then the investment can also pay off better and faster. Thus fast technology changes can result in fast position changes in the market, and the company can produce higher volumes. However, new technology always means higher risk. Therefore Toyota, for example, is very conservative in its investments, waiting for others to identify problems which can then be eliminated by the technology provider (Liker, 2004). Fast technology change can have a double effect on lead times. On the one hand, better technology in itself can reduce lead times, so lead time performance will improve. On the other hand, technology is operated by people who have to learn how to work with it. So there is a learning curve which can have a negative effect on lead times in the short run, but performance can improve fast later on. If the changes are very fast, then there can be many drawbacks in lead time performance, which can also impact the quality and performance of other management programs.
3 Research Questions

As we mentioned in the literature review, there are several improvement programs in operations that might affect lead times. We assume that those firms that managed to reduce their lead time effectively used OIPs to a greater extent than those firms where the lead time remained almost the same or became worse. In order to address this assumption we formulate the following hypothesis.
3.1 Hypothesis 1: Companies with Effective Lead Time Reduction Implemented OIPs to a Greater Extent

But the relationship between OIPs and lead time reduction can also be affected by the rate of change in technology. Technology changes basically for two reasons: (a) the manufacturing/logistics process is to be improved, or (b) the product itself changes
(the old one becomes obsolete and/or new products are introduced), which requires smaller or larger modifications in the technologies used. Clearly, process technology changes affect the manufacturing system more dramatically: people have to learn how to use the new technology; product and information flows and rules can change; maintenance and cleaning, the way of working, practically everything around the workers will change. If the product changes, that change does not necessarily have such a high impact, especially if workers are used to such changes. They need some hours of training to learn the features of the new product, but they can use the same or slightly modified technology, so their procedures will not really change. So our next hypothesis is:
3.2 Hypothesis 2: Process Technology Changes Have a Higher Impact on the Use of OIPs than Product Changes

As we discussed, size is an important factor in operations improvement program decisions. Although we do not place particular emphasis on size in this paper, we control for its effect on the use of OIPs. Our model is summarized in Fig. 1.
Fig. 1 The research model [schematic: the rate of change in process technologies and the rate of change in product technologies influence the use of operations improvement programs (H2), which in turn drive lead time reduction (H1); size is included as a control]
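According to the abstract, the hypotheses implied by this model were tested with ANOVA and multiway ANOVA. The sketch below shows, under stated assumptions, what such tests could look like in Python with statsmodels; the file name and column names (oip_use, lead_time_group, process_tech_change, product_tech_change, size_class) are hypothetical placeholders rather than the actual IMSS-V variables.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical plant-level extract of the survey data.
df = pd.read_csv("imss5_plants.csv")

# H1: does the extent of OIP use differ across lead-time-reduction groups?
h1 = ols("oip_use ~ C(lead_time_group)", data=df).fit()
print(sm.stats.anova_lm(h1, typ=2))

# H2: multiway ANOVA with process and product technology change rates,
# controlling for plant size.
h2 = ols("oip_use ~ C(process_tech_change) + C(product_tech_change) + C(size_class)",
         data=df).fit()
print(sm.stats.anova_lm(h2, typ=2))
```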
4 Survey Data

We have used IMSS (International Manufacturing Strategy Survey) data for our analyses. IMSS is a global network of researchers with the objective of studying international manufacturing strategies, their implementation and the resulting performance in operations and related areas, such as supply chain management and new product development. IMSS was initiated by Chris Voss (London Business School, UK) and Per Lindberg (Chalmers University of Technology, Sweden) in 1992. Since that time, five rounds of the survey have been completed. In IMSS, data are collected by national research groups using a standard questionnaire developed by a panel of experts, exploiting the previous editions of the research. The questionnaire is translated, if needed, into local languages by OM professors. Although there is a suggested method of collecting data (focus on better
(focus on better companies; contact companies by mail and/or phone; send the questionnaire in printed form to one contact person per company, usually a plant or manufacturing manager; follow up to help and encourage the contact person to fill in the questionnaire), it is up to the national research team to make decisions on this procedure. However, research teams have to provide data about the sampling procedure to the global network. For further details of the survey see the summary book of IMSS-I (Lindberg et al, 1998) or articles that used previous rounds of the survey (e.g. Frohlich and Westbrook, 2001; Acur et al, 2003; Husseini and O'Brien, 2004; Laugen et al, 2005; Cagliano et al, 2006). The IMSS-V database, the one used in this paper, contains 561 valid observations from 17 countries (mainly from Europe but also from all other continents except Africa), collected in 2009. The survey focused on ISIC 28–35: manufacture of fabricated metal products, machinery and equipment. The industry and country characteristics of the database can be seen in Tables 1 and 2.

Table 1 Industry distribution of the sample

Manufacture of ...                                               Valid answers
fabricated metal products                                        190
machinery and equipment                                          154
office, accounting and computing machinery                         8
electrical machinery and apparatus                                71
radio, television and communication equipment and apparatus       30
medical, precision and optical instruments, watches and clocks    33
motor vehicles, trailers and semi-trailers                        33
other transport equipment                                         22
Missing                                                           20
Table 2 Geographic distribution of the sample

Country        Observations
Belgium        36
Canada         19
China          59
Denmark        18
Estonia        27
Germany        38
Hungary        71
Ireland         6
Italy          56
Japan          28
Mexico         17
Netherlands    51
Portugal       10
Spain          39
Switzerland    31
Taiwan         31
UK             25
Total average  33
5 Operationalization of the Model Variables

5.1 Lead Time Reduction

As we mentioned earlier, we measured operational performance improvement by the success in lead time reduction. We created a construct that consists of four variables
related to lead time (procurement lead time, manufacturing lead time, delivery speed, time to market). Each of these variables was measured on a 5-point Likert scale. We tested the construct for unidimensionality; its Cronbach's Alpha of 0.789 is above the 0.6 threshold. For the exact formulation of the survey questions please refer to Q1 in the Appendix. The value of the newly created construct is the mean of the four variables, hence it can lie between 1 and 5 points.
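To make the construct-building step concrete, the following is a minimal sketch (not the authors' code) of how such a construct and its Cronbach's Alpha could be computed; the data file and column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("imss5_plants.csv")  # hypothetical data file
lead_time_items = df[["procurement_lt", "manufacturing_lt",
                      "delivery_speed", "time_to_market"]].dropna()

print(f"Cronbach's alpha = {cronbach_alpha(lead_time_items):.3f}")  # 0.789 in the paper
df["lead_time_reduction"] = lead_time_items.mean(axis=1)            # construct score in [1, 5]
```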
5.2 OIPs

There are many OIPs throughout the survey that can be related to different functions of the firm. Namely, there are human resource practices (delegation and knowledge of the workforce, lean organization model, continuous improvement programs, workforce flexibility), process control practices (process focus, pull production), technology practices (process automation, flexible manufacturing/assembly systems, tracking and tracing, information sharing and process control), quality practices (quality improvement, equipment productivity, measurement systems, product quality along the supply chain) and product development practices (design integration, organizational integration, technological integration). Each of these variables was measured on a 5-point Likert scale ("Indicate degree of the following action programmes undertaken over the last three years"; 1 – none, 5 – high). For the exact formulation of the survey questions please refer to Q2 in the Appendix.
5.3 Rate of Technological Change

There is a separate block of questions (consisting of 4 variables) that characterizes technological change in the firm's business. We created two constructs. The first construct (process technology change) consists of two variables that measure the technological change in (1) the logistics processes and (2) the core production processes. The second construct (product technology change) also consists of two variables that measure (1) the obsolescence of products and (2) the introduction of new products. For the exact formulation of the survey questions please refer to Q3 in the Appendix. The value of each newly created construct is the mean of its two variables, hence it can lie between 1 and 5 points.
5.4 SMEs and Large Companies

In order to control for the effect of size on operations practices, we divided the sample by size. On the basis of EU regulations, companies with fewer than 250 employees and less than 50 million Euro revenue are considered SMEs,
while those with more employees and higher revenues are large companies. As we have almost complete data for the number of employees, and much less data available for sales (85 missing values), we used the number of employees to separate SMEs from large companies. We chose size as a control variable because a great amount of previous research confirms the effect of company size on the use of operations practices (see e.g. Sousa and Voss, 2008, for an exhaustive list of studies), i.e. bigger firms are more inclined to invest in operations practices and implement them to a greater extent. The sample is balanced in terms of company size (48.4% of the companies are SMEs and 51.6% are large).
5.5 Handling Common Method Bias

Common method variance (variance attributable to the measurement method rather than to the constructs the measures represent) may be a potential problem in survey research (Podsakoff et al, 2003). Therefore, we collected some procedural and statistical arguments to show that the possible extent of common method bias is not significant with respect to our research. First, the design of the IMSS questionnaire helps to control common method bias. The variables used in the analysis come from three different sections of the questionnaire (business performance variables from Section A, operational performance variables from Section B and OM action program variables from Section C). Even though only one respondent per company filled in the questionnaire, the relevant questions with the key variables were well separated from each other, which reduced the respondent's possible motivation to use previous answers in order to fill in the gaps. Second, we applied Harman's single-factor test as a statistical support tool for assessing common method bias. Following Podsakoff et al (2003), we loaded all variables in our model into an exploratory factor analysis and examined the unrotated factor solution in order to determine the number of factors necessary to account for the variance in the variables. The result was nine factors with eigenvalues over 1.0, and the first factor contributed only 28% of the variance explained. This indicates that the possible amount of common method variance is not large enough to bias our analysis. Even though Harman's test is not very sensitive, together with the procedural argument it suggests that common method bias was not a relevant concern in our research.
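As an illustration of the Harman-style check described above, here is a minimal sketch that approximates it with unrotated principal components on the correlation matrix; it is not necessarily the exact factor-analysis routine the authors used, and the file name and column selection are hypothetical.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("imss5_plants.csv")                 # hypothetical data file
X = df.filter(regex="^(oip_|perf_)").dropna()        # all model variables (hypothetical prefixes)

corr = np.corrcoef(X.to_numpy(), rowvar=False)       # correlation matrix of the items
eigvals = np.linalg.eigvalsh(corr)[::-1]             # eigenvalues, largest first

n_factors = int((eigvals > 1.0).sum())               # Kaiser criterion (paper: nine factors)
first_share = eigvals[0] / eigvals.sum()             # variance share of first factor (paper: ~28%)
print(f"factors with eigenvalue > 1: {n_factors}, first factor explains {first_share:.1%}")
```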
6 Analysis and Results

The analysis consists of two main steps. First, we examine whether there is a relationship between the use of OIPs and lead time reduction or not. Second, we split the sample and investigate those companies that were successful in lead time reduction. In this sub-sample we analyze the effect of technological change on the use of
OIPs while controlling for company size. We expect that technological change has a significant effect on several OIPs.
6.1 The Use of OIPs and Lead Time Reduction (H1)

We divided the sample into two sub-samples based on the success of the companies in terms of lead time reduction. The first group consisted of firms where the 'Lead time reduction' construct had a value of 2.75 or lower. This means that these companies were not able to improve their lead time significantly, or their lead time became worse; 226 companies were in this group. The second group consisted of firms where the 'Lead time reduction' construct had a value of 3.25 or higher. This means that these companies improved their lead time significantly; 198 companies were in this group. Companies with a construct value between 2.75 and 3.25 were omitted in order to create a clear separation between the two groups. We compared the use of OIPs in the two groups by ANOVA to test Hypothesis 1. According to the results, the difference in the use of OIPs between the groups was always significant at the 5% level, with three exceptions: workforce flexibility (p = 0.058), globalization of sourcing (p = 0.103) and globalization of sales (p = 0.080). Thus Hypothesis 1 is supported. Firms that achieved significant lead time reduction indeed use OIPs to a greater extent.
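A minimal sketch of this kind of group comparison (not the authors' code; the file name, column names and the short OIP list are hypothetical):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("imss5_plants.csv")                       # hypothetical data file
low  = df[df["lead_time_reduction"] <= 2.75]               # 226 firms in the paper
high = df[df["lead_time_reduction"] >= 3.25]               # 198 firms in the paper
# firms with construct values between 2.75 and 3.25 are deliberately omitted

for oip in ["delegation", "lean_org", "ci_programs", "pull_production"]:
    f_stat, p_value = stats.f_oneway(low[oip].dropna(), high[oip].dropna())
    print(f"{oip:20s} F = {f_stat:6.2f}  p = {p_value:.3f}")
```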
6.2 The Effect of Technological Change on the Use of OIPs (H2)

In this step we continued examining the 198 firms that were successful in achieving significant lead time reduction; at this stage we set aside the other group, which performed badly in terms of lead time reduction. We investigated the effect of technological change on the use of OIPs while controlling for size. We broke the analysis into two parts: first the effect of process technology change and size, then the effect of product technology change and size, on the use of OIPs. Based on the two technology change constructs (process and product technology change) we divided the sub-sample further. In the case of process technology change, a construct value of 2.5 or lower indicated that processes changed more slowly than average in these firms' environments (63 firms were in this group). A construct value of 3.5 or higher indicated that processes changed faster than average (81 firms were in this group, and there were 54 missing values). In the case of product technology change, a construct value of 2.5 or lower indicated that products became obsolete or new products were introduced more rarely than average in these firms' environments (52 firms were in this group). A construct value of 3.5 or higher indicated that products became obsolete or new products were introduced more often than average (103 firms were in this group, and there were 43 missing values).
We used multiway ANOVA in order to explore the effect of the independent variables (in our case technological change and size) on a single dependent variable (one OIP at a time) and also to measure the interaction between the independent variables. Table 3 shows the results. Significant results at the p = 0.05 level are indicated by bold characters.

Table 3 Effects of technology change and size on OIPs

Process technology change and size:

Operations improvement program   Levene test   Tech. change Sign.   Eta     Size Sign.   Eta
Delegation and knowledge         0.126         0.019                0.039   0.576        0.002
Lean org. model                  0.202         0.023                0.037   0.151        0.015
CI programs                      0.636         0.125                0.017   0.001        0.079
Workforce flex                   0.413         0.024                0.036   0.138        0.016
Process focus                    0.216         0.000                0.096   0.007        0.051
Pull production                  0.137         0.007                0.052   0.007        0.052
Process aut.                     0.564         0.339                0.007   0.022        0.038
FMS/FAS                          0.640         0.006                0.056   0.110        0.019
Tracking                         0.098         0.146                0.015   0.006        0.053
Inf. sharing                     0.840         0.043                0.031   0.002        0.072
Quality improvement              0.974         0.008                0.050   0.005        0.055
Equipment productivity           0.309         0.019                0.039   0.023        0.037
Measurement systems              0.641         0.042                0.030   0.016        0.041
Design integration               0.315         0.000                0.139   0.010        0.049
Organizational integration       0.369         0.006                0.055   0.101        0.020
Technological integration        0.894         0.000                0.090   0.026        0.037

Product technology change and size:

Operations improvement program   Levene test   Tech. change Sign.   Eta     Size Sign.   Eta
Delegation and knowledge         0.592         0.005                0.053   0.437        0.004
Lean org. model                  0.267         0.038                0.029   0.221        0.10
CI programs                      0.392         0.122                0.092   0.000        0.016
Workforce flex                   0.315         0.527                0.005   0.406        0.003
Process focus                    0.581         0.170                0.021   0.081        0.013
Pull production                  0.145         0.730                0.069   0.001        0.001
Process aut.                     0.186         0.190                0.011   0.213        0.012
FMS/FAS                          0.964         0.041                0.014   0.164        0.029
Tracking                         0.204         0.199                0.033   0.029        0.011
Inf. sharing                     0.304         0.152                0.040   0.017        0.015
Quality improvement              0.466         0.014                0.041   0.053        0.026
Equipment productivity           0.463         0.006                0.052   0.191        0.012
Measurement systems              0.610         0.000                0.086   0.520        0.003
Design integration               0.048         0.001                0.076   0.594        0.002
Organizational integration       0.467         0.073                0.023   0.747        0.001
Technological integration        0.616         0.002                0.066   0.235        0.010
Levene's test examines the homogeneity of variance of the dependent variable: in order not to bias the F-test, the standard deviation of the dependent variable should be the same across the different levels of the independent variables. As we can see from Table 3, with one exception Levene's test is not significant, which means that the F-tests are not biased by heteroscedasticity. The interactions of the independent variables were also not significant (with only one exception), which means that the OIPs are not affected by the interaction of technological change and size.
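For one OIP at a time, such an analysis could look like the following minimal sketch (not the authors' code; the file and column names are hypothetical), which runs Levene's test and a two-way ANOVA with an interaction term and reports partial eta squared:

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("imss5_ltr_subsample.csv")   # hypothetical file: the 198 successful firms

# Levene's test on the dependent variable across the four factor-level cells
cells = [g["process_focus"].dropna()
         for _, g in df.groupby(["proc_tech_change_group", "size_group"])]
print(stats.levene(*cells))

# Two-way ANOVA with interaction; partial eta squared per effect
model = smf.ols("process_focus ~ C(proc_tech_change_group) * C(size_group)", data=df).fit()
table = anova_lm(model, typ=2)
resid_ss = table.loc["Residual", "sum_sq"]
table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + resid_ss)
print(table.drop(index="Residual")[["F", "PR(>F)", "partial_eta_sq"]])
```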
If we look at the effects of process technology change and size on OIPs, we can see the following. If the rate of process technology change is faster, firms use most OIPs to a greater extent; there are only a few exceptions (3 out of 16). It is also worth noting that size has a significant effect too: the bigger the company, the more it uses these OIPs (11 out of 16). We can differentiate between the different types of improvement programs. The use of human resource programs is mostly affected by the rate of process technology change. In the case of process control programs, quality programs and product development programs, both the rate of technological change and company size are important and have significant effects. In the case of technology programs, size seems to be the more influential contingency factor. The 'Eta' column shows the partial eta squared values. Not surprisingly, these values are not very high, which means that there are other factors that can explain the extent of use of improvement programs. We find the highest values among the product development programs; in these cases the explanatory power of the rate of process technology change is quite high compared to the other programs. If we look at the effects of product technology change and size on OIPs, we encounter a different picture. If the rate of product technology change is faster (i.e. products become obsolete faster and new products are introduced more often), firms use fewer OIPs to a greater extent (8 relationships out of 16 are significant). Size does not have a particularly significant effect: only 4 relationships out of 16 are significant. This means that big and small companies rarely differ in their use of OIPs when the rate of product technology change differs. Here we can again differentiate between the types of improvement programs. In the case of process control programs, product technology change has no effect at all. Quality programs and product development programs are the programs that are sensitive to product technology change. The eta values are again not very high, which means that there are other factors that can explain the extent of use of improvement programs; we find the highest values among the quality programs. Based on these results Hypothesis 2 is supported.
7 Discussion and Conclusion

There are some clear differences between those environments where process technology change is high and those with high product technology change. In the former environments firms used many OIPs to a greater extent, and there was also a significant size effect – large companies tend to use OIPs to a greater extent than SMEs. This could indicate that adaptation to dynamic process technology environments requires a great effort from the competing firms, especially from large firms. On the other hand, proper adaptation to dynamic product technology environments seems to be a bit easier to manage. Competing firms need to focus their efforts on fewer OIPs, though the emphasis is a bit different. A high rate of product technology change demands more focus on quality than a high rate of process technology change, so quality practices are the most important ones in these environments. Process control OIPs are essential in dynamic process technology environments, but not in dynamic
product technology environments. We also investigated whether there is a relationship between the two types of environments, i.e. whether a more dynamic process technology environment indicates a more dynamic product technology environment or vice versa. Out of the 198 companies, 120 provided valid answers and we have missing data for 78 companies. Table 4 shows the distribution of the companies.

Table 4 Number of companies competing in different environments

                                   Product technology change rate
Process technology change rate     Low     High    Total
Low                                26      30      56
High                               11      53      64
Total                              37      83      120
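The strength of the association in Table 4 can be checked with a chi-square test and Cramer's V; the following minimal sketch (not the authors' code) reproduces the value discussed below from the table's counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

#                     product tech change:  Low  High
counts = np.array([[26, 30],    # process tech change: Low
                   [11, 53]])   # process tech change: High

chi2, p, dof, _ = chi2_contingency(counts, correction=False)
n = counts.sum()
cramers_v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}, Cramer's V = {cramers_v:.3f}")
# With these counts, Cramer's V comes out at about 0.316.
```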
As we can see, roughly half of the companies (56) operate in non-dynamic process technology environments, while the other half (64) operate in a dynamic one. Non-dynamic process technology environments do not have a clear relationship with product technology environments – again, roughly half of these firms (26) compete in a non-dynamic product technology environment, while the other half (30) compete in a dynamic one. In the case of dynamic process technology environments we face another situation: firms in dynamic process technology environments are more likely to operate in dynamic product technology environments (53). Besides implementing a wide range of OIPs, these firms also need to put extra emphasis on quality practices in order to be more successful. The value of the Cramer's V index is 0.316, significant at the p = 0.001 level. The strength of the relationship comes from the link between dynamic process technology environments and dynamic product technology environments. Our research addressed a practical and relevant problem. From a business perspective it may add valuable insight for the firm about the industrial effects of technological change. We saw that there are significant differences between dynamic process and product technology environments, which need different treatment. From the perspective of operations management, the paper investigates a problem which is not well covered in the literature. We think that our research, which is based on an international manufacturing survey, partially fills this existing gap.

Acknowledgements This research was supported by the Hungarian Scientific Research Fund (OTKA T 76233) and the János Bolyai Research Fellowship Program.
Appendix

Q1. How has your operational performance changed over the last three years? Compared to three years ago the indicator has:
1 – deteriorated more than 10%
2 – stayed about the same (−5%/+5%)
3 – improved 10%–30%
4 – improved 30%–50%
5 – improved more than 50%

Indicators: Time to market; Delivery speed; Manufacturing lead time; Procurement lead time.
Q2. Indicate the effort put into implementing the following action programs in the last three years (effort in the last three years: 1 – none, 5 – high).
• Increasing the level of delegation and knowledge of your workforce (e.g. empowerment, training, autonomous teams)
• Implementing the lean organization model by e.g. reducing the number of levels and broadening the span of control
• Implementing continuous improvement programs through systematic initiatives (e.g. kaizen, improvement teams)
• Increasing the level of workforce flexibility following your business unit's competitive strategy (e.g. temporary workers, part time, job sharing, variable working hours)
• Restructuring manufacturing processes and layout to obtain process focus and streamlining (e.g. reorganize plant-within-a-plant; cellular layout)
• Undertaking actions to implement pull production (e.g. reducing batches, setup time, using kanban systems)
• Engaging in process automation programs (e.g. automated parts loading/unloading, automated guided vehicles, automated storage systems)
• Engaging in flexible manufacturing/assembly systems – cells programs (FMS/FAS/FMC)
• Engaging in product/part tracking and tracing programs (bar codes, RFID)
• Implementing ICT supporting information sharing and process control in production
• Quality improvement and control (e.g. TQM programs, six sigma projects, quality circles)
• Improving equipment productivity (e.g. Total Productive Maintenance programs)
• Utilizing better measurement systems for self-assessment and benchmarking purposes
• Increasing the control of product quality along the supply chain (raw materials and components certification, supplier audit, product integrity in distribution, etc.)
• Increasing design integration between product development and manufacturing through e.g. platform design, standardization and modularization, design for manufacturing, design for assembly
• Increasing the organizational integration between product development and manufacturing through e.g. teamwork, job rotation and co-location
• Increasing the technological integration between product development and manufacturing through e.g. CAD-CAM, CAPP, CAE, Product Lifecycle Management
• Rethinking and restructuring supply strategy and the organization and management of the supplier portfolio through e.g. tiered networks, bundled outsourcing, and supply base reduction
• Implementing supplier development and vendor rating programs
• Increasing the level of coordination of planning decisions and flow of goods with suppliers including dedicated investments (e.g. information systems, dedicated capacity/tools/equipment, dedicated workforce)
• Rethinking and restructuring distribution strategy in order to change the level of intermediation (e.g. using direct selling, demand aggregators, multi-echelon chains)
• Increasing the level of coordination of planning decisions and flow of goods with customers including dedicated investments (e.g. information systems, dedicated capacity/tools/equipment, dedicated workforce)
• Implementing supply chain risk management practices including early warning systems and effective contingency programs for possible supply chain disruptions
• Increasing the level of globalization of the production network (i.e. shifting production activities to off-shored plants)
• Increasing the level of globalization of sourcing
• Increasing the level of globalization of sales
• Increasing the level of globalization in product design and new component parts development
Q3. Please indicate what characterizes technological change in your business:
• Logistic processes change: Slowly (1) – Rapidly (5)
• Core production processes change: Slowly (1) – Rapidly (5)
• Products become obsolete: Hardly ever (1) – Frequently (5)
• New products are introduced: Hardly ever (1) – Frequently (5)
References

Acur N, Gertsen F, Sun H, Frick J (2003) The formalisation of manufacturing strategy and its influence on the relationship between competitive objectives, improvement goals, and action plans. International Journal of Operations & Production Management 23(10):1114–1141
Angel D, Engstrom J (1995) Manufacturing systems and technological change: The US personal computer industry. Economic Geography 71(1):79–102
Blackburn J (1991) Time-based competition: the next battleground in American manufacturing. Business One Irwin, Homewood, IL
Bourgeois III L (1980) Strategy and environment: A conceptual integration. Academy of Management Review 5(1):25–39
Cagliano R, Spina G (2000) How improvement programmes of manufacturing are selected: The role of strategic priorities and past experience. International Journal of Operations & Production Management 20(7):772
Cagliano R, Caniato F, Spina G (2006) The linkage between supply chain integration and manufacturing improvement programmes. International Journal of Operations & Production Management 26(3):282–299
Cooper R, Zmud R (1990) Information technology implementation research: a technological diffusion approach. Management Science 36(2):123–139
De Treville S, Antonakis J (2006) Could lean production job design be intrinsically motivating? Contextual, configurational, and levels-of-analysis issues. Journal of Operations Management 24(2):99–123
Demeter K, Matyusz Z (2008) The impact of size on manufacturing practices and performance. In: EurOMA Conference, Groningen
Frohlich M, Westbrook R (2001) Arcs of integration: an international study of supply chain strategies. Journal of Operations Management 19(2):185–200
Husseini S, O'Brien C (2004) Strategic implications of manufacturing performance comparisons for newly industrialising countries. International Journal of Operations & Production Management 24(11):1126–1148
Hyland P, Kennedy J, Mellor R (2004) Company size and the adoption of manufacturing technology. Journal of New Business Ideas and Trends 2(1):66–74
Laugen B, Acur N, Boer H, Frick J (2005) Best manufacturing practices. International Journal of Operations & Production Management 25(2):131–150
Liker J (2004) The Toyota way: 14 management principles from the world's greatest manufacturer. McGraw-Hill Professional, USA
Lindberg P, Voss C, Blackmon K (1998) International manufacturing strategies: context, content, and change. Kluwer Academic Publishers, Netherlands
Mehrabi M, Ulsoy A, Koren Y, Heytler P (2002) Trends and perspectives in flexible and reconfigurable manufacturing systems. Journal of Intelligent Manufacturing 13(2):135–146
Podsakoff P, MacKenzie S, Lee J, Podsakoff N (2003) Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology 88(5):879–903
Raymond L, Julien P, Carriere J, Lachance R (1996) Managing technological change in manufacturing SMEs: a multiple case analysis. International Journal of Technology Management 11(3):270–285
Sirilli G, Evangelista R (1998) Technological innovation in services and manufacturing: results from Italian surveys. Research Policy 27(9):881–899
Sousa R, Voss C (2008) Contingency research in operations management practices. Journal of Operations Management 26(6):697–713
Stalk G (1988) Time – the next source of competitive advantage. Harvard Business Review 66(4):41–51
Suri R (1998) Quick response manufacturing. Productivity Press, Portland, OR
Ward P, Zhou H (2006) Impact of Information Technology Integration and Lean/Just-In-Time Practices on Lead-Time Performance. Decision Sciences 37(2):177–203
Global Supply Chain Management and Delivery Performance: a Contingent Perspective

Ruggero Golini and Matteo Kalchschmidt
Abstract Globalization is an ever-growing phenomenon. Especially as concerns sales and distribution activities, globalization offers the possibility, even to smaller companies, to access new markets and, in turn, to increase the scale and scope of operations. However, there are some pitfalls, such as the higher lead times induced by longer distances. This can be critical in a competitive environment that stresses more and more the importance of customer responsiveness. Literature and practice provide evidence of potential levers, such as supply chain management investments, that can help companies to keep responsiveness high even in a global environment. However, different companies may face specificities linked to product and process characteristics, complexity and uncertainty that can change the impact of globalization on a company's lead time performance and the need for, or the effectiveness of, supply chain investments. The aim of this paper is to shed light on the effect of these contingencies and to identify strategic patterns followed by companies dealing with different contingent situations.
1 Introduction

Globalization has grown continuously in the last three decades thanks to several factors, such as transportation and logistics development, ICT diffusion, trade agreements among countries, and cultural factors (Knight and Cavusgil, 2004; Hülsmann et al, 2008). As a matter of fact, in the manufacturing industry world trade has grown
Ruggero Golini (B) and Matteo Kalchschmidt
Department of Economics and Technology Management, Università degli Studi di Bergamo, Viale Marconi 5, 24044 Dalmine (BG), Italy
e-mail: [email protected]
Matteo Kalchschmidt, e-mail:
[email protected]
at an average 5% pace over the last twenty-five years (WTO, 2008). This growth in globalization has motivated both practitioner and academic interest in global supply chain management (Prasad and Babbar, 2000; Lowson, 2003; Nair and Closs, 2006). Global supply chain management (SCM) can be approached from different perspectives, analyzing sourcing, manufacturing and distribution processes. Usually companies find it easier to globalize sales, then sourcing, and then manufacturing activities (Cagliano et al, 2008; Johanson and Vahlne, 1990). Global markets are in fact reachable through intermediaries and logistics companies, while opening new facilities abroad dramatically increases the complexity of operations. However, even if global markets are nowadays "closer", there are still some open issues and pitfalls. Global competition is increasingly focused on customer responsiveness, and longer distances may negatively affect this performance (Meixell and Gargeya, 2005). In order to balance the trade-off between the motivators and risks of globalization, companies have developed global SCM strategies. Previous studies have failed to detect significant impacts of global supply chains on general business success (Kotabe and Omura, 1989; Steinle and Schiele, 2008). Golini and Kalchschmidt (2009) provide evidence, in the particular case of global sourcing, that this is due to a complex relationship between globalization, supply chain investments and performance. This work aims to provide a twofold contribution. First of all, attention is here devoted to global distribution, i.e. how companies manage their sales and distribution channels globally (e.g., Bello et al, 2004). Similarly to Golini and Kalchschmidt (2009), we analyze a comparable framework but in the case of global distribution. Thus the first goal of the paper is to analyze how delivery performance can be maintained or even improved in a global distribution regime through SCM investments. Secondly, we would like to analyze the contextual factors that influence the relationships among globalization, supply chain investments and performance. In particular, we want to identify to what extent the context where companies operate influences how global distribution relates to delivery performance. The remainder of the paper is structured as follows: in the next section the literature regarding global distribution and SCM is reviewed. Then the research objectives and methodology are described, and the results of the empirical research are provided. In the end these results are discussed and conclusions regarding the implications of this work are provided.
2 Literature Review

In the last two decades, customer service and quick response strategies have acquired higher importance in the competitive arena (Hammond, 1991; Lowson et al, 1999; Christopher et al, 2004). To achieve higher responsiveness, sometimes called agility, companies can restructure their production processes, keep higher inventories or invest in SCM (Naylor et al, 1999; Lee, 2004). SCM in particular, i.e. sharing information and coordinating processes with customers, is an effective
way to enhance customer responsiveness without increasing the level of inventories, thus minimizing overall supply chain costs (e.g., Christopher, 1999; Frohlich and Westbrook, 2001). However, when operating in a global environment, investments in SCM can be more difficult to put in place, and longer distances may negatively affect responsiveness, for example in terms of delivery lead times. Several authors have stated that global supply chains by definition cannot be fast and seamless (e.g., Levy, 1997; Minner, 2003; Womack and Jones, 1996). "Lean" supply chains usually require short distances to allow frequent deliveries and lower inventories. Moreover, cultural distances and a possible lack of trust among companies can lengthen the negotiation of agreements and delay the return on SCM investments (Levy, 1997). Longer distances may also require the use of intermediaries and increase the number of actors in the value chain. This can amplify the bullwhip effect, especially for companies in the upstream part of the value chain (Lee et al, 1997). All these conditions, on the one hand, amplify the negative effect of globalization on customer responsiveness and, on the other hand, can hamper investments in increasing the coordination of flows of goods and information with customers abroad. As a reaction, companies may be pushed to achieve responsiveness through traditional methods such as make-to-stock production and higher inventories of finished products, rather than adopting make-to-order production (Pyke and Cohen, 1990). However, there are postponement – or leagile – strategies that can be effective in global distribution contexts (Naylor et al, 1999; Goldsby and Griffis, 2006; Christopher, 2005; Yang and Burns, 2003; Aitken et al, 2002). Some authors also discuss global just-in-time or specific SCM investments that are effective in global contexts (Gunasekaran and Ngai, 2005). For example, companies operating in a global environment may invest in rationalizing their distribution strategy and reduce the level of intermediation and, by consequence, the bullwhip effect. This contrasting vision in the literature on the applicability and effectiveness of SCM investments in global supply chains is probably due to a limited analysis of contextual factors. The determinants of sales globalization have been analyzed from different perspectives: oligopolistic reaction theory, the resource-based view, internationalization process theories or the international new venture theory (e.g., Knickerbocker, 1973; Bloodgood et al, 1996; Johanson and Vahlne, 1990; Oviatt and McDougall, 2005). The main determinants of globalization that can affect performance can be categorized into internal and external factors (Nkongolo-Bakenda et al, 2010). Among the external factors are: competitive pressure (e.g. attractiveness of foreign markets and product innovation), product and market standardization (i.e. customers have similar needs), and product complexity and uncertainty (sales, product or process innovation). These factors can push firms to expand their activities internationally (Zou and Cavusgil, 2002; Fujita, 1995; Johnson, 2004). For example, the less complex the product, the easier it is to manage it on an international basis (Westhead et al, 2001). Uncertainty has a twofold effect instead. On the one side, the shorter the product life cycle, the higher the interest in rapidly reaching a global market (Anderson and Gatignon, 1986). On the other side, uncertainty can
234
Ruggero Golini and Matteo Kalchschmidt
make globalization more difficult (e.g. in defining agreements with distributors or customers). On the other hand, we have the management of international competences, size and product/service differentiation as internal factors. Clearly, larger and more experienced companies have easier access to international markets (Baird et al, 1994; Manolova et al, 2002). Product differentiation acts as a defensive barrier for the company; in fact, if competition is largely based on price, internationalization can be more difficult (Mollenkopf et al, 2010). Many of the variables listed above as determinants of internationalization have a relationship with supply chain investments as well. Size is of course related to the financial resources available and to the scale needed for investments in the supply chain (Lee and Whang, 2000). Next, we can consider product and process complexity (number of components, technological level, number of production stages), which can make communication with customers and establishing partnerships more difficult (Perona and Miragliotta, 2004). Furthermore, the higher the uncertainty related to products and processes, the lower the willingness of the company to invest in SCM, because of the increased risk in the payback of the investment itself; on the other side, in conditions of high uncertainty companies may be pushed towards higher supply chain investments (Van Donk and Van Der Vaart, 2005). We can also consider the position of the decoupling point (Naylor et al, 1999). Companies operating in a make-to-stock regime may more easily balance the negative effects of globalization on customer responsiveness through higher inventories. Similarly, assemble-to-order companies may quickly produce a customized product by keeping inventories of semi-finished products. However, these strategies are not always viable if there is a high degree of customization (identified by engineer-to-order or make-to-order production) or if inventories are costly (e.g. space occupation, obsolescence). As we have seen, there are several contingent factors that can affect at the same time the degree of globalization – which should reduce customer responsiveness – and SCM investments – which should increase responsiveness. The relationship between these two dimensions appears to be highly complex and little researched. Because of that, we focus this paper on the relationship among globalization, SCM and delivery performance under different contingent situations.
3 Objectives and Methodology

Based on the previous literature review, this work aims to analyze the relationship between globalization of sales, investments in the supply chain and delivery performance. Specifically, this work analyzes this relationship from two different perspectives. First, we aim at evaluating whether a relationship between the three mentioned variables exists. Previous contributions have provided evidence that globalization of sales may have relevant impacts on delivery performance, but other contributions have shown that managing the supply chain properly can lead to better delivery
performance if proper investments are made in the supply chain (Bruce et al, 2004; Christopher, 2000; Yang and Burns, 2003; Aitken et al, 2002). Golini and Kalchschmidt (2009) demonstrated, in the case of procurement, that global sourcing, supply chain investments and inbound performance are related. This work aims to evaluate whether a similar relationship exists in the case of outbound logistics. Thus our first research question is as follows:
RQ1. What kind of relationship exists among global sales, supply chain investments and delivery performance?
From a second perspective, we want to analyze whether the previously mentioned relationships are somehow influenced by contingency variables. Typically, supply chain investments can be applied differently and with different results according, for example, to the size of the company: larger companies may leverage greater resources and thus be able to increase delivery performance more significantly. Other contributions provide evidence that the extent of personalization can have an impact on the company's capability to sell globally and, specifically, on the impact of globalization on service level (Goldsby and Griffis, 2006; Christopher, 2005). As previously detailed, some authors (Nkongolo-Bakenda et al, 2010) have shown that contingency factors may influence the impact of globalization on companies' performance. For this reason, this work analyzes whether the relationships among the mentioned variables differ according to specific contingent variables. In particular, based on the previous literature review, attention is devoted to five contingency variables: size; product and production complexity; market and process uncertainty; position of the decoupling point (as a measure of personalization); and position in the supply chain. Thus our second research question is as follows:
RQ2. To what extent are the relationships among global sales, supply chain investments and delivery performance influenced by contingent variables (i.e., company size, complexity, uncertainty, personalization, position in the supply chain)?
In order to investigate the above research questions, data were collected from the fifth edition of the International Manufacturing Strategy Survey (IMSS 5), run in 2009. This project, originally launched by London Business School and Chalmers University of Technology, studies manufacturing and supply chain strategies within the assembly industry (ISIC 28–35 classification) through a detailed questionnaire administered simultaneously in many countries by local research groups. Responses were gathered in a unique global database (Lindberg et al, 1998). The sample is described in Table 1. In particular, 485 companies (out of the 677 in the global database) provided information useful for this research; the companies are distributed among 19 different countries. Companies are mainly small (50.5% of the sample), but medium and large companies are also represented. Different industrial sectors from the assembly industry are considered, mainly the manufacture of fabricated metal products, machinery and equipment. In order to measure the extent of globalization of sales, we collected information regarding the percentage of sales outside the continent where the plant is based and regarding whether the company has increased the use of global sales in the last three years.
Table 1 Descriptive statistics in terms of (a) country, (b) size, (c) industrial sector (ISIC codes)

(a) Country
Country       N    %      Country        N     %
Belgium       25   5.2    Japan          18    3.7
Brazil        29   6.0    Mexico         11    2.3
Canada        13   2.7    Netherlands    39    8.0
China         32   6.6    Portugal        7    1.4
Denmark       13   2.7    Spain          27    5.6
Estonia       19   3.9    Switzerland    29    6.0
Germany       29   6.0    Taiwan         25    5.2
Hungary       59   12.2   UK              9    1.9
Ireland        5   1.0    USA            55    11.3
Italy         41   8.5    Total         485    100

(b) Size*
Size     N     %
Small    245   50.5
Medium    84   17.3
Large    156   32.2
Total    485   100.0

(c) ISIC**
ISIC    N     %
28      159   32.8
29      135   27.8
30        7   1.4
31       55   11.3
32       24   4.9
33       25   5.2
34       29   6.0
35       19   3.9
NA       32   6.6
Total   485   100

* Size: Small: less than 250 employees, Medium: 251–500 employees, Large: over 501 employees.
** ISIC Code (Rev. 3.1): 28: Manufacture of fabricated metal products, except machinery and equipment; 29: Manufacture of machinery and equipment not classified elsewhere; 30: Manufacture of office, accounting, and computing machinery; 31: Manufacture of electrical machinery and apparatus not classified elsewhere; 32: Manufacture of radio, television, and communication equipment and apparatus; 33: Manufacture of medical, precision, and optical instruments, watches and clocks; 34: Manufacture of motor vehicles, trailers, and semi-trailers; 35: Manufacture of other transport equipment
Since we were interested in the impact of global distribution on delivery, we designed a latent variable based on the increase of delivery speed and delivery reliability in the last three years (Cronbach's alpha 0.791). In order to measure SCM investments we defined a latent variable based on two items that were available in the questionnaire. Companies were asked to provide information regarding the degree of use, in the last three years, of the following action programs:
• rethinking and restructuring distribution strategy in order to change the level of intermediation (e.g. using direct selling, demand aggregators, multi-echelon chains);
• increasing the level of coordination of planning decisions and flow of goods with customers, including dedicated investments (e.g. information systems, dedicated capacity/tools/equipment, dedicated workforce).
The degree of adoption of SCM investments was measured on a 1–5 Likert scale where 1 represents "no use" and 5 represents "high adoption". Delivery performance variables were measured on a 1–5 Likert scale where 1 represents deterioration and 5 represents significant improvement. Cronbach's alpha is 0.750 (higher than 0.6) and the factor loads are equal to 0.894 (above 0.6), indicating that reliability is adequate. We considered five contingency variables: size; product and production complexity; market and process uncertainty; position of the decoupling point; and position in the supply chain. For each contingency we defined specific measures based on the IMSS questionnaire (see the Appendix for the actual questions) and defined two groups. For example, for company size we defined a group of small companies (below 250 employees) and another one made of large companies (above 250 employees). Table 2 provides details on the different groups.
Fig. 1 Structural model. Squares are observed variables, ovals latent variables. +/− is the expected impact of one variable on the other. Thin and dotted arrows represent measurement weights (factors) while bold arrows are structural weights (along two paths)
Table 2 Group definitions for the different contingent factors

Contingent factor                   Group 1                                       Group 2
Size                                Small: below 250 employees                    Large: above 250 employees
Product and production complexity   Simple: complexity index* <= 3                Complex: complexity index* > 3
Market and process uncertainty      Stable: uncertainty index** <= 3              Uncertain: uncertainty index** > 3
Decoupling point                    ETO/MTO: production mainly based on           ATO/MTS: production mainly based on
                                    engineer or make to order                     assembly to order or make to stock
Position in the supply chain        Upstream: customers are mainly                Downstream: customers are mainly
                                    other manufacturers                           distributors or end users

* Based on an average of the following 1–5 Likert-scale items: type of product design (modular or integrated), type of product (component or finished product), number of parts/components, number of production phases (Cronbach's alpha = 0.72, factor loads above 0.56).
** Based on an average of the following 1–5 Likert-scale items: change rate in logistic processes and production processes, products obsolescence rate, frequency of new product introduction (Cronbach's alpha = 0.65, factor loads above 0.6).
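A minimal sketch (not the authors' code) of how the Table 2 groups could be derived from the survey items; the thresholds follow the table footnotes, while the file and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("imss5_plants.csv")  # hypothetical data file

complexity_items  = ["design_modularity", "product_type", "n_components", "n_phases"]
uncertainty_items = ["logistics_change", "process_change", "obsolescence_rate", "npi_frequency"]

df["complexity_index"]  = df[complexity_items].mean(axis=1)
df["uncertainty_index"] = df[uncertainty_items].mean(axis=1)

df["size_group"]        = (df["employees"] > 250).map({False: "Small", True: "Large"})
df["complexity_group"]  = (df["complexity_index"] > 3).map({False: "Simple", True: "Complex"})
df["uncertainty_group"] = (df["uncertainty_index"] > 3).map({False: "Stable", True: "Uncertain"})

print(df[["size_group", "complexity_group", "uncertainty_group"]].value_counts())
```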
From left to right, we can identify two paths from globalization investment to delivery performance. Both start from globalization investment (GI), which is the effort companies have put into globalizing sales and distribution in the last three years. Following Path 1 we find the globalization level (GL), which measures the percentage of sales outside the continent where the plant is based. We expect a positive relationship between these two variables (GI and GL). Finally we have delivery performance (DP). As detailed before, this is another latent variable, measured by the increase/decrease of delivery speed and reliability in the last three years. We expect a negative relationship between GL and DP, as the higher the level of globalization, the worse the expected delivery performance. Following Path 2, we start from GI and then find SCM investment (SCMI), which is the effort made in the last three years in SCM investments. This is a latent variable measured by the two SCM variables described before. We expect a positive relationship with GI, as investment in globalization usually needs SCM investments as a support. Finally, SCMI should have a positive impact on delivery performance (DP).
Before running the model, we assessed the impact of the contingencies on the model variables (GI, GL, SCMI, DP) by measuring differences between groups through independent-sample t-tests (Table 3). Several differences can be found when the different contingency variables are considered; the only exception is SC position, which is not associated with any difference.

Table 3 Average values for different groups for the model variables (in the original table, values in bold identify a significant difference among groups with sig. < 0.05)

Variable   Sample    Size            Complexity         Uncertainty          Decoupling point     SC position
           average   Small   Large   Simple   Complex   Stable   Uncertain   ETO/MTO   ATO/MTS    Upstream   Downstream
GI         3.1       2.9     3.3     3.2      3.1       3.3      3.0         3.3       3.0        3.1        3.1
GL         15.9      12.2    19.6    18.3     13.1      15.6     16.1        17.8      14.8       15.4       16.3
SCMI       2.5       2.3     2.7     2.6      2.4       2.9      2.2         2.6       2.5        2.5        2.5
DP         3.3       3.2     3.3     3.3      3.3       3.4      3.1         3.3       3.2        3.3        3.2
After that, we ran the model on the whole sample and found that the model holds (Table 4 provides a summary of the model fit) and that the hypothesized relationships are correct (see Table 6, default model). Next, we performed a multiple group analysis on the original model to assess differences between groups in the structural weights – the linkages among the main variables. We adopted a procedure similar to the one described in Arbuckle (2005), Cook et al (2006), and Tausch et al (2007). First of all, we had to check whether the latent factor structure holds for Group 1 and Group 2 of each contingent factor. To do this, we ran our model separately on the data of Group 1 and Group 2, but kept an equality constraint on measurement weights and intercepts between the two groups. We repeated the procedure for the different contingency factors, checking model fit (see Table 5). The fit is always good except for the supply chain position model, which is rejected (even though NFI and RMSEA are acceptable). This means that for all the other models, the measurements (or factors) hold for the different groups.

Table 4 Model fit statistics for the overall model

Model           chi-square   df   p       NFI*    RMSEA**
Default model   7.11         6    0.311   0.989   0.020

* NFI: normed fit index (good above 0.95). ** RMSEA: root mean squared error of approximation (good below 0.05).
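For readers who want to experiment with a comparable specification, the following is a minimal sketch of such a two-path structural model. It is not the authors' estimation (which, as the Arbuckle (2005) reference suggests, followed an AMOS-style multiple-group procedure); it assumes the semopy package, and the data file and observed-variable names are hypothetical.

```python
import pandas as pd
import semopy

# Hypothetical columns: gi (investment in globalization of sales), gl (% of sales
# outside the continent), scm_distr / scm_coord (the two SCM action programmes),
# d_speed / d_rel (change in delivery speed and reliability).
# Measurement part: SCMI and DP as latent variables; structural part:
# Path 1 (gi -> gl -> DP) and Path 2 (gi -> SCMI -> DP).
model_desc = """
SCMI =~ scm_distr + scm_coord
DP =~ d_speed + d_rel
gl ~ gi
SCMI ~ gi
DP ~ gl + SCMI
"""

df = pd.read_csv("imss5_plants.csv")          # hypothetical data file
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect(std_est=True))            # standardized path coefficients
print(semopy.calc_stats(model))               # fit indices (chi-square, NFI, RMSEA, etc.)
```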
Table 5 Model fit for the models considering contingencies (these models are constrained on measurement weights and intercepts)

Model              chi-square   df   p       NFI*    RMSEA**
Size               17.07        18   0.518   0.978   0.000
Complexity         13.98        18   0.730   0.985   0.000
Uncertainty        23.85        18   0.160   0.977   0.026
Decoupling point   19.99        18   0.334   0.974   0.016
SC position        35.31        18   0.009   0.983   0.046

* NFI: normed fit index (good above 0.95). ** RMSEA: root mean squared error of approximation (good below 0.05).
Using the constrained models, we finally compared the regression coefficients between groups using critical ratios to establish significant differences. Table 6 summarizes these results; for the reader's convenience we also report the average values already shown in Table 3.

Table 6 Averages and standardized regression coefficients for the two paths for the default and the contingent models (in the original table, underlined values are not different from the default model, values in italic are different from the default model, and values in bold identify a significant difference among groups)

                           Path 1: GI -> GL -> DP                    Path 2: GI -> SCMI -> DP
Model (group)              GI    GI->GL    GL     GL->DP     DP      GI    GI->SCMI   SCMI   SCMI->DP   DP
Default model              3.1   0.338**   15.9   -0.177**   3.3     3.1   0.326**    2.5    0.518*     3.3
Size (Small)               2.9   0.318**   12.2   -0.138     3.2     2.9   0.237*     2.3    0.614*     3.2
Size (Large)               3.3   0.323**   19.6   -0.202*    3.3     3.3   0.367**    2.7    0.468*     3.3
Complexity (Complex)       3.1   0.324**   13.1   -0.228**   3.3     3.1   0.306**    2.4    0.616*     3.3
Complexity (Simple)        3.2   0.347**   18.3   -0.089     3.3     3.2   0.373**    2.6    0.374*     3.3
Uncertainty (Uncertain)    3.0   0.354**   16.1   -0.198*    3.1     3.0   0.425**    2.2    0.566**    3.1
Uncertainty (Stable)       3.3   0.34**    15.6   -0.163*    3.4     3.3   0.207      2.9    0.459      3.4
Decoupling (ATO/MTS)       3.0   0.35**    14.8   -0.332**   3.2     3.0   0.412**    2.5    0.700**    3.2
Decoupling (ETO/MTO)       3.3   0.347**   17.8   -0.11      3.3     3.3   0.315**    2.6    0.459*     3.3
SC position (Upstream)     3.1   0.254**   15.4   -0.093     3.3     3.1   0.356**    2.5    0.527*     3.3
SC position (Downstream)   3.1   0.389**   16.3   -0.247**   3.2     3.1   0.34**     2.5    0.555*     3.2

GI: investment in globalization of sales; GL: level of globalization of sales; DP: delivery performance; SCMI: investments in the supply chain. **: sig. < 0.01; *: sig. < 0.05
Finally we analyzed the total effect of globalization investment (GI) on delivery performance (DP) as the contribution of Path 1 and Path 2 for the default and the models considering contingencies (Table 7).

Table 7 Total standardized effect of globalization investment on delivery performance

Model              Group        Total effect of GI on DP
Default            –            0.109
Size               Small        0.107
                   Large        0.102
Complexity         Complex      0.109
                   Simple       0.115
Uncertainty        Uncertain    0.039
                   Stable       0.170
Decoupling point   ATO/MTS      0.106
                   ETO/MTO      0.172
SC position        Upstream     0.164
                   Downstream   0.093
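As an illustration of how Table 7 relates to Table 6 for the default model, the total standardized effect is the sum of the two path products: 0.338 × (−0.177) + 0.326 × 0.518 ≈ −0.060 + 0.169 ≈ 0.109, which is the value reported in the first row of Table 7.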
4 Results

Looking at Tables 6 and 7, we can draw several results. First of all, we can see that the default model is significant, both in terms of fit and in terms of the significance of the relationships: investments in global sales increase the level of globalization, which is associated with worse delivery performance (Path 1). On the other side, these investments trigger investments in the supply chain that improve delivery performance (Path 2). Since the impact of SCM on delivery performance is stronger than that of globalization, the total effect of investment in globalization on delivery performance is positive. Thus, overall, companies that have invested in the globalization of distribution and sales still have a competitive delivery performance. These relationships, however, are influenced by some of the specific contingencies we considered. Size seems to have a significant impact. First of all, larger companies tend to invest more in globalization (GI) and SCM, and thus they have a higher level of globalization (GL). Moreover, for larger companies the relationship between GI and SCM is stronger than for smaller ones. Interestingly, however, there is no evidence that for smaller companies the globalization level (GL) affects delivery performance (DP): the relationship between GL and DP is not significant when smaller companies are taken into account. When larger companies are considered, on the contrary, the impact on DP is significantly higher than the average (−2.020). Because of that, for small and large companies the total effect of GI on DP is similar. Complexity seems to affect only the globalization level: the higher the complexity, the lower the degree of globalization. Another interesting effect is that companies dealing with low complexity do not have a significant linkage between GL and DP, and their linkage between SCMI and DP is below average. On the other side, a high-complexity context implies a stronger negative effect of GL on DP, but a higher effectiveness of SCMI on DP. We can summarize these findings by saying that companies operating in a more complex environment suffer more from globalization, but they do not invest more in SCM than the others, partially because their investment appears to be more effective. Because of that, the total effect of GI on DP is almost the same for companies characterized by high and low complexity. Uncertainty plays a significant role as well. First of all, companies operating in stable environments tend to invest more in globalization and SCM, and they are able to obtain more improvements in delivery performance. However, their globalization level is not higher than that of companies operating in uncertain contexts: this probably means that these companies have started to globalize only recently compared to companies that face a more uncertain environment. Moreover, for these companies Path 2 does not hold: there is no linkage between GI, SCMI and DP. It seems that companies operating in a stable environment do not need to moderate the negative impact of globalization through SCM, and their investments are not aimed at delivery performance improvement. In fact, if we look at the total effect, these companies can reach higher delivery performance even with a higher level of globalization. On the other side, companies in uncertain environments tend to invest more
in SCM when they globalize, and their investment is quite effective. Nevertheless, the improvement in delivery performance is very marginal. The position of the decoupling point also has a significant role. Companies adopting ETO/MTO models tend to invest more in globalization, and their level of globalization is slightly higher (even if not significantly so). However, globalization does not affect their delivery performance, but they invest in SCM anyway, thus improving their delivery performance. Because of that, when globalizing they are able to significantly improve their performance compared to the rest of the cases (see Table 7, total effect). On the other hand, ATO/MTS companies show a strong negative effect of globalization on performance. Because of that, their SCM investment is focused on delivery performance improvement, and that is why the linkage is so strong (0.700). Thanks to this, they are able to moderate the negative effect of globalization and keep performance aligned with the rest of the sample (Table 7). Finally, SC position does not contribute much; it seems only that upstream companies experience no negative effect from globalization and, because of that, are able to achieve higher delivery performance even when they globalize.
5 Discussion

The previous results provide evidence of a rather complex relationship between the considered variables, i.e. globalization investment, supply chain investment and delivery performance. In particular, the conducted analyses show that companies can compensate for the negative impacts of globalization by investing in supply chain management and, in the end, keep performance under control. This phenomenon, however, is significantly influenced by the context in which companies operate.

First of all, companies operating in different contexts do not declare significantly different performance: the only exception concerns uncertainty, where companies operating in stable environments are capable of achieving better results. In the other situations, however, there is no significant difference. Quite interestingly, the considered contingencies highlight several differences in the way in which this performance is achieved.

First of all, companies operating in different contexts tend to extend their sales globally in different ways. The companies that invest more in globalization are larger firms, operating in stable contexts and with highly customized production systems. Quite interestingly, however, these differences are not completely straightforward when the actual level of globalization is considered: the globalization level is higher for larger firms and for companies operating in less complex environments. We can argue from this result that globalization of sales is an ever-growing phenomenon, even if companies operating in different contexts have invested differently, mainly in terms of timing: larger companies operating in stable environments have already invested quite significantly in the globalization of their sales network, while companies operating in uncertain contexts, where customization is important, are now investing more significantly.
Similarly, investments in the supply chain differ according to the specific group of companies considered. Larger companies operating in more stable environments declare higher investments in the supply chain. Secondly, the relationships between the considered variables change according to the specific group of companies considered. In particular, larger companies tend to invest more in the supply chain when globalization increases; this is also true for companies operating in uncertain contexts, where globalization can be critical. The position of the decoupling point has a significant impact on the relationships of both globalization and supply chain investments with performance. This result is explained by considering that companies operating in ATO and MTS contexts can find it more difficult to keep delivery performance under control on a global scale, due for example to a higher complexity of transportation systems. For the same reason, companies operating in ATO and MTS contexts also show greater benefits from supply chain investments. In the end, our analyses provide significant evidence supporting the existence of a strong contingent impact on global supply chain management practices.
6 Conclusions

In this work we have analyzed the relationships between globalization of sales, investments in the supply chain and delivery performance. Our results provide evidence of the complex interrelations between these variables and of the importance of the context in which companies operate in explaining these relationships. This work contributes to current knowledge by showing how companies can limit the potential pitfalls of globalization by leveraging the supply chain; specifically, this approach has been verified as significant in different contexts and for different companies. We argue that this work also contributes to the debate on globalization as a growing phenomenon, since we provide evidence that, even if on average companies are still investing in the globalization of their sales network, this investment differs according to different contextual variables.

Finally, we would like to highlight some of the limitations of this work. First, we based our analyses on a specific sample of companies. Even if the IMSS questionnaire is a consolidated one and has been tested several times, replication of this analysis with different samples would be important to check our conclusions. Moreover, we could go deeper in the analysis by considering the effect of specific SCM investments and by refining the contingency analysis: for example, complexity could be split into product and process complexity. Finally, we could consider the inventory level of finished products as a relevant control factor. These are all feasible further improvements of this work towards a clearer perspective on such a complex phenomenon.

Acknowledgements Partial funding for this research has been provided by the PRIN 2007 fund “La gestione del rischio operativo nella supply chain dei beni di largo consumo”.
Appendix

Complexity
B2. How would you describe the complexity of the dominant activity? (each item rated on a 1–5 scale)
• Modular product design (1) – Integrated product design (5)
• Single manufactured components (1) – Finished assembled products (5)
• Very few parts/materials, one-line bill of material (1) – Many parts/materials, complex bill of material (5)
• Very few steps/operations required (1) – Many steps/operations required (5)

Uncertainty
A3. Please indicate what characterizes technological change in your business (each item rated on a 1–5 scale):
• Logistic processes change: Slowly (1) – Rapidly (5)
• Core production processes change: Slowly (1) – Rapidly (5)
• Products become obsolete: Hardly ever (1) – Frequently (5)
• New products are introduced: Hardly ever (1) – Frequently (5)

Position of the decoupling point
B9. What proportion of your customer orders are (percentages should add up to 100%):
• Designed/engineered to order __ % • Manufactured to order __ % • Assembled to order __ % • Produced to stock __ % • Total 100 %

SC position
SC4. Indicate the percentage of sales in the following categories of customers (your answers should add up to 100%):
• Manufacturers of subassemblies __ % • Manufacturers of finished products __ % • Wholesalers/distributors __ % • End users __ % • Total 100 %
In-Transit Distribution Strategy: Hope for European Factories? Per Hilletofth, Frida Claesson and Olli-Pekka Hilmola
Abstract In this research the in-transit distribution strategy is investigated by determining and analyzing key principles of the strategy. It is examined through a multiple case study and simulation. This research reveals that the in-transit distribution strategy is about considering goods that are being transported as a mobile inventory and actively dispatching goods to a destination where there is a predicted demand before any customer orders are received. It can give major competitive advantages by offering rather short lead-times to customers without having to store products locally. This, in turn, gives lower warehousing costs, lower tied-up capital, less interrupted manufacturing, and steady as well as continuous production volumes. It is a workable solution for European manufacturers competing in distant markets. To be successful with this strategy, it takes good planning, working closely with customers, first-class market knowledge, and a supporting enterprise resource planning (ERP) system. Other highlighted requirements are low variation in demand and a predictable distribution lead-time. A simulation study of one hypothetical product group verified the case study findings, but we find it noteworthy that the overall results are especially sensitive to variance in manufacturing output. Increasing average customer demand also leads to undesired outcomes.
Per Hilletofth (B) and Frida Claesson
School of Technology and Society, University of Skövde, 541 28 Skövde, Sweden, e-mail: [email protected]
Frida Claesson, e-mail: [email protected]
Olli-Pekka Hilmola
Lappeenranta University of Technology, Prikaatintie 9, 45100 Kouvola, Finland, e-mail: [email protected]
1 Introduction

Nowadays most supply chains are international, which means that raw materials and products are procured from all over the world, transformed into new products in specific regions and then sold on a global market (Christopher et al, 2006). From a logistics point of view, globalization started with the use of so-called focused factories. The idea behind focused factories is that each factory produces only a limited range of products for the whole global market (Skinner, 1974). Lately, globalization has also led to increased competition, which can be deduced from increased product ranges, shorter product life cycles and increased customization (Christopher et al, 2004). Since suppliers still want to achieve economies of scale in production by the use of focused factories, while customers at the same time demand custom-made products and shorter lead-times, distribution becomes a key factor for success (Cohen and Lee, 1990; Fites, 1996; Waters, 2006). Distribution is executed through the transportation process, which has recently been of interest in global manufacturing research, e.g. concerning ever larger manufacturing companies, transportation delays, accuracy and new emerging market opportunities (Tyworth and Zeng, 1998; Wilson, 2007; Don Taylor et al, 2008; Ivanova and Hilmola, 2009).

In the logistics literature there are traditionally two main strategies for distributing products to the market: the centralized distribution strategy and the decentralized distribution strategy (Muckstadt and Thomas, 1983; Olhager, 2000). The advantage of using the centralized distribution strategy is that it usually leads to a higher service level at a lower cost; the disadvantage is that customers may have to wait longer for their products (Jonsson, 2008). The decentralized distribution strategy often leads to shorter lead-times and a higher flexibility, the disadvantage being that products may have to be stored in several places, which generates higher warehousing costs (Jonsson, 2008). Depending on how many central warehouses there are in the global distribution system, a change from distribution via local warehouses to distribution via central warehouses might have a considerable effect on the lead-time, but could still be interesting from a cost and tied-up capital point of view. In markets where customers are used to, and may also be demanding, short lead-times, conflicts often arise. When these conflicts occur it would be advantageous to have an alternative to the two main strategies that combines their advantages, i.e. an alternative that creates shorter lead-times than the centralized distribution strategy and gives lower costs and tied-up capital than the decentralized distribution strategy. In recent times the question has arisen whether the in-transit distribution strategy might be exactly this alternative.

The in-transit distribution strategy means that products or goods that are being transported are seen as mobile inventory and companies work actively with dispatching goods to a destination where they predict a demand before any customer order is received (Mangan et al, 2008). There are many complications with the in-transit distribution strategy that companies have to handle in order to become successful. One such complication is the case where no customer order has arrived before the goods reach their destination. The supplier must have a plan for storing the goods after they have reached their destination and while waiting for
a customer order. It is also important that the costs for this intermediate storing are not too high and the handling not too complex. Another important aspect to consider is whether the ERP system or information system used is sufficient to support the strategy. Among other things, the difficulty lies in keeping track of the goods and the quantities at the different locations, which complicates the calculation of a correct delivery date for the customer.

The purpose of this research is to describe the in-transit distribution strategy by determining and analyzing key principles of the strategy as well as by illustrating its application in practice. The specific research questions are: (1) “What kinds of advantages can the in-transit distribution strategy provide?”; (2) “When is the in-transit distribution strategy appropriate to use?”; and (3) “Can the in-transit distribution strategy be used as a complement in centralized/decentralized distribution structures?”. The applied research strategy has been a multiple case study and simulation approach, which corresponds well with the explorative and descriptive purpose of this research. Both case companies (for anonymity reasons, hereafter called Alfa and Beta) are Swedish manufacturers that act on a global market within the chemical industry. Empirical data has been collected from various sources to enhance understanding by examining the research object from several perspectives. Firstly, this study is based on data gained from in-depth interviews with persons representing senior and middle management in the case companies: the logistics manager, the transportation manager, and the IT manager. The interviews have been conducted based on the same interview structure during 2009. A system dynamics simulation model concerning the use of the in-transit distribution strategy in a one-product, one-factory and one-major-market setting has been developed and analyzed to further verify the case study findings.
2 In-Transit Distribution Strategy

There is no single distribution solution for all products, manufacturing units, markets, and acting companies (Don Taylor et al, 2008). It is important for companies to map out the needed prerequisites and thereafter choose a suitable strategy. How the traditional strategies, the centralized and the decentralized distribution strategy, work is already described in the existing literature (Olhager, 2000), as are the questions of why and when to use them (Jonsson, 2008). One area that is not thoroughly described in the literature is how these traditional issues relate to the in-transit distribution strategy, and there are few research papers on the in-transit distribution strategy.

The in-transit distribution strategy means that products or goods that are being transported are seen as mobile inventory and suppliers work actively with dispatching goods to a destination where they predict a demand before any customer order is received (Mangan et al, 2008). The supplier should transport the goods to a destination close to where the expected demand will arise in order to be able to send them quickly to the customer. When choosing a destination it is important to take into
consideration where the goods may be stored intermediately if the supplier has not received a customer order before the goods reach their destination.

Because the purpose of the in-transit distribution strategy is that goods are dispatched before a customer order is received, based on an expected demand, the flow of goods is planned exclusively by forecasts. Hence, efficient forecasting is essential in order to succeed with this strategy. To be able to distribute the goods to the customer in a cost-efficient way, the supplier has to avoid intermediate storing, and the need for this is determined by how well the forecasts correspond to reality. Uncertainty in demand is traditionally dealt with by using a safety stock, which is calculated by using the normal deviation between forecast and actual outcome (Jonsson, 2008). This mode of procedure does not work for the in-transit distribution strategy; consequently, the uncertainty has to be dealt with differently. Since the goods are already shipped when a negative deviation is observed, the supplier must have had foresight and included a safety quantity in the delivery in order to compensate for the deviation. In the in-transit distribution strategy even positive deviations are problematic, since they create a need for intermediate storing. This means that safety quantities can cause trouble for suppliers, and they have to find a proper balance between customer satisfaction and intermediate storing. As goods are typically supplied from a distant location when utilizing the in-transit distribution strategy, the demand in the marketplace should be stable with respect to quantity and regularity (Don Taylor et al, 2008).

The in-transit distribution strategy could offer a shorter lead-time than the centralized strategy and lower warehousing costs and tied-up capital than the decentralized strategy, and this is why it should be able to serve as a complement for certain types of products, customers and markets. In order to put the in-transit distribution strategy into practice in a capable way it is crucial to have an efficient information system able to tell where the goods are, where they are going, and when they will reach their destination. Any company using the in-transit distribution strategy has to be aware that it is considerably harder to keep track of where the goods are when using a global distribution system. This reasoning follows the present research in distribution, which emphasizes that information systems have become a key factor for success. Another disadvantage of the in-transit distribution strategy is that it requires good forecasting, good cooperation in the supply chain and substantial knowledge about the market in order to achieve time and place utility. If these demands are not fulfilled there is a major risk of the goods being transported unnecessarily long distances and ending up in markets where there is no demand at the time (Mason and Lalwani, 2008).

There are already models developed to aid the choice of distribution strategy. However, these models only deal with the centralized and the decentralized strategy. Harrison and van Hoek (2008) have developed a model that bases its choice on three factors: (1) short or long lead-time; (2) low or high deviation in demand; and (3) focus on distribution costs or focus on warehousing costs. These parameters are chosen because they have a great impact on the distribution.
Fig. 1 The choice of distribution strategy (adapted from Harrison and van Hoek (2008)). [Figure: a grid spanned by lead-time (short to long) and deviation in demand (low to high), with a cost dimension ranging from focus on distribution costs to focus on warehousing costs; it positions the decentralised, in-transit (dotted), centralised and strongly centralised distribution strategies.]
Since the in-transit distribution strategy is a kind of hybrid of the centralized and the decentralized strategy, it has been included in the model in Fig. 1. The in-transit distribution strategy is placed where customers demand relatively short lead-times and where the demand is stable. The strategy is not able to offer a very short lead-time in a cost-efficient way when there is a high deviation in demand, since there is a risk of having a lot of intermediately stored goods while, at the same time, multiple scarcities might occur. The strategy has not been placed as a replacement but as a complement; in order to emphasize this, the in-transit distribution strategy has been drawn in the figure with dotted lines.

The purpose of the elaborated model is to help companies find the distribution strategy, or the combination of strategies, that best takes a company’s preconditions for specific products, customers or markets into consideration. However, it is only a decision support, and the company might have to include other decision variables when choosing a distribution strategy. In order to make the model more useful, every parameter has been graded on a one-to-five scale. This facilitates companies’ choices by helping them make a table of their customer segments and product groups in which they estimate the value of each parameter. When utilizing the model, a table should thus be created that shows how the company rates the different parameters per customer segment and product group, with the starting point being what the customers demand and what the company prioritizes.
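As a rough illustration of how such a table might be used, the sketch below scores hypothetical customer segment/product group combinations on the three parameters and maps them to a suggested strategy. The segment data, scale values and cut-off rules are invented for illustration only; they are not prescribed by Harrison and van Hoek (2008) or by the case companies.

```python
# Hypothetical decision-support sketch: each segment/product group is rated 1-5 on
# (a) required lead-time (1 = very short, 5 = long),
# (b) deviation in demand (1 = low, 5 = high),
# (c) cost focus (1 = distribution costs, 5 = warehousing costs).
# The segments and cut-off rules below are illustrative assumptions only.

segments = {
    ("Large OEM, distant market", "High-volume standard"): {"lead_time": 2, "demand_dev": 2, "cost_focus": 4},
    ("Distributor, local market", "Standard product"):     {"lead_time": 2, "demand_dev": 1, "cost_focus": 3},
    ("Spot buyers", "Specialty product"):                  {"lead_time": 1, "demand_dev": 5, "cost_focus": 2},
}

def suggest_strategy(scores: dict) -> str:
    lead_time, deviation = scores["lead_time"], scores["demand_dev"]
    if lead_time <= 2 and deviation >= 4:
        return "decentralized distribution"
    if lead_time <= 2 and deviation <= 2:
        return "in-transit distribution (as a complement)"
    if lead_time >= 4:
        return "centralized / strongly centralized distribution"
    return "centralized distribution"

for (segment, product), scores in segments.items():
    print(f"{segment} / {product} -> {suggest_strategy(scores)}")
```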
3 Case Study Findings

3.1 European Manufacturer Alfa

Alfa is a multinational company that has built a global distribution system by the use of focused factories, central warehouses, and local warehouses. Primarily, the company uses the centralized and the decentralized distribution strategy to supply its customers with products. Lately, however, it has also started to use the in-transit distribution strategy to a limited extent.

The in-transit distribution strategy is today used in the People’s Republic of China, in this paper referred to as China, as a complement to the traditional methods. It is primarily used for large customers with a stable demand and for high-volume products. The aim of using the strategy is to be able to offer the customers short delivery times without having to store large volumes of goods locally. The in-transit distribution strategy is also used from a sales point of view, which means that the sellers in China, thanks to this strategy, are able to negotiate the price at a much later time. It also means that they can offer prices that are adjusted to conditions on the market as close to the delivery date as possible. This is a crucial success factor on the sellers’ markets, which consist of many competitors and where the customers mostly focus on cutting prices.

The in-transit distribution strategy works as follows: a plan for the expected demand is compiled using forecasts. The plan extends over three years and has a period length of one month. The forecasts and the plan are updated four times per year, when the sellers contact the customers in order to see what the expected demand will be for the coming months. The plan contains information on which products to send, how much to send, which destination the products have, and when they should reach their destination. To guard against uncertainty in demand, Alfa sends a larger quantity than the expected demand. They can do this because it is relatively easy to store the products that are not sold during the transport. The destinations are in this case ports, and the company tries to choose ports that are as close to the predicted demand as possible as well as close to local warehouses. The moment the goods are supposed to reach their destination port is calculated to be as close as possible to the moment the expected demand will occur.

Alfa sees the administration costs of the in-transit distribution strategy as a crucial area. The turnaround time is one hour more per order for the in-transit distribution strategy compared to an order under the centralized distribution strategy. This extra hour is a direct consequence of an insufficient information system that leads to a lot of unnecessary manual work.
3.2 European Manufacturer Beta

Beta is a multinational company that has built a global distribution system by the use of 40 production plants in combination with both central and local warehouses. The company primarily uses the centralized and the decentralized distribution strategy to supply its customers with products. However, just like Alfa, it has lately started to use the in-transit distribution strategy, but to a limited extent.

The in-transit distribution strategy is today used primarily for two customers in the United Kingdom (UK), instead of the traditional approach. These two customers buy large volumes of standard products. The purpose of using this strategy is to be able to offer the customers the same delivery times as if production and warehouses were located in the UK. Another reason was that it was difficult to keep the promised delivery times when using intermodal transports; the in-transit distribution strategy then became a way of reducing the risks and keeping delivery times.

The in-transit distribution strategy is based on a forecast through which the quantities to distribute are determined customer by customer. The forecasts cover one year ahead and are modified four times a year. The critical factor for when an order is placed is the customer’s historical order pattern in combination with the experience of the person placing the order. This person also has continuous contact with the sales person in the UK to get information about changes in the customers’ demand. The person placing orders decides how long before the predicted demand an order is supposed to be in port, but it is the carrier that decides which port to use. Since the demand varies, Beta needs to guard against uncertainty, which they do by always ordering a certain amount more than needed. This is made possible by being able to easily store the goods in the port.

Beta sees the administration costs of the in-transit distribution strategy as a crucial area, due to the fact that most of the administration is handled manually by one person, as there is a considerable lack of a sufficient information system. By not having a sufficient information system, Beta cannot follow up on how long the goods are stored in the port.
4 Simulation Findings

A system dynamics simulation model of the use of the in-transit distribution strategy in a one-product, one-factory and one-major-market setting has been developed to further verify the case study findings (Fig. 2). In the model, production is driven (left side of Fig. 2) by the end item inventory amounts; when the inventory position is below a certain amount, production is allowed to produce (modeled with a feedback relation). Release of produced end item inventory is governed by the parameter “in-transit decision”: as the end item inventory (production) reaches the needed level (e.g., 400 units), the “in-transit decision” (in the middle upper part of Fig. 2) checks against reorder point
Fig. 2 System dynamics simulation model for in-transit distribution strategy of one product to be supplied from one distant factory to one main market place
Fig. 3 Inventory amounts in foreign inventory when production is stable (35 units per week) and holds a somewhat larger capacity per week than end demand (Random Uniform, Min. 20, Max. 40, Mean 26.5 and st. dev. 2), and the transportation process has uncertainty in its delay (Random Normal, Min. 7, Max. 14, Mean 11 and st. dev. 2)
calculations from the “foreign inventory” (which take into account the standard deviation of demand and the transportation lead-time variation) whether more products should be sent to the distant market. The decision to send products is based on assumed transportation lead-time variation as well as demand variation; these assumptions are not connected to the real demand generated by the model. After the production end-item inventory has been authorized to send products to the long-distance market, the model contains a time delay for the overall transportation operation. In the case of Asian markets reached from Northern Europe, this delay is typically from 6–7 weeks up to 14.
After this, the transported production lot (approx. 400 units, depending on the situation and needs) arrives at the foreign inventory destination, from where products are distributed to customers based on demand. However, as demand always carries uncertainty, inventory holding is inevitable despite the efforts to minimize it with a smart “in-transit decision”.

Even though there are two main variance sources in the model (end demand and transportation time), it is rather easy to select parameter values (e.g., the production rate), and by incorporating both aspects of uncertainty in the reorder point, the performance becomes predictable. Figure 3 illustrates this situation, where the foreign inventory shows variation, but all of the customer demand is being met, and the inventory holding (and needed space) is predictable with 75 percent likelihood. The Monte Carlo simulation feature makes it possible to model and test such uncertainty in the system dynamics simulation model. However, possible caveats of the in-transit distribution strategy lie in the required low variation of production amounts in the home country as well as in the balance of production versus demand. Figure 4 illustrates the situation when the weekly production capacity is a bit higher than in the base situation (an average of 37 units per week) but contains a small amount of uncertainty. Over longer periods of time, the in-transit distribution strategy may then encounter supply problems. These problems are aggravated further if average demand increases a bit and production still contains the previously mentioned variation (Fig. 5). Of course, performance is good on average in these situations, but for a supply organization aiming at high customer service and delivery accuracy, the magnitude of uncertainty in deliveries might be too much (especially in the case of Fig. 5, but also Fig. 4).
Fig. 4 Inventory amounts in foreign inventory when production has low variation (uniform distribution, 35 to 39 units per week) but still holds a somewhat larger capacity per week than the uncertain end demand (same as before), and the transportation process has uncertainty in its delay (same as before)
Fig. 5 Inventory amounts in foreign inventory when production has low variation (uniform distribution, 35 to 39 units per week), but average customer demand increases only very slightly in the uncertainty function (from 26.5 units per week to 29), and the transportation process still has uncertainty in its delay (same as before)
It can be concluded that the in-transit distribution strategy is workable, but requires low variation in demand, predictable transportation lead-time and low variation of manufacturing output.
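To make these mechanics concrete, a minimal discrete-time sketch of the simulated logic is given below. It is not the authors’ system dynamics model; it only reproduces the basic behaviour described above (stable production feeding a home inventory, an “in-transit decision” based on a reorder point that combines demand and lead-time uncertainty, a stochastic transportation delay, and a foreign inventory serving uncertain demand). All parameter values loosely mirror the base case and are assumptions.

```python
import random

# Minimal sketch of the in-transit logic: production feeds a home inventory; when the
# foreign inventory position (on hand + in pipeline) drops below a reorder point that
# covers demand and transportation lead-time uncertainty, a lot is released and arrives
# after a random delay. All numbers are illustrative assumptions.
random.seed(1)

WEEKS, LOT = 200, 400
PRODUCTION = 35                      # units per week (stable base case)
D_MIN, D_MAX = 20, 40                # weekly demand, uniform
L_MIN, L_MAX = 7, 14                 # transportation delay in weeks
d_mean, d_sd = 30.0, 5.8             # rough moments of the uniform demand
l_mean, l_sd = 10.5, 2.0             # rough moments of the delay
Z = 1.65                             # ~95% service-level factor (assumption)

reorder_point = d_mean * l_mean + Z * (l_mean * d_sd**2 + d_mean**2 * l_sd**2) ** 0.5

home_inv, foreign_inv, lost = 0.0, reorder_point, 0.0
pipeline = []                        # list of (arrival_week, quantity)

for week in range(WEEKS):
    home_inv += PRODUCTION                                  # stable production
    foreign_inv += sum(q for t, q in pipeline if t == week) # lots arriving this week
    pipeline = [(t, q) for t, q in pipeline if t > week]

    position = foreign_inv + sum(q for _, q in pipeline)
    if position < reorder_point and home_inv >= LOT:        # "in-transit decision"
        home_inv -= LOT
        pipeline.append((week + random.randint(L_MIN, L_MAX), LOT))

    demand = random.uniform(D_MIN, D_MAX)
    served = min(demand, foreign_inv)
    foreign_inv -= served
    lost += demand - served

print(f"Reorder point: {reorder_point:.0f} units")
print(f"Final foreign inventory: {foreign_inv:.0f}, unmet demand: {lost:.0f} units")
```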
5 Discussion and Conclusions

The purpose of this research is to investigate the in-transit distribution strategy by determining and analyzing key principles of the strategy and by illustrating its application in practice. This research reveals that the in-transit distribution strategy is about considering goods that are being transported as a mobile inventory and actively dispatching goods to a destination where there is a predicted demand before any customer orders are received. This approach creates a competitive advantage by providing comparatively short lead-times without having to store the goods locally, which in turn gives lower warehousing costs and lower tied-up capital. Accordingly, the in-transit distribution strategy may be a good complement to the traditional distribution strategies for some categories of products, customers, or markets. This is especially true for European manufacturers competing in distant markets such as Asia and America.

The two involved case companies primarily use the centralized and the decentralized distribution strategy; however, they have started to use the in-transit distribution strategy to a limited extent. Alfa uses the strategy as a complement for large customers in China that have products with a stable demand and high volume, while Beta uses the strategy for two large-volume buying customers in the UK. Both case
companies use the in-transit distribution strategy because it generates competitive advantages by offering comparatively short delivery times without having to store large quantities locally, which in turn gives lower warehousing costs and lower tied-up capital. For one of the companies the strategy is also used from a sales perspective, which concerns the sellers’ ability to negotiate the price at a much later time. This means that they can offer prices that are adjusted to conditions on the markets as close to the delivery date as possible. This is a critical success factor on markets consisting of numerous competitors and where the customers above all focus on cutting prices.

One important issue that needs to be addressed to succeed with the in-transit distribution strategy is where to ship the goods. According to previous research on this subject, companies should transport the goods to a destination as close to the predicted demand as possible in order to be able to respond quickly to customer demand as orders arrive (Mangan et al, 2008). This is partly true for how the case companies operate. Alfa always tries to choose a destination port where they have large customers or local warehouses; this approach is also important for achieving short delivery times to customers when supplying from the intermediate storage. For Beta the choice of destination port is not complicated: they only use the in-transit distribution strategy for two customers and transport the goods to the ports closest to these customers. According to the earlier studies, it is also important to consider where to have the intermediate storing if no customer order is received before the goods reach their destination. Both case companies have devised their own solutions for handling the intermediate storing. Alfa deals with it by choosing their own destinations, in this case sea ports, close to local warehouses; if no customer order is received, products are temporarily stored at the local warehouse. Beta has primarily solved this problem by negotiating more free-of-charge days at the sea ports.

How to plan the flow of goods is another important issue (Harrison and van Hoek, 2008). According to the literature this is about creating as efficient forecasting as possible. This is somewhat true, and it is how the case companies currently work. Alfa creates a three-year plan and updates it by using a rolling forecast, which is compiled for each important customer; the forecast covers one year ahead and is modified four times per year. Beta creates one forecast per customer, which covers one year ahead and is modified four times per year. The forecast is, however, only used as a guideline, and the actual transported volume is decided by the person placing orders. He or she works together with the sales personnel and communicates with them continuously. The same way of thinking can be found at Alfa as well, but here it is the sales persons who update the rolling forecast. In order for the companies to use the in-transit distribution strategy to an increasing extent, it is reasonable to assume that the forecasting needs adjustment. It is important to constantly work on improving the forecasting methods, and the data analyzed, to decrease the uncertainty and thereby offer a higher level of service to the customers.

A final important issue that needs to be addressed to succeed with the in-transit distribution strategy is how to deal with demand uncertainty (Don Taylor et al, 2008).
According to the literature, only negative deviations can be dealt with, by
using a safety stock when shipping the goods. This is correct for both case companies. To guard against uncertainty in demand, both Alfa and Beta transport a larger volume than the predicted demand (to maintain a high service level). Nevertheless, it is important to make sure that the additional costs for a high service level do not exceed the additional revenues. According to the literature it is moreover important to take into consideration that positive deviations are not desirable either, since they create a need for intermediate storing; as a result, the safety stock might create problems. This is not something that the case companies have to deal with, since they are able to arrange intermediate storing of the goods fairly easily. It may be argued that intermediate storing is perceived as a natural part of the in-transit distribution strategy at both Alfa and Beta, since neither of them works actively on reducing it. This attitude may be explained by the use of the strategy being driven by the sales department rather than by procurement.

Both case companies experience the administration of the in-transit distribution strategy and the associated costs as a key area. For example, the larger part of the administration is handled manually in the companies, due to the lack of a sufficient information system. This has caused the companies not to follow up on important statistics. IT systems are important in order to make the most of the in-transit distribution strategy, and therefore this is something the companies should actively work on improving.

Simulation findings reveal that uncertainty of demand is a key issue of the in-transit distribution strategy. The strategy is only applicable in market environments where demand variation is low; in high-variation markets either customer satisfaction will be too low or the need for intermediate storing too high. Other requirements to succeed with the in-transit distribution strategy highlighted in the simulation are low variation of manufacturing output and a predictable transportation lead-time. The transportation delay between continents, especially from Europe to Asia, is considerable (5–10 weeks) and problematic, since the safety quantity mechanism is different: in essence, the safety quantity only covers variation in demand, since the predicted quantity and the safety quantity are shipped at the same time.

Interesting aspects for further research would be to continue the study of the involved case companies (particularly concerning the requirements of a predictable transportation lead-time and low variation of manufacturing output), to enlarge the study to include more case companies from other businesses, and to extend the simulation model.
References

Christopher M, Lowson R, Peck H (2004) Creating agile supply chains in the fashion industry. International Journal of Retail & Distribution Management 32(8):367–376
Christopher M, Peck H, Towill D (2006) A taxonomy for selecting global supply chain strategies. International Journal of Logistics Management 17(2):277–287
Cohen M, Lee H (1990) Out of touch with customer needs? Spare parts and after sales service. Sloan Management Review 31(2):55–66
Don Taylor G, Love D, Weaver M, Stone J (2008) Determining inventory service support levels in multi-national companies. International Journal of Production Economics 116(1):1–11
Fites D (1996) Make your dealers your partners. Harvard Business Review 74(2):84–95
Harrison A, van Hoek R (2008) Logistics management and strategy. Pearson Education Limited, Essex, UK
Ivanova O, Hilmola O (2009) Asian companies and distribution strategies for Russian markets: Case study. International Journal of Management and Enterprise Development 6(3):376–396
Jonsson P (2008) Logistics and Supply Chain Management. McGraw-Hill, London, UK
Mangan J, Lalwani C, Butcher T (2008) Global logistics and supply chain management. John Wiley & Sons, London, UK
Mason R, Lalwani C (2008) Mass customized distribution. International Journal of Production Economics 114:71–83
Muckstadt J, Thomas L (1983) Improving inventory productivity in multilevel distribution systems. In: Gautschi DA (ed) Productivity and Efficiency in Distribution Systems, Elsevier, New York, pp 169–182
Olhager J (2000) Produktionsekonomi. Studentlitteratur, Lund
Skinner W (1974) The focused factory: New approach to managing manufacturing sees our productivity crisis as the problem of how to compete. Harvard Business Review 52(3):113–121
Tyworth J, Zeng A (1998) Estimating the effects of carrier transit-time performance on logistics cost and service. Transportation Research Part A 32(2):89–97
Waters C (2006) Global Logistics: New directions in supply chain management. Kogan Page, London, UK
Wilson M (2007) The impact of transportation disruptions on supply chain performance. Transportation Research Part E 43(4):295–320
Effect of component interdependency on inventory allocation Yohanes Kristianto Nugroho, AHM Shamsuzzoha and Petri T. Helo
Abstract The objective of this research is to improve the responsiveness and agility of the supply chain network by considering the allocation of inventory control, especially the allocation of safety stock. This is achieved by considering the effect of component interdependencies and offering guaranteed lead times. An analytical model is presented in this paper, supported by a discrete event simulation model, in order to investigate the effect of material interdependency on the reduction of safety stock allocation. A case example from a lead acid battery manufacturing supply chain network is used to demonstrate the applicability of the models. The results, obtained by applying the design structure matrix (DSM), show that less material interdependency reduces the safety stock allocation significantly. The material interdependencies are reduced through a clustering operation. The results also show that the reduction of material interdependency reduces unnecessary investment in inventory management. The difference between the presented analytical model and the discrete event simulation is not significant, which also validates the proposed modeling approach.

Key words: inventory allocation, component interdependency, design structure matrix (DSM), supply chain management, safety stock
Yohanes Kristianto Nugroho (B), AHM Shamsuzzoha and Petri T. Helo
Department of Production, University of Vaasa, Finland, e-mail: [email protected]
AHM Shamsuzzoha, e-mail: [email protected]
Petri T. Helo, e-mail: [email protected]
1 Introduction

Inventory allocation is a major problem in supply chain management when it comes to decoupling the uncertainty among the various stockholding points and minimizing the total holding costs (Graves, 1999). The current literature on inventory allocation is mostly concentrated on the decoupling of safety stock through dynamic lot sizing (Graves et al, 2000; Neale and Willems, 2009; Kristianto and Helo, 2010). However, this strategy has a negative impact on the bullwhip effect as a result of order magnification. This discrepancy encourages supply chains to collaborate in their demand planning operations in order to minimize total holding costs and to promise lead times. Sometimes this effort cannot be accomplished if the supply chain is decentralized, where the role of one chain cannot be controlled by the other chains.

In addition to the inventory allocation problem, the effects of product development are mostly related to increasing design commonality as much as possible in order to reduce the inventory level and allow wider product proliferation (i.e. Collier, 1980, 1981; Martin and Ishii, 1996; Jiao and Tseng, 2000). Lee (1996) demonstrated the effect of time and form postponement, which are also supported by a commonality strategy. However, material or component interdependency is also important to consider in this discussion. Material interdependency can be defined as the degree to which the processing of one material depends on other materials or components. For instance, the positive plate separator envelope of a lead acid battery must consider the design of the positive plate, as both need to be assembled together within a battery container. If there are any misalignments within the design of the envelope, the battery cannot be assembled properly.

Considering the limitations of the current efforts for solving the inventory allocation problem, a commonality strategy is not sufficient to mitigate the negative effect of high component interdependency. Without reducing component interdependency, supply chain networks must be aware of the effect of supply uncertainty caused by supply quality and manufacturing difficulties (Evans, 1963, 1970). This is the reality that individual companies’ production departments face every day. Product manufacturability is often related to product and process redesign (Lee, 1996). In application, product design should consider how the product can be manufactured efficiently with minimum failures. Once this problem is overcome, it is easier to consider the postponement strategies, either form or time postponement (Bucklin, 1965; Pagh and Cooper, 1998). This will ensure the decoupling of the demand uncertainty and also promise reduced lead times with lower total inventory costs (Mikkola, 2007).

The study presented in this article extends the previous studies of inventory allocation by incorporating the effect of material interdependency. The objective of this study is to highlight the importance of the material management process, in terms of material design and supply chain, for inventory allocation. The main focus of this paper is an inventory allocation approach that makes it possible to keep the allocated inventory stock as small as possible by exerting additional focus on the material management (design) strategy, with the view to maximize the benefits of inventory allocation. The design structure matrix (DSM) tool or methodology, which is
a compact matrix representation of a design architecture, is also implemented in this research with the view to track the flow of information dependencies among design elements or project tasks. The DSM tool provides insights into a complex design structure and supports the clustering of components for developing modules; a small illustrative sketch of such a matrix is given below.

The remainder of this paper is arranged as follows. Section 2 explains the different analytical models in terms of the supply chain process, forecasting, production planning and the extension to component/material interdependency. Section 3 validates the analytical models with a case product and the DSM tool; in this section the results are simulated to compare the safety stock before and after reducing the component/material interdependency. Section 5 concludes this paper by discussing various aspects of the information behind the simulation results.
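The sketch below encodes a binary DSM for a few lead acid battery components and counts how many dependencies cross the boundary of a candidate module. The component list, the dependency entries and the chosen cluster are invented for illustration only; they do not reproduce the case data analyzed later in the paper.

```python
import numpy as np

# Illustrative binary design structure matrix (DSM): entry [i, j] = 1 means that
# component i depends on component j. Components and dependencies are hypothetical.
components = ["positive plate", "separator envelope", "negative plate", "container", "terminal"]
dsm = np.array([
    # pos.plate  envelope  neg.plate  container  terminal
    [0,          1,        0,         1,         0],   # positive plate
    [1,          0,        1,         1,         0],   # separator envelope
    [0,          1,        0,         1,         0],   # negative plate
    [0,          0,        0,         0,         1],   # container
    [0,          0,        0,         1,         0],   # terminal
])

# Candidate module (cluster): the plates and the envelope are grouped together;
# the container and the terminal stay outside the module.
module = {0, 1, 2}
inside = sum(dsm[i, j] for i in module for j in module)
outside = sum(dsm[i, j] for i in range(len(components)) if i not in module
              for j in range(len(components)) if j not in module)
crossing = int(dsm.sum()) - int(inside) - int(outside)

print(f"Total dependencies: {int(dsm.sum())}, inside module: {int(inside)}, crossing boundary: {crossing}")
```

Fewer boundary-crossing dependencies indicate a cleaner module, which is the intuition behind using DSM clustering to reduce material interdependency.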
2 Analytical models

The objective of this research is to extend Kristianto and Helo’s (K-H) modeling framework from stationary to non-stationary demand (Kristianto and Helo, 2010). For clarity and completeness, we restate the key assumptions from the K-H model and introduce some additional assumptions in order to allow the consideration of non-stationary demand. Finally, this research is concluded with a brief discussion of these additional assumptions in the context of safety stock placement for non-stationary demand. Potential readers are recommended to consult the K-H model in order to understand and justify the original set of assumptions.
2.1 Supply chain process

We solve the manufacturing process design by decomposing the multi-stage supply chain strategic inventory location model into N stages, where N is the number of activity points in the supply chain. We define a node (n) as any activity point in the supply chain (material receiving and inspection, work in process (WIP) or intermediate product processing, and final product processing). Thus, in each location, the supply chain has at least one activity point (i.e. material receiving or inspection for a warehouse), two activity points (i.e. material and final product processing for an assembly plant) or at most three points (i.e. a complete chemical factory). For each node (n), we define sn as the reorder point at which the upstream stage is asked to deliver material up to Sn(t + Ld(n−1)), where Ld(n−1) is the delivery lead time from the upstream stage (n − 1) product inventory qn−1(t + Ld(n−1)) to the stage (n) material inventory Im(n)(t + Ld(n)). Stage (n) needs to convert the material into product during the production time Lp(n) and finally deliver it to stage (n + 1) during the delivery time Ld(n). Thus the total promised lead time of stage (n), or D(t + Ln), comprises the material delivery time Ld(n−1), the production time Lp(n), and the delivery time Ld(n),
or Ln = Ld(n−1) + Lp(n) + Ld(n). Since the demand process is non-stationary, forecasting accuracy is as important as inventory control.
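A trivial sketch of this stage notation follows: each node carries an inbound delivery lead time, a production time and an outbound delivery lead time, and the promised lead time is their sum. The stage names and values are arbitrary examples, not the case data.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One node (n) of the supply chain with its lead-time components (in weeks)."""
    name: str
    inbound_delivery: float   # L_d(n-1): delivery from the upstream product inventory
    production: float         # L_p(n):   conversion of material into product
    outbound_delivery: float  # L_d(n):   delivery to the downstream stage

    @property
    def promised_lead_time(self) -> float:
        # L_n = L_d(n-1) + L_p(n) + L_d(n)
        return self.inbound_delivery + self.production + self.outbound_delivery

# Arbitrary illustrative values for a three-stage chain.
chain = [Stage("supplier", 1, 2, 1), Stage("assembly plant", 1, 1, 2), Stage("warehouse", 2, 0, 1)]
for stage in chain:
    print(f"{stage.name}: L_n = {stage.promised_lead_time} weeks")
```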
2.2 Forecasting

We use the exponential smoothing forecast model to estimate future demand. The reason is that the smoothing constant α of the forecast model represents the demand properties: if α = 0 the demand follows a stationary process with mean λ, otherwise it follows a non-stationary process. This interpretation comes from the ARIMA demand process (Box et al, 1994), in which the non-stationary demand process depends on α as a measure of the level of inertia in the process. This paper uses k-th order Erlang functions to determine the smoothing parameter α; these are characterized by two parameters, the average time spent in the period of sampling and the order of the period of sampling. The order of the period of sampling is stated as

k = (Ln / σd)^2.   (1)
Here, Ln is the average delivery lead time to meet demand and σd is the dispersion or standard deviation of the lead time. We use k = 1 to represent maximum dispersion. The average lead time needs to be considered since orders/customer demand do not arrive every day. Furthermore, by setting k = 1 we anticipate a high variance of the demand inter-arrival rate (Levén and Segerstedt, 2004). In relating k to α, we use the relationship that α reaches its peak (α = 1.0) as k approaches infinity, with k an integer. Conversely, when k takes its lowest value (k = 1) the value of α is 0.1, representing the most sluggish response. Thus we have the following relationship between α and k:
α = 0.1 + 0.9 × (1 − e^(−(k−1))).   (2)

Afterwards, the exponential smoothing forecast can be represented as

Fn(t + 1) = Fn(t) + α × (Dn(t) − Fn(t)),   (3)

where Fn(t) and Fn(t + 1) are the forecasting results for times t and (t + 1).
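To make the relationships in (1)-(3) concrete, a short numerical sketch is given below (Python; this is our illustration rather than part of the original model, and the function names and numerical values are assumptions chosen only for demonstration).

```python
import math

def erlang_order(mean_lead_time, lead_time_std):
    # Eq. (1): k = (Ln / sigma_d)^2
    return (mean_lead_time / lead_time_std) ** 2

def smoothing_constant(k):
    # Eq. (2): alpha = 0.1 + 0.9 * (1 - exp(-(k - 1)))
    return 0.1 + 0.9 * (1.0 - math.exp(-(k - 1)))

def exponential_smoothing(demand, alpha, f0):
    # Eq. (3): F(t+1) = F(t) + alpha * (D(t) - F(t))
    forecasts = [f0]
    for d in demand:
        forecasts.append(forecasts[-1] + alpha * (d - forecasts[-1]))
    return forecasts

# Illustrative values only: a lead time with maximum dispersion gives k = 1,
# i.e. the most sluggish smoothing constant alpha = 0.1.
k = erlang_order(mean_lead_time=5.0, lead_time_std=5.0)
alpha = smoothing_constant(k)
print(alpha, exponential_smoothing([200, 250, 275, 310], alpha, f0=200.0))
```

With k = 1 the forecast adapts only slowly, matching the sluggish response described above; larger k values drive α towards 1.0 and make the forecast track the non-stationary demand more closely.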
2.3 Production planning

We assume that in each period t, the observed demand from period t to t − Ld(n), or D(t − Ln(d), t), is used to make a demand forecast from period t + Ld(n) to
t + 2Ld(n), or Fn(t + Ln(d), t + 2Ln(d)). Stage (n) issues the production plan to convert material into final product from period t, or μn(t), to period t + Lp(n), where Lp(n) is the processing time, and then fills demand from inventory at the delivery rate qn(t + Ld(n)) = min(D(t), In(t)). Then we have the order lot size to stage (n − 1) as follows:

On(t, t + Ln(d)) = Fn(t + Ln(d), t + 2Ln(d)) − In(t), if Fn(t + Ln(d), t + 2Ln(d)) − In(t) > 0; 0, otherwise.   (4)

Equation (4) signifies that the order lot size to stage (n − 1) must cover demand during the lead time Ld(n) and that its size depends on the forecasting accuracy. In responding to dynamic lot sizes, the production rate μn is flexible and depends on Kn such that μn ≤ Kn, which makes the inventory adjustment dynamic as well. We use an automatic pipeline inventory and order based production control system (APIOBPCS). In short, APIOBPCS incorporates inventory level adjustment and the tracking of demand changes for controlling the production rate μn. Thus we use the inventory adjustment time (Ti), the work-in-process (WIP) inventory adjustment time (Tw) and the order lot size covering the delivery lead time On(t, t + Ln(d)) to control the production rate at any time μn(t) as

μn(t) = D(t) + ΔIn(t + Lp(n)) + ΔWIPn(t + LWIP(n))   (5)

ΔIn(t + Lp(n)) = (D(t) × Ld(n) − μn(t) × Lp(n)) / Ti   (6)

ΔWIPn(t + LWIP(n)) = (Imn(t) − μn(t) × Lp(n)) / Tw   (7)
We can see from (6) that additional product inventory is needed to cover backorders (D(t) × Ld(n) − μn(t) × Lp(n))+ by increasing the production rate μn(t), and vice versa. Similarly, the semi-finished product (WIP) inventory also needs to be adjusted by considering the material availability Imn(t) and the production rate μn(t), under the constraint that the resulting μn(t) cannot exceed Kn.
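A minimal discrete-time sketch of the APIOBPCS control rule in (5)-(7) is shown below (Python). To keep it explicit, the previous-period production rate is used on the right-hand side of (6) and (7); this, as well as all parameter values, is our own simplification for illustration rather than the formulation used in the original simulation model.

```python
def apiobpcs_production_rate(D_t, Im_t, mu_prev, Ld, Lp, Ti, Tw, Kn):
    """One review period of the APIOBPCS rule, cf. Eqs. (5)-(7)."""
    delta_inventory = (D_t * Ld - mu_prev * Lp) / Ti   # Eq. (6): product inventory adjustment
    delta_wip = (Im_t - mu_prev * Lp) / Tw             # Eq. (7): WIP adjustment from material on hand
    mu = D_t + delta_inventory + delta_wip             # Eq. (5): production rate
    return max(0.0, min(mu, Kn))                       # mu may never exceed the capacity Kn

# Illustrative placeholder values (demand of 250 units, capacity of 400 units per period)
print(apiobpcs_production_rate(D_t=250, Im_t=600, mu_prev=240,
                               Ld=2, Lp=3, Ti=4, Tw=8, Kn=400))
```

Larger values of Ti and Tw dampen the adjustment terms and hence the variability of the production rate, at the cost of a slower recovery of the inventory positions.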
2.4 Extension to the effect of component/material interdependency

This section extends the inventory allocation problem to supply interdependency. The reason is that the delivery tardiness probability is positively affected by material quality. To minimize this effect, most manufacturers usually set larger safety stocks. Material safety stock is calculated by considering three common components, namely demand uncertainty σn, delivery lead time Ln(d) and service level zn. Since most of the literature on inventory allocation considers the first two components (cf. Lee, 1996; Lee and Padmanabhan, 1997; Graves, 1999; Graves
et al, 2000; Graves and Willems, 2008; Neale and Willems, 2009; Kristianto and Helo, 2009, 2010), the service level has rarely been discussed. However, the implications of the operational service level are important to consider since it links inventory allocation to product development. To focus the discussion, let A1 and A2 be the failure probabilities for the materials at stages (n − 2) and (n − 1), respectively, with occurrences that are not mutually exclusive. Thus the probability of creating delivery tardiness due to a failure during testing or manufacturing of two coupled materials is pf(2) = A1 + A2 − (A1 × A2), with An = 1 − zn. Further, for N coupled materials we have a failure probability of

pf(n) = Σ(x = n−1, …, 1) A(n−x) − (A(n−1) ∩ A(n−2) ∩ … ∩ A(1)).   (8)
The first component of (8) represents the probability of failure due to one of the N coupled materials. The second component represents the joint probability of two or more components failing together. We use this formulation since, in a non-mutually exclusive event, material failures can occur together, so (8) shows that even a single failure among the coupled materials can break down the total product manufacturing. We can pool the risk of this failure by clustering the coupled operations and supplying them from the same source in order to minimize pf(n). Clustering some materials into one module benefits the assembly operation by reducing testing and manufacturing time: a smaller number of module interfaces, compared with individual components, means fewer testing items and thus a shorter manufacturing time. In addition, fewer testing items reduce the probability of a failure during testing and manufacturing. Then (8) changes into the joint probability of failure for a non-mutually exclusive event as follows:

pmf(n) = max(A(n−1), A(n−2), …, A(1)).   (9)

Comparing (8) and (9), the failure probability is reduced from pf(n) to pmf(n) by

pf(n) − pmf(n) = Σ(x = n−1, …, 1) A(n−x) − max(A(n−1), A(n−2), …, A(1)) − (A(n−1) ∩ A(n−2) ∩ … ∩ A(1)).   (10)
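The saving in (10) can be illustrated with a small numerical sketch (Python). For simplicity the joint term A(n−1) ∩ … ∩ A(1) is evaluated as the product of the individual probabilities, i.e. assuming independent failures; this independence assumption and the service levels used are ours and serve only as an illustration.

```python
from functools import reduce
import operator

def unclustered_failure_prob(A):
    # Eq. (8): sum of individual failure probabilities minus the joint term
    joint = reduce(operator.mul, A, 1.0)   # independence assumed for the joint term
    return sum(A) - joint

def clustered_failure_prob(A):
    # Eq. (9): after clustering into one module, the maximum individual probability
    return max(A)

# A_i = 1 - z_i, the complement of each material's service level.
# Illustrative service levels of 95% and 90% for two coupled materials.
A = [1 - 0.95, 1 - 0.90]
pf = unclustered_failure_prob(A)    # 0.05 + 0.10 - 0.05 * 0.10 = 0.145
pmf = clustered_failure_prob(A)     # 0.10
print(pf, pmf, pf - pmf)            # the difference corresponds to Eq. (10)
```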
Furthermore, reducing the value of (A(n−1) ∩ A(n−2) ∩ … ∩ A(1)) supports the risk pooling effort by minimizing the number of interactions (i.e. shared functionality, interfaces, etc.) among the materials within a module. We have implemented the design structure matrix (DSM) methodology to minimize these interactions, which can be explained as follows. Figure 1a displays nine components of a product, namely A, B, C, D, E, F, G, H, I, and their interdependencies. For instance, component C needs information from component B, while components A and H need information from component C to be completed.
Fig. 1 (a) Component interdependencies graph, (b) DSM representation of the component interdependencies graph (unclustered)
This is presented in matrix format in Fig. 1b, together with all other information exchanges or interdependencies among the components. In order to reduce the iteration time, marks above the diagonal need to be brought as close to the diagonal as possible. This is done by clustering, where the rows and corresponding columns are rearranged to obtain clusters or modules. Figure 2 shows two overlapping clusters.
Fig. 2 DSM representation of the component interdependencies (clustered)
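The clustering step itself, i.e. the simultaneous rearrangement of rows and columns, can be sketched as follows (Python with NumPy). The dependency pattern and the ordering below are illustrative assumptions and do not reproduce the exact matrices of Figs. 1 and 2; in practice the permutation would be produced by a DSM clustering algorithm or tool.

```python
import numpy as np

components = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]
# dsm[i, j] = 1 means component i needs information from component j
# (illustrative dependencies only).
dsm = np.zeros((9, 9), dtype=int)
deps = {"C": ["B"], "A": ["C"], "H": ["C"], "F": ["A"], "I": ["F"], "E": ["D"]}
for child, parents in deps.items():
    for parent in parents:
        dsm[components.index(child), components.index(parent)] = 1

def reorder(dsm, components, order):
    """Rearrange rows and the corresponding columns so that strongly
    interacting components end up next to each other (clusters/modules)."""
    idx = [components.index(c) for c in order]
    return dsm[np.ix_(idx, idx)]

clustered = reorder(dsm, components, ["B", "C", "A", "H", "F", "I", "D", "E", "G"])
print(clustered)
```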
3 Model validation

To test validity, the model is simulated using the academic version of the GoldSim simulation software, and we observe the difference between the analytical calculation and the dynamic simulation. We investigate whether those differences
are magnified as demand volatility increases. We use the example of a lead acid battery assembly to represent the need for component clustering. The bill-of-materials (BOM) of the lead acid battery is presented in Fig. 3. In practice, lead acid battery manufacturing represents a real supply chain since one chain does not cover the complete processes (e.g. negative and positive plate and envelope separator manufacturing). All of those chains are decentralized, and the battery assembly receives all materials from different suppliers. The manufacturing process starts from the plate assembly (negative and positive plates, envelope separator) using the post strap, before delivery to the final assembly where all other elements are joined together in a container. We measure the effect of component clustering on reducing the interdependency, with a view to reducing inventory, by utilizing the modeling approach of product and process design shown in Fig. 4.
Fig. 3 BOM of the lead acid battery (components: vent plug, cover, negative plate, container, envelope separator, plate lugs, positive plate, partition connector, element rests, post strap)
In Fig. 5, by implementing the DSM tool, we display in matrix format the interdependencies of the components presented in Fig. 3. In this figure the marks '1' signify the dependencies of each component on the others. All components in the matrix are clustered by changing the position of the corresponding columns and rows in order to form the required clusters or modules. The resulting clustered matrix is displayed in Fig. 6. We can observe from Fig. 6 that, after clustering the original component interdependencies of Fig. 5, two modules are formed. The first module consists of the components 'envelope separator' and 'positive plate', while the second module consists of the components 'vent plug' and 'container'. In reality these four materials are sourced separately, which frequently creates failures in the assembly process. Figure 6 also shows that material clustering reduces the number of iterations or feedback loops from nine to two, which saves time in the assembly process. The clustering process not only reduces the lead time by minimizing the number of iterations and the assembly time; the reduced interdependency among components is also beneficial to the total inventory allocation. The final simulated results for module 1, consisting of the components 'envelope separator' and 'positive plate', are presented in Table 1 using (8) and (10).
Fig. 4 Overview of product and process design modeling approach
The results presented in Table 1 compare the pure analytical model and the simulation model. In this comparison, we measure only the savings in safety stock as an example of applying the DSM methodology to inventory allocation. Table 1 shows that stage n (the battery assembly plant) and stage (n − 1) (the envelope separator and positive plate module) reduce the safety stock significantly and benefit from material clustering. We simulated this output considering a demand volatility of 20% in the stationary case and 30% in the non-stationary case, in order to represent high demand variance. We observed that at some events there is a significant difference between the dynamic simulation and the analytical modeling results. The source of this difference may be that the demand processes in the analytical model and in the dynamic simulation are different. The analytical model applies a general distribution in which the demand inter-arrival time and the process variance follow an Erlang distribution with a predetermined order k = 1, 2, …, k. The dynamic simulation, however, can recalculate the order k every review period to hedge against the randomness of the demand process. This review gives the benefit of treating the demand appropriately, whether it follows a non-stationary process with a random walk or is simply stationary with some level of demand dispersion.
Fig. 5 Lead acid battery original interdependency matrix (before clustering)
Fig. 6 Lead acid battery interdependency matrix using DSM (after clustering)
This difference suggests applying the analytical model within a dynamic simulation to obtain more realistic results. In addition to the benefit of reduced component interdependency, Table 1 also shows that the developed analytical and simulation models are capable of handling both high and low demand rates. We can see from Table 1 that there is no significant difference between the two models, either with or without the DSM application. This validates the models' capability to handle demand rates and supply uncertainty at low and high levels.
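The per-review-period recalculation of the Erlang order k described above can be sketched as follows (Python); it simply re-applies (1) and (2) to a rolling window of lead-time observations. The window length and the observations are illustrative assumptions, not data from the case.

```python
import math
import statistics

def smoothing_constant(k):
    # Eq. (2)
    return 0.1 + 0.9 * (1.0 - math.exp(-(k - 1)))

def adaptive_alpha(lead_times, window=5):
    """At every review period, recompute k = (mean / std)^2, cf. Eq. (1),
    from the most recent observations and map it to alpha."""
    alphas = []
    for t in range(window, len(lead_times) + 1):
        sample = lead_times[t - window:t]
        mean, std = statistics.mean(sample), statistics.pstdev(sample)
        if std == 0:
            alphas.append(1.0)   # no dispersion observed: fully responsive forecast
        else:
            alphas.append(smoothing_constant((mean / std) ** 2))
    return alphas

# Illustrative lead-time observations (days)
print(adaptive_alpha([5, 7, 4, 6, 9, 12, 5, 6]))
```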
4 Conclusion

This paper highlights the importance of integrating product and process design in order to reduce inventory levels and costs. Ignoring this integration may lead to unnecessary investment in safety stock to buffer against supply and demand uncertainties. Product redesign itself proves to be important in mitigating supply chain uncertainty and opens opportunities for material clustering. This material clustering process makes demand forecasting more effective and makes the inventory control process more flexible for the supplier, without order magnification. The simulation results presented in this research support this, since the safety stock requirements do not increase excessively at higher demand levels. This also signifies the effectiveness of the forecasting method used by the manufacturing organization. Finally, we concluded this paper by validating the proposed analytical models with simulation software, which shows that there are no significant differences between these two approaches in terms of inventory allocation and safety stock.

Table 1 Safety stock comparison before and after material interdependency reduction at different demand rates by using analytical and simulation models
Before DSM (σVA = 0.2)
                           Safety stock
                           Simulation   Analytical model   Difference
λn = 200     Stage n            125            135             −10
             Stage n − 1        192            197              −5
λn = 250     Stage n            161            169              −8
             Stage n − 1        204            200               4
λn = 275     Stage n            182            186              −4
             Stage n − 1        206            215              −9
λn = 310     Stage n            204            210              −6
             Stage n − 1        205            218             −13

After DSM (σVA = 0.2)
                           Safety stock
                           Simulation   Analytical model   Difference
λn = 200     Stage n             61             68              −7
             Stage n − 1         64             72              −8
λn = 250     Stage n             77             83              −6
             Stage n − 1         77             77               0
λn = 275     Stage n             89             93              −4
             Stage n − 1         94            102              −8
λn = 310     Stage n            106            105               1
             Stage n − 1        102            109              −7
References

Box G, Jenkins G, Reinsel G (1994) Time series analysis: forecasting and control, 3rd edition. Holden-Day, San Francisco
Bucklin LP (1965) Postponement, speculation and the structure of distribution channels. Journal of Marketing Research (JMR) 2(1):26–31
Collier D (1980) Justifying component part standardization. In: Proceedings of the 12th National AIDS Meeting, American Institute for Decision Sciences, Las Vegas, NV, p 405
Collier D (1981) The measurement and operating benefits of component part commonality. Decision Sciences 12(1):85–96
Evans D (1970) A note on modular design - a special case in nonlinear programming. Operations Research 18(3):562–564
Evans DH (1963) Modular design - A special case in nonlinear programming. Operations Research 11(4):637–647
Graves SC (1999) A single-item inventory model for a nonstationary demand process. Manufacturing & Service Operations Management 1(1):50
Graves SC, Willems SP (2008) Strategic inventory placement in supply chains: Nonstationary demand. Manufacturing & Service Operations Management 10(2):278–287
Graves SC, Willems SP, Zipkin P (2000) Optimizing strategic safety stock placement in supply chains. Manufacturing & Service Operations Management 2(1):68
Jiao J, Tseng MM (2000) Understanding product family for mass customization by developing commonality indices. Journal of Engineering Design 11(3):225–243
Kristianto Y, Helo P (2009) Strategic thinking in supply and innovation in dual sourcing procurement. International Journal of Applied Management Science 1(4):401–419
Kristianto Y, Helo P (2010) Built-to-order supply chain: response analysis with control model. International Journal of Procurement Management 3(2):181–198
Lee HL (1996) Effective inventory and service management through product and process redesign. Operations Research 44(1):151
Lee HL, Padmanabhan V (1997) Information distortion in a supply chain: The bullwhip effect. Management Science 43(4):546
Levén E, Segerstedt A (2004) Inventory control with a modified Croston procedure and Erlang distribution. International Journal of Production Economics 90(3):361–367
Martin M, Ishii K (1996) Design for variety: a methodology for understanding the costs of product proliferation. In: Wood K (ed) Design Theory and Methodology DTM'96, ASME, Irvine, CA, 96-DETC/DTM-1610
Mikkola JH (2007) Management of product architecture modularity for mass customization: Modeling and theoretical considerations. IEEE Transactions on Engineering Management 54(1):57–69
Neale JJ, Willems SP (2009) Managing inventory in supply chains with nonstationary demand. Interfaces 39(5):388–399
Pagh JD, Cooper MC (1998) Supply chain postponement and speculation strategies: How to choose the right strategy. Journal of Business Logistics 19(2):13–33
Dynamic Nature and Long-Term Effect of Events on Supply Chain Confidence

Harri Lorentz and Olli-Pekka Hilmola
Abstract Supply chains have a key role in the competitiveness of companies, and therefore responsible managers may have great expectations of the chosen network structure performing consistently. However, decision-maker confidence in supply chain performance may also decrease, reflecting past experience or forecasted problems. In this research, the interest is in the manager who makes decisions, for example, on routes, manufacturing locations and the structure of the supply chain in general. Our aim is to conceptualise supply chain confidence and to propose a dynamic hypothesis on the behaviour of supply chain confidence in relation to supply chain performance and events. Based on a multiple case study we present a dynamic hypothesis, which illustrates a system of element relationships that explains confidence erosion during negative event chains, as well as eventual policy and configuration reactions. From the economic and regional development point of view, our research underlines the importance of understanding tipping points in managerial decision making and the feasibility of supply chain operations, in order to sustain FDI and participation in global production networks.

Key words: supply chain confidence, cognitive psychology, system dynamics
Harri Lorentz (B)
Turku School of Economics, Finland, e-mail: [email protected]

Olli-Pekka Hilmola
Lappeenranta University of Technology, Finland, e-mail: [email protected]

1 Introduction: the Key Construct

In defining supply chain confidence (SCC), the key theoretical construct of this research, we begin from the field of cognitive psychology, where Griffin and Tversky (1992, p.412) define confidence in general as the degree of belief in a given hypothesis, varying between full confidence and zero confidence (i.e. confidence
range is [0,1]). Among others, confidence has a central role in the behavioural finance literature, where studies draw on this construct in order to explain investor sentiment and consequent under- and overreactions in the stock market due to, for example, earnings announcements and series of good or bad news (e.g. Barberis et al, 1998; Daniel et al, 1998). In terms of specific insight into supply chain confidence, we are left to draw on the work of Christopher and Lee (2004, p.393), who consider this key construct to reflect the perception of performance reliability at each step in the chain. In operational terms, the decision maker may lack confidence in the order cycle time, current order status, demand forecasts, supplier delivery capability, manufacturing capacity, product quality, transportation reliability, and services delivered. Christopher and Lee (2004) also elaborate on the drivers and outcomes of lack of confidence in supply chains, and introduce the so-called risk spiral, where lack of visibility and control in the supply chain contributes to the lack of SCC. Lack of SCC in turn induces the build-up of buffers in the form of inventory, resulting in increased exposure to financial risk. Higher inventory levels cause longer material flow times (Little, 1961; Chopra and Meindl, 2001), resulting in long pipelines and a further increase in the lack of visibility and control. The other outcomes, comparable to inventory build-up, may take the form of over-ordering, lack of information on availability, forecast inaccuracy, over-production, excess capacity, product launch delays, markdowns, and excess in lead time quotes.

In the following, we elaborate on the drivers, effects and the dynamic nature of SCC in the form of a literature review, with focus on the latter. A brief section on methodology follows. Next, three cases are presented, after which a cross-case analysis is offered. Conclusions and discussion bring the paper to a close.
2 The Dynamic Nature of Supply Chain Confidence

A central theoretical construct in the field of supply chain management (SCM), namely supply chain uncertainty, also relates to the previously mentioned risk spiral. Davis (1993) identified the sources of uncertainty in the supply chain as supplier performance, the manufacturing process, and customer demand, while Geary et al (2002) added the control process as one of the sources, i.e. the transformation of customer orders into production schedules and purchase orders. In other words, these scholars perceive uncertainty as a term closely related to variation and inconsistency in performance. On the other hand, Van der Vorst and Beulens (2002, p.413) emphasize the unknowns (e.g. information about the chain, its environment and one's own influence) of the decision maker with respect to supply chain performance. In addition to uncertainty, a related concept, namely supply chain risk, is relevant in understanding SCC dynamics. The definition of risk is multifaceted and elusive in the literature (see e.g. Manuj and Mentzer, 2008); however, we concur with the definition provided by, for example, Bernstein (1998), whose broad treatise on the topic is based on risk perceived as the probability of a certain event, in contrast to one
definition of uncertainty, i.e. the perceived inability to make accurate predictions (see also Milliken, 1987), or probability statements. A high perceived probability, i.e. risk, of supply chain disruptions or events implies a low degree of belief in the hypothesis that the as-is supply chain configuration (Srai and Gregory, 2008) will produce the intended levels of performance and customer service outcomes, or in other words low SCC. In an emerging market context, Lorentz (2009) has categorised the impact of various contextual constraints or bottlenecks for SCM into performance effects, or issues, and configuration effects, the latter meaning strategic long- to medium-term adjustments in terms of supply chain configuration and policies, while the former encompasses the impact on day-to-day supply chain performance, embodied in, for example, KPIs. In considering the full variety of outcomes due to the lack of SCC, one may identify moderate deficiencies leading to short-term actions and medium-term policies that imply buffers, while a severe and sustained lack of confidence (e.g. SCC below 0.5 or close to 0) may lead to divestment and other strategic adjustments in, for example, the global manufacturing footprint of a company, such as production facility shut-downs (Koskinen and Hilmola, 2009). Strategically adjusting the supply network configuration or structure provides a way to break free from the risk spiral, in the form of improved or ceased operations.

According to Griffin and Tversky (1992, p.412), the assessment of confidence requires the integration of different kinds of evidence. They go on to distinguish between the strength and weight of evidence, i.e. the extremeness and salience, as well as the statistical informativeness and predictive validity of it, respectively (Barberis et al, 1998). This categorisation fits well with what has been presented previously, as the performance gap, varying from small deviations to major disruptions, embodies the strength of evidence, while risk or uncertainty about the probability of further disruptions and low performance is related to the sample size, or the length of experience, from the given supply chain context. Importantly, Griffin and Tversky (1992) establish the theory of strength of evidence dominating over weight of evidence. Decision makers tend to disregard sample size in situations where competing hypotheses are evaluated (e.g. functional vs. dysfunctional supply chain); that is, managers are often overconfident despite the facts when major events impact, for example, the supply chain. Therefore, in terms of SCC dynamics, recovery from low levels after a major disruption may be very difficult to achieve. Similar tendencies in managerial behaviour in general have been suggested by March and Shapira (1987).

The difficulty of recovery from low levels of SCC may be strengthened by what is called the asymmetrical effect of positive and negative events in the psychological literature (Taylor, 1991, p.70): negative information is generally weighted more heavily than positive information, although systematic exceptions have been identified. Damping down or muting of negative experiences, taking place e.g. in the autobiographical memory (Walker et al, 2003), may not be so prevalent in business settings, as costs or loss of money tend to weigh more in comparison to potential gains in risky ventures (Taylor, 1991). Further, Duhaime and Schwenk (1985) identify cognitive simplifying processes in acquisition and divestment decision making,
and conclude that through single outcome calculation, once a certain threshold has been reached, for example in terms of low SCC, managers focus on a single goal and alternative (such as divestment), instead of considering all alternatives equally. However, contractionary configuration effects, such as route or facility shut-downs, are typically delayed, first due to possibly lengthy corporate decision processes, as decisions may simply be delayed or delegated to others in situations of great risk or uncertainty (March and Shapira, 1987), and second due to escalating commitment, i.e. the managerial tendency to increase investment in the face of poor and declining performance (Duhaime and Schwenk, 1985, p.291). In other words, it takes time for negative news to reach and penetrate the attention of decision makers (Morecroft, 2007), it takes time to make a decision under uncertainty, and in addition it may be difficult to admit poor past decisions and give up pet projects. Delays naturally give time for the SCC to recover if performance targets are met; however, this may be in vain due to the previously mentioned single outcome calculation.

From the above discussion, propositions on SCC dynamics may be distilled (Yin, 2003). First, relevant drivers of SCC are evidence strength (current performance deviation from target) and evidence weight (history of performance deviation from target); however, strength has the greater role (Proposition 1 - P1). Second, negative performance and events have a greater influence on SCC in comparison to positive information (Proposition 2 - P2). Third, SCC is slow and unresponsive in recovering from low levels (i.e. sensitivity to the SCC level; Proposition 3 - P3).
3 Methodology

This research draws on the case study strategy (Yin, 2003): a contemporary phenomenon is investigated within its real-life context. The research is explanatory in nature, as our research propositions are illustrated with a multiple case study. We limit our cases around distinct events in companies and industries, and attempt to understand what has happened and why, as well as how SCC dynamics behave. Case events were selected on the basis of their relevance for illustrating effects on supply chains where seemingly major strategic implications were experienced. While the case implications are mostly concerned with shut-downs or reductions in demand or volumes, they are claimed to be causally connected to the events, and not to the general economic downturn, as is illustrated in Fig. 1. It may be concluded that the general economic context has been relatively optimistic or stable during the cases, at least during the major disruptions and their immediate implications. Therefore the configuration effects, at least to a large extent, cannot be explicitly attributed to the global economic downturn. Our research utilises mostly secondary data gathered from a variety of sources. While sometimes considered a handicap, the reliance on secondary data allows open discussion of actual companies and phenomena.
Fig. 1 The general economic context of cases, described with the Purchasing Managers Index, PMI (ISM, 2009, average calculated for 1990-2009)
4 Multiple Case Study

4.1 Case Wood Sourcing

Raw material sourcing from Russia for manufacturing purposes has been common in North European countries for decades. Examples of such sourced items are crude oil, coal and wood logs. As pulp and paper production has been one of the key export industries in Northern Europe, wood imports from Russia, and the East in general, have been popular. In the case of Finland, Russian wood import volumes had by 2005 increased to nearly four to five times the level of the early 1980s (e.g. Rämö et al, 2002; Peltola, 2008), having peaked at 17 million m3. In Sweden, Russian wood import volumes have been on a long-term increase as well (e.g. Rämö et al, 2002; Loman, 2009), but in general the volume has been much more conservative: during 2008 total imports were 1.3 million m3.

In 2005, major changes occurred in Russian export customs legislation, and the government proposed a stepwise increase of duties on wood exports starting in 2007, first to 15 euros per m3 and, by 2009, to 50 euros per m3 (the export price of round wood in 2007 was on average 60 euros per m3; Torikka, 2007). Political debate over this issue between the governments of Finland and Russia started in 2006 (USDA, 2006), and the erosion of the long-term sustainability of Russian wood's cost competitiveness was already then evident among industrial decision makers. Even at the time of writing this research, political debate continues on the wood trade issue, and the Russian government has halted its stepwise customs tariff programme. However, the volume of wood sourcing has continued to decline. Currently the level of wood imports is approximately 35% of the peak year volumes (Fig. 2).
Fig. 2 Finnish roundwood import from Russia during 2002-2009 (* denotes a forecast extrapolated from the first ten months of data). Source: Finnish Statistical Yearbook of Forestry (2008)
The announcement of raised tariffs on roundwood has resulted in a number of pulp and paper manufacturing site shut-downs in Finland, and sawn wood as well as plywood production have also experienced an extremely difficult time period. The harm is not occurring only in Finland, but in Russia too: harvesting, transportation and sawn wood factories have lost much of their activity and jobs. Crucially, the actual costs increased only on a small scale during the tariff negotiation process.
4.2 Case Trans-Siberian Railway and Europe-Bound Container Volume Collapse

After the emergence of the Asian economies, particularly those with strong competence in manufacturing, different parties in Europe have shown growing interest in alternative routes and modes of transport between Asia and the European Union. In this regard, such countries as Mongolia, Kazakhstan, Russia and China hold a central role, e.g. in developing a short lead time railway connection. For North European economies in particular, this route plays an important role, as the distance saved in transportation is approximately 50% compared to the lengthy sea route (Hilletofth et al, 2007). Over the years, the railway route alternative has been popular, e.g. among Japanese and South Korean CE manufacturers (Ivanova and Hilmola, 2009). Also, the Swedish furniture retail giant Ikea attempted for years to arrange a railway transportation connection using this route, but ended its activity without any significant success (nonetheless, the Russian railway company RZD still keeps this project in its future development plans; see RZD, 2010).
Fig. 3 Container volumes transported via the Trans-Siberian Railway (TSR) between Finland and Asia (TEU); year 2008: 643 TEU. Source: Finnish Railways
Fig. 3 shows the volume of containers transported between Finland and Asia using the Trans-Siberian Railway (TSR) route alternative. Since the late 90s this alternative had shown continuously increasing volumes, aided by short lead times and a reasonable level of cost; the latter was helped by the devaluation of the Russian rouble in the late 90s (due to the Asian currency crisis). However, in 2006 transportation volumes suddenly collapsed, and by 2008 the volume decline from the peak year 2004 was more than 99.5% (Fig. 3). The trigger for this large-scale volume change was the sudden 20-40% tariff price increase for international railway transports in Russia (e.g. mentioned in the annual report of the Finnish governmentally owned railway operator; VR, 2007). Afterwards these price increases were lowered, however, with little effect on cargo volumes. The knee-jerk reaction to the bad news was the addition of container transportation capacity on the sea route, which further decreased the cost of sea transport, making railway transport relatively more expensive. It may be speculated that some volume would have returned to the TSR, but with continuous price decreases on the sea transportation side (due to increasing capacity additions) the loss of volume was further reinforced. Currently, there is no end in sight in this regard, since the global economic turmoil has decreased transportation activity overall on the sea transportation side, and sea vessels ordered during the 2007-2008 peak are still in the delivery pipeline (e.g. see Maersk, 2009). The only positive item strengthening the competitiveness of this route between the EU and Asia is the devaluation of the Russian rouble during the recent year. However, it is still questionable whether cargo volumes of the earlier peak level will be experienced in the future.
4.3 Case Elcoteq in St. Petersburg

Elcoteq, an electronic manufacturing services (EMS) company of Finnish origin but based in Luxembourg, has been operating in St. Petersburg, Russia, since 1997. As part of Elcoteq's global manufacturing network, established during the expansionary
years of 1998-2000 and spanning Europe, Asia and Latin America (Elcoteq, 2009), the Russian factory employed approximately 170-290 low-wage but highly skilled people until 2005 (Rantanen, 11 April 2003; Ivanova, 2005). At the time, the constraint on growth in Russia was considered to be the insufficient legal system and rigid customs procedures. However, in the summer of 2004, Elcoteq announced the plan to build a new factory in St. Petersburg, with a workforce of 1500 at full capacity. The factory was planned to become operational in late 2005, bringing the production capacity for mobile devices and telecommunication network products to a more appropriate level in order to serve Elcoteq's large telecom customers in Europe, but also the rapidly growing Russian market (Viitaniemi, 2004).

On 7 October 2005, the completed Elcoteq facility became the only one of its kind in Russia, with an overall planned financial investment of 100 MEUR. At the time it was the second largest foreign manufacturing facility investment in Russia, right after the Toyota plant. The company chairman of the board was quoted as saying at the opening ceremony: "We want to be a good partner with local Russian companies and help them in their internalization efforts. We want to help the local companies to globalize and global companies to localize" (Ivanova, 2005). Analysts considered that the building of the new facility was motivated by the low labour costs in Russia and by the close proximity of Finland, i.e. the Nokia plant in Salo. The customs duties paid for importing components to Russia were also considered to be lower in comparison to finished products (Ivanova, 2005). To summarise, the new Elcoteq facility in St. Petersburg was obviously meant to play an important role in the company's global supply chain network, sourcing globally and serving both international and domestic customers.

The international inbound and outbound material flows of the St. Petersburg factory were designed to transit entirely through Elcoteq's Material Service Center (MSC) in Finland, this facility serving as a hub for handling component deliveries from suppliers and international customer orders. Elcoteq Finland owned the materials during the whole import-export process, and components and finished products were essentially re-exported to the EU from Russia. The target lead time between the MSC hub and the St. Petersburg factory was set at 24 hours, both for regular inbound and outbound runs by dedicated trucks through the Finnish-Russian border. The inbound trip included time spent in the customs warehouse, with clearance taking 6-8 hours (Ollila, 2007).

In December 2006, Russia implemented major changes in import and export customs clearance procedures (i.e. TNVED code and clearance form updates). From Elcoteq's point of view, information about the changes was insufficient and late, as new licences had to be applied for. The outcome of the procedure changes was that the handling of Elcoteq freight took the Russian customs authorities days to complete, mistakes in documents led to extremely long delays, and, for example, during the first three weeks of 2007 only three of Elcoteq's export trucks made the inbound run to Russia, while no outbound runs to Finland were achieved. As a consequence, the 24-hour lead time track record could not be maintained, material deliveries from Finland had to be stopped, and serious customer service and trust issues materialised (Ollila, 2007).
Although hopes for more reliable lead times existed in the post-disruption period, with plans to use the green corridor concept at the border (Ollila, 2007), in February 2008 Elcoteq announced the decision to divest and sell the St. Petersburg factory to its rival, Flextronics. According to the company CFO, sustaining Elcoteq's operations would have required building up 10-15 days of buffer inventories (Rantanen 2008), too much for the highly competitive and low-margin EMS industry, where success is based on low total costs and rapid time-to-market (Fargo, 2002; Keränen, 2004). However, in July 2008, Flextronics cancelled the agreement to buy the facility, as negotiations with the Russian customs authorities to lower import duties on LCD TV components, tied to the completion of the deal, failed. Consequently Elcoteq stated: "Elcoteq is continuing negotiations with the Russian authorities concerning certain customs practices, and will reassess its long-term strategy in Russia based on these discussions. Demand for home communications-related electronics manufacturing services on the Russian market is promising, provided that the customs practices change." (Rozhkov, 2008). At the time of writing this article, the status of the St. Petersburg factory on the Elcoteq web-site was summarised in the following words: "No production, Floor space 14,700 m2" (Elcoteq, 2009).
4.4 Cross-case Analysis and the Dynamic Hypothesis

In terms of P1, the cases provide evidence that a single major event, or piece of news, has a significant effect on SCC. In one case (Elcoteq) the operational history is relatively short, but in the two other cases (wood sourcing and TSR) operations have a longer history, therefore providing stronger evidence in support of the proposition (Table 1). For P2, post-event positive news is mainly identified in the TSR case, and in this case the effect is marginal at best. In the wood sourcing case, the implementation of the higher tariff regime is only delayed, not cancelled, reducing the value and impact of this relatively positive news on decision makers, and further on SCC. In the Elcoteq case, the exact nature of post-event customs and supply chain performance is unknown, but it is safe to say that there is no strong positive news (Table 1).

In terms of P3, the analysis is rather more difficult, as the situations described in the cases do not normalise fully, or such assumptions cannot be made with the available information. Additionally, in the TSR case specifically, the mentioned low cost of the alternative sea route, due to high capacity availability in the economic downturn, makes conclusions difficult. What we can say, however, is that the absence of countering positive news or credible commitments from stakeholders leaves room for uncertainty, leaving SCC to remain at very low levels without much of a recovery.

In summary, we may conclude that the empirical material provides evidence in support of P1 and P2. For P3, however, the available data in the cases makes the final verdict somewhat ambiguous. Despite this setback, based on the established
Table 1 Cross-case analysis (++: strong evidence, +: moderate/weak evidence, na: evidence not available)

Wood sourcing. Proposition 1: ++, threat of a single event as a driver of change in industry sourcing practices. Proposition 2: +, "positive news" about the delay in implementation has no effect. Proposition 3: +, in the face of uncertainty, sourcing volumes do not recover.

TSR. Proposition 1: +, a single event has a great effect. Proposition 2: ++, positive news on price decreases has little effect on volumes. Proposition 3: na, difficult to say due to the continued low cost of the alternative route.

Elcoteq. Proposition 1: +, a major disruption has a great effect, no history of previous disruptions. Proposition 2: na, the exact post-disruption customs performance is unknown. Proposition 3: na, in the absence of credible commitments, the recovery of SCC does not take place.
theory of cognitive psychology and our empirical cases, we are able to put forth our dynamic hypothesis¹ on supply chain confidence, as depicted in Fig. 4. Our dynamic hypothesis identifies the supply chain event level, in our cases on the negative side, as the driver of perceptions of disruption event strength (i.e. disruption severity) and event weight (i.e. disruption risk). The strength of these perceptions depends on the set performance targets and the length and nature of the performance history. Through delays, which originate from the decision maker's cognitive characteristics, management style and communication systems, these factors increase the supply chain confidence stock (or the lack of it). The weights of the perceptions in the inflow equation depend on the rationality of the decision maker, e.g. how much the severity of the event dominates risk calculations. The two reinforcing feedback loops are closed by means of the effect of the (lack of) confidence stock level on event strength and weight perceptions, i.e. the decision maker is sensitive to the current level of SCC when forming perceptions. Under negative event chains, the decision maker may be very sensitive to continued bad news, and the effect is reinforced, while during positive event chains reinforced SCC growth may be much more conservative.

The feedback loop and the collapse of SCC were demonstrated in our case study as negative events, news and disruptions resulting in strategic and often sudden downscaling of demand, volume and activity. Our hypothesis suggests that as a certain threshold is reached in the supply chain (lack of) confidence accumulation, decision making processes start, lasting perhaps several planning cycles lengthened by escalating commitment and uncertainty, and aiming towards strategic adjustments or configuration effects. The strength of the adjustment, ranging for example from a safety inventory policy change to a facility shut-down, depends on decision maker characteristics, but crucially also on the nature of the industry, e.g. its sensitivity to time and logistics reliability, as was evident in the Elcoteq case.
¹ According to Coyle and Exelby (2000, p. 39), a dynamic hypothesis is "a statement of system structure that appears to have potential to generate the problem behaviour".
Fig. 4 Causal diagram of the dynamic hypothesis on supply chain confidence
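A stock-and-flow reading of Fig. 4 can be expressed in a few lines of simulation code (Python). All parameter values, functional forms and the event series below are our own illustrative assumptions; the sketch only reproduces the qualitative behaviour suggested by the propositions (strength dominating weight, asymmetric weighting of negative news, and slow recovery from low SCC levels), not a calibrated model of the cases.

```python
def simulate_scc(events, scc0=0.9, w_strength=0.6, w_weight=0.2,
                 recovery=0.02, perception_delay=2):
    """Toy stock-and-flow sketch of the dynamic hypothesis on SCC.

    events: performance deviations per period (negative values = bad news).
    """
    scc, history, memory = scc0, [], []
    for t, e in enumerate(events):
        memory.append(e)
        # Delay before an event penetrates the decision maker's attention
        perceived = memory[t - perception_delay] if t >= perception_delay else 0.0
        strength = max(0.0, -perceived)                       # only negative news erodes SCC (P2)
        weight = sum(1 for x in memory if x < 0) / (t + 1)    # share of bad periods so far
        # Erosion is reinforced when confidence is already low (feedback loops, P3)
        erosion = (w_strength * strength + w_weight * weight) * (1.5 - scc)
        scc = max(0.0, min(1.0, scc - erosion + recovery))
        history.append(round(scc, 3))
    return history

# Illustrative event chain: a stable period followed by a chain of negative events
print(simulate_scc([0, 0, 0, -0.3, -0.4, -0.5, 0, 0, 0, 0, 0, 0]))
```

In such runs, confidence collapses quickly during the negative event chain and recovers only slowly afterwards, which is the behaviour the hypothesis is meant to capture.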
5 Summary and Concluding Remarks

Our dynamic hypothesis, based on the literature review and the multiple case study, illustrates the dynamic nature of supply chain confidence. For example, the magnitude of an event seems to override rational calculations of event risk. In this vein, an analogy may be drawn to air travel, where the risk of being involved in a major crash is marginal, but simultaneously some travellers perceive the implications to be so horrendous that they rather switch to other transport modes, which in reality may be more risky and equally severe in accident outcome. In supply chain routing and facility location decisions, managers have a lot at stake, starting from personal reputations, let alone company fortunes, and therefore may place great emphasis on avoiding major break-downs in the chain. The suggested propositions and the dynamic hypothesis provide a platform on which further research on the psychology of SCM decision making may be built, especially in the risk management context.

On the practical side, our model illustrates the importance of understanding tipping points in managerial decision making and the feasibility of operations. While, for example for customs authorities, a delay of a couple of days or even a few hours cannot possibly seem crucial, for companies in highly competitive and fast-paced industries delays may result in a severely dysfunctional business model. It is therefore important for policymakers, who aim to attract FDI and develop globally connected high-technology manufacturing clusters, to understand the possibly severe implications of minor changes in institutional frameworks.
References

Barberis N, Shleifer A, Vishny R (1998) A model of investor sentiment. Journal of Financial Economics 49(3):307–343
Bernstein P (1998) Against the gods: The remarkable story of risk. John Wiley & Sons, New York
Chopra S, Meindl P (2001) Supply chain management: Strategy, planning and operation. Upper Saddle River, New Jersey
Christopher M, Lee H (2004) Mitigating supply chain risk through improved confidence. International Journal of Physical Distribution and Logistics Management 34(5):388–396
Daniel K, Hirshleifer D, Subrahmanyam A (1998) Investor psychology and security market under and overreactions. Journal of Finance 53(6):1839–1885
Davis T (1993) Effective supply chain management. Sloan Management Review 34(4):35–46
Duhaime I, Schwenk C (1985) Conjectures on cognitive simplification in acquisition and divestment decision making. Academy of Management Review 10(2):287–295
Elcoteq (2009) Company website: http://www.elcoteq.com/en
Fargo M (2002) EMS guide-Managing the EMS value chain to succeed in today's marketplace, EMS providers should focus on their core competencies, partner relationships and continuous improvement. Circuits Assembly 13(11):36–39
Geary S, Childerhouse P, Towill D (2002) Uncertainty and the seamless supply chain. Supply Chain Management Review 6(4):52–61
Griffin D, Tversky A (1992) The weighing of evidence and the determinants of confidence. Cognitive Psychology 24(3):411–435
Hilletofth P, Lorentz H, Savolainen V, Hilmola O (2007) Using Eurasian landbridge in logistics operations: building knowledge through case studies. World Review of Intermodal Transportation Research 1(2):183–201
ISM (2009) ISM manufacturing report on business. URL http://www.ism.ws/ISMReport/content.cfm?ItemNumber=13339&navItemNumber=12958, Retrieved: Jan. 2010
Ivanova O, Hilmola O (2009) Asian companies and distribution strategies for Russian markets: Case study. International Journal of Management and Enterprise Development 6(3):376–396
Ivanova Y (2005) Elcoteq opens $30M plant in city. The St Petersburg Times 1112:78
Keränen V (2004) EMS Industry Overview: Elcoteq presentation
Koskinen P, Hilmola O (2009) Industrial shutdown in Finnish paper industry-Case study from logistics provider perspective. In: Proceedings of the 14th Cambridge International Manufacturing Symposium
Little J (1961) A proof of the queuing formula L = λW. Operations Research 9(3):383–387
Loman J (2009) Skogsstatistisk Årsbok (in Swedish, free translation "Forestry Statistics"). Tech. rep., Jönköping: Swedish Forest Agency
Lorentz H (2009) Contextual supply chain constraints in emerging markets: Exploring the implications for foreign firms. PhD thesis, Turku School of Economics
Maersk (2009) Interim Report. URL http://shareholders.maersk.com/en/Announcements/2009/Documents/Interim%20report%202009.pdf, Retrieved: Nov. 2009
Manuj I, Mentzer J (2008) Global supply chain risk management strategies. International Journal of Physical Distribution & Logistics Management 38(3):192–223
March J, Shapira Z (1987) Managerial perspectives on risk and risk taking. Management Science 33(11):1404–1418
Milliken F (1987) Three types of perceived uncertainty about the environment: State, effect, and response uncertainty. Academy of Management Review 12(1):133–143
Morecroft J (2007) Strategic modelling and business dynamics-A feedback systems approach. Wiley & Sons, Chichester
Ollila H (2007) EU-Russia import procedures-Case Elcoteq St. Petersburg: Elcoteq presentation
Peltola A (2008) Finnish statistical yearbook of forestry. Tech. rep., Finnish Forest Research Institute, Vantaa
Rämö A, Toivonen R, Toppinen A, Mäki P (2002) The forest sector development in Austria, Finland and Sweden during the 1970s to the 1990s. Tech. Rep. 182, Pellervo Economic Research Institute Reports
Rantanen E (11 April 2003) Pietari herää. Talouselämä
Rozhkov Y (2008) Flextronics, Elcoteq contract called off. The St Petersburg Times 1388:52
RZD (2010) New Projects. URL http://eng.rzd.ru/isvp/public/rzdeng?STRUCTURE ID=218, Retrieved: Jan. 2010
Srai J, Gregory M (2008) A supply network configuration perspective on international supply chain development. International Journal of Operations and Production Management 28(5):386–411
Taylor S (1991) Asymmetrical effects of positive and negative events: The mobilization-minimization hypothesis. Psychological Bulletin 110(1):67–85
Torikka M (2007) Putin, puutullit vievät työpaikkojamme! Tekniikka & Talous
USDA (2006) Russia increases export tax on logs. URL http://www.fas.usda.gov/ffpd/Newsroom/Russia Increases Export Tax on Logs.pdf, Retrieved: Jan. 2010
Viitaniemi L (2004) Elcoteq rakentaa tehtaan Pietariin. Talouselämä, 28 June
Van der Vorst J, Beulens A (2002) Identifying sources of uncertainty to generate supply chain redesign strategies. International Journal of Physical Distribution and Logistics Management 32(6):409–430
VR (2007) Vuosikertomus 2007 (in Finnish, free translation "Annual Report 2007"). URL http://www.vr-konserni.fi/attachments/5gppd2hrk/5wTKXYp2i/VR VSK 2007 FI.pdf, Retrieved: Jan. 2010
Walker W, Skowronski J, Thompson C (2003) Life is pleasant-and memory helps to keep it that way! Review of General Psychology 7(2):203–210
Yin R (2003) Case study research, 3rd edn. Thousand Oaks, Sage Publications
Evaluation of Supply Process Improvements Illustrated by Means of a JIS Supply Process from the Automotive Industry

Gerald Reiner and Martin Poiger
Abstract In this study, we show that some of the core aspects of supply chain management are not taken into account sufficiently by traditional evaluation systems. In particular, the functional separation of cost centers that belong to the supply process causes problems for evaluating supply chain improvements. We build up an evaluation model supported by process simulation to overcome these problems. This model is applied to an example from the automotive industry. We assess the effect of distance reduction and transport scheduling (frequency) within a just-in-sequence (JIS) supply process and we show the effects of moving the customer order decoupling point (CODP).
Gerald Reiner (B)
Enterprise Institute, Faculty of Economics, University of Neuchâtel, Avenue A.-L. Breguet 1, 2000 Neuchâtel, Switzerland, e-mail: [email protected]

Martin Poiger
University of Applied Sciences BFI Vienna, Wohlmutstraße 22, 1020 Wien, Austria, e-mail: [email protected]

1 Introduction

Since the days of Henry Ford the automotive industry has changed radically. In particular, decreasing product life cycles and increasing product variety (Fisher and Ittner, 1999) have forced car manufacturers to reconsider their operations. Corswant and Fredriksson (2002) give a comprehensive and critical overview of how the automotive industry handles the changing market requirements. In recent years, the process perspective and customer orientation have become important topics, especially in the context of supply chain management. Unfortunately, even inside one organization it is often difficult to adopt a process perspective because of the functional orientation and competing objectives of cost and profit centers. In our illustration example we observed that the functional separation of procurement and logistics
more or less prevents an integrated evaluation of different process alternatives. Therefore we want to define the prerequisites for building an evaluation model supported by process simulation. First, we use our model to assess the effect of distance reduction and transport scheduling (frequency) within a just-in-sequence (JIS) supply process from the automotive industry. Besides the obvious effects of distance reduction and transport scheduling on transportation costs, we especially consider work-in-process (WIP) and safety stock. Second, we want to show the effects of moving the customer order decoupling point (CODP), and the conditions under which such movements are possible. This paper is structured as follows. Section 2 gives a literature overview of how supply chain improvements are addressed in the field of supply chain and operations management as well as in management accounting. We discuss different ways to implement and evaluate variety strategies. Furthermore, we characterize the evaluation problem of process improvements that is caused by the implemented performance measurement and management accounting systems. Section 3 presents the basic idea of our evaluation model for supply chain process improvement, especially the determination of the ideal position of the CODP. Section 4 illustrates our model by means of the supply chain of a selected product from the automotive industry. Finally, the last section concludes our work.
2 Motivation and Literature Review

Supply chain process improvements have mainly been studied in the fields of supply chain and operations management. In the management accounting literature some aspects of supply chain improvements have also been studied. Labro (2004) gives an overview of some of these aspects, e.g. part commonalities. Activity-based costing systems (cf. Cooper and Kaplan, 1991) show, in contrast to traditional standard cost systems, that part commonality has been identified as a way to obtain end-product variety at low cost, e.g. the cost driver "number of suppliers" can be reduced. Neither the activity-based costing research nor the standard accounting literature takes into account the effects of the performance measures on revenues. On the other hand, many researchers in marketing believe that revenue gains from increased product variety may exceed the additional costs of extending the product line (cf. Lancaster, 1990). In contrast, in the field of operations and supply chain management it has been shown that inventory costs can be reduced by exploiting part commonalities such that several finished goods can be produced from standardized parts, so-called product platforms in the automotive industry (Desai et al, 2001). Part commonality is only one of various ways to realize a variety strategy; furthermore, supply chain management best practices are offered, e.g. postponement, reviewed by Swaminathan and Tayur (2003). Ittner et al (1997) state that there is relatively little evidence on how the associations among the management accounting
cost hierarchy levels are affected by different process designs. In particular, the process changes described below cannot be evaluated with a fixed cost center structure with separate budgets and objectives. Basically, the tradeoff between inventory cost reduction and increased cost for resources depends on the positioning of the customer order decoupling point (CODP) in the supply chain process (also referred to as the push/pull boundary or order penetration point) (Sharman, 1984; Hoekstra et al, 1992). In the case of make-to-stock (MTS) production, the decoupling point is at the finished goods inventory, whereas in make-to-order (MTO) production it is located at the raw material inventory. If only a part of the production is carried out after the arrival of a customer order, we speak of assemble-to-order (ATO) production. In ATO production the production steps upstream of the decoupling point are performed in MTS mode (forecast driven), while the downstream steps are made to order (demand driven) (Vorst et al, 1998). By combining MTS and MTO within one supply chain, the advantages of both the efficient and the responsive (lean and agile) type of supply chain (Fisher, 1997) are used (Mason-Jones and Towill, 1999; Naylor et al, 1999). Hopp and Spearman (2004) present clear definitions of push and pull. They emphasize that pull is essentially a mechanism for limiting work in process (WIP). Olhager (2003) identified two major factors that affect the strategic positioning of the order decoupling point: the production to delivery lead time ratio and the relative demand volatility (standard deviation of demand relative to the average demand). Clearly, if the production lead time is larger than the delivery time of a customer order, then MTO production is not possible because of poor customer service. On the other hand, MTS production, in the case of many finished goods, is not efficient because of high inventory cost. The high inventories are necessary to achieve the promised level of customer service, which for MTS production is mainly expressed by the fill rate.

For the purpose of our research work it is necessary to be precise in the usage of the terms cycle time and lead time. In production systems with infinite capacity and no variability there is no difference between cycle time and lead time (Hopp and Spearman, 2000). The problem is that all real systems contain variability. Therefore, we define cycle time as a stochastic variable giving the time span an individual flow unit takes to traverse a process from entering to leaving. For instance, the actual cycle time between commencement and completion of a manufacturing process, as it applies to MTS products, is called production cycle time, and the actual, achieved cycle time from customer order origination to customer order receipt (all activities from the decoupling point downstream to the customer) is called order fulfillment cycle time. In contrast, lead time is specified by management and is used to indicate the maximum allowable cycle time for an entity, e.g., production lead time (time allowed for the manufacturing process) (Hopp and Spearman, 2000). Delivery time is the time allowed to fill a customer order from start to finish.

The evaluation problem can be characterized by the following example. Cost center X is responsible for the component purchasing price (allocated expenses are the manufacturing costs) and cost center Y is in charge of the logistics cost (in detail, transport cost, inventory and picking cost, sequencing cost).
A relocation of the component supplier company into the neighborhood would reduce the transport time
and cost as well as, simultaneously, the final product inventory. On the other hand, the supplier company would have to invest in new facilities, which would increase the purchasing price. Even if the sum of the purchasing price and the logistics cost were decreased by the new process alternative, it would not be implemented, because under the given restriction cost center X has to bear higher costs. Furthermore, only under these circumstances (the supplier or at least its warehouse is located not too far away) would it be possible to apply a postponement strategy. Consequently, the prerequisite for the following process evaluation model is to obtain congruence between cost center and process structure.
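To make the incentive conflict concrete, the following minimal sketch uses purely hypothetical per-unit figures (they are not taken from the case): the relocation lowers the total process cost, yet it raises the cost borne by cost center X, so it is rejected under separate cost-center budgets.

```python
# Hypothetical per-unit figures for illustration only (not case data).
current    = {"purchasing_price_X": 100.0, "logistics_cost_Y": 40.0}
relocation = {"purchasing_price_X": 105.0, "logistics_cost_Y": 25.0}

total_current, total_new = sum(current.values()), sum(relocation.values())

print(f"total process cost per unit: {total_current} -> {total_new}")   # 140 -> 130
print(f"cost borne by cost center X: {current['purchasing_price_X']} -> "
      f"{relocation['purchasing_price_X']}")                            # 100 -> 105
# The alternative saves 10 per unit overall, but X's budget worsens by 5,
# so with separate cost-center objectives the improvement is blocked.
```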
3 Problem Formulation and Evaluation Model We summarize the basic idea of our evaluation model that determines the optimal position of the CODP by an objective function (see (1)):

min_{j ∈ N}  p_j q_j + s ( ∑_{l ∈ B} u_{lj} + ∑_{k ∈ A} d_{kj} ) ,    (1)

with
N : set of all process alternatives with combinations of B and A, index j;
B : set of activities performed before the CODP, index l;
A : set of activities performed after the CODP, index k;
p_j : inventory carrying costs per product unit, dependent on the choice of process alternative j;
q_j : average inventory in product units, dependent on the choice of process alternative j;
s : number of manufactured product units;
u_{lj} : cost rate per activity l and product unit, dependent on the choice of process alternative j;
d_{kj} : cost rate per activity k and product unit, dependent on the choice of process alternative j.
It is written as the sum of the inventory carrying costs multiplied by the average inventory and the activity costs multiplied by the production quantity. This equation addresses a supply chain process at the activity level with a given service level (see (2) and (3)), forecast accuracy, transport schedule (frequency) and transport distance (between supplier and manufacturer), i.e. for each change of the transport schedule, transport distance and/or service level the optimal position of the CODP has to be recalculated. For the production activities located downstream of the CODP, additional resources will be necessary to be able to deliver the customer order within the specified time (service level). Thus, the manager must determine the safety capacity such that the cycle time of the order fulfillment process is not larger than the delivery time. Usually the
order fulfillment cycle time will vary due to demand risks and operational risks. As a consequence, the minimal safety time of the order fulfillment cycle time, T_d − X̄, depends on the variability of the order fulfillment cycle time, represented by the standard deviation σ_p, and on the promised delivery performance d, and is specified by

T_d − X̄ ≥ k × σ_p ,    (2)
Prob(X ≤ k) = d ,    (3)

with
k : the d-quantile of the order fulfillment cycle time X;
T_d : the delivery time claimed by the customer;
X̄ : the mean order fulfillment cycle time.
The cost rates include two hierarchy levels of factory operating expenses, i.e., batch level activities (setups, inspection, etc.) and unit level activities (direct labor, materials, machine costs, etc.). The calculation of these cost rates is based on performance measures which can be determined by a detailed process analysis. The possibility of exact calculation is limited by the complexity of the problem (the supply chain process), and estimation usually is too imprecise. Dynamic, stochastic computer simulation can be utilized to deliver the required input for the evaluation of supply chain process alternatives (Bertrand and Fransoo, 2002). In particular, the average inventory and the ideal capacity utilization have to be determined under the restrictions described above (service level, forecast accuracy, transport schedule as well as transport distance). The solution is determined by these very strong constraints. Therefore, it is not possible to analyze all process alternatives with combinations of B and A. We have to restrict our analysis to the alternatives that are feasible from the management perspective. Following the advice of Silver (2004), we put the emphasis on the evaluation of improvements rather than on optimization.
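As a rough illustration of how (1)–(3) could be operationalized, the following Python sketch compares two hypothetical process alternatives: it checks the service-level constraint via the empirical d-quantile of simulated order fulfillment cycle times and then evaluates the objective function. All names, cost rates and cycle-time distributions are assumptions for illustration, not parameters or results of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def feasible(cycle_times, delivery_time, d):
    """Constraints (2)-(3): the d-quantile of the order fulfillment cycle time
    must not exceed the delivery time claimed by the customer."""
    return np.quantile(cycle_times, d) <= delivery_time

def objective(p, q, s, u, d_rates):
    """Objective (1): inventory carrying costs plus activity costs before (u)
    and after (d_rates) the CODP for one process alternative."""
    return p * q + s * (sum(u) + sum(d_rates))

# Two hypothetical alternatives j (all figures are illustrative only).
alternatives = {
    "MTS at the LSP":     dict(p=2.0, q=900.0, s=12000, u=[0.8, 0.5, 0.3], d_rates=[0.2],
                               cycle_times=rng.triangular(0.5, 1.0, 2.0, 5000)),
    "ATO, JIS finishing": dict(p=2.0, q=450.0, s=12000, u=[0.8, 0.5], d_rates=[0.4, 0.3],
                               cycle_times=rng.triangular(1.5, 2.2, 2.9, 5000)),
}

delivery_time, service_level = 3.0, 0.99          # claimed delivery time (h) and d
for name, a in alternatives.items():
    if feasible(a["cycle_times"], delivery_time, service_level):
        cost = objective(a["p"], a["q"], a["s"], a["u"], a["d_rates"])
        print(f"{name}: total cost = {cost:.0f}")
```

The sketch only hints at the mechanics; in the paper the required inputs (average inventory, cycle times, capacity utilization) come from the detailed process simulation described in Section 4.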
4 Illustration 4.1 Supply Process Description We want to illustrate our model by means of a cross-border supply chain of a voluminous product (henceforth called product A) from the automotive industry. This supply chain consists of a supplier, a logistics service provider (LSP) and an original equipment manufacturer (OEM) in Central Europe. The considered supply chain is a “just-in-sequence” (JIS) process, which means that the supplier is responsible for delivering the requested parts directly to the assembly line at the right time and in the correct sequence (cf. Mishina and Takeda, 1995; Liker and Wu, 2000; Corswant and Fredriksson, 2002; Hüttmeir et al, 2009; Thun et al, 2007). The LSP is a subcontractor of the supplier and is located in an industrial park near the OEM’s factory. Its
duty is to sequence the variants of product A for three different car types, which are built on one assembly line. The supplier delivers product A for just one of the three car types; for the other two car types product A is delivered by different suppliers. The flow of information, especially the order placement, is organized in four steps according to automotive standards (VDA 49xx). First, the supplier receives the delivery schedule weekly for about six months in advance. The first two weeks in this schedule contain confirmed amounts for the whole week for each of the different product variants, whereas just the exact day can change; the further weeks serve for raw material scheduling. Second, the supplier receives daily the detailed delivery schedule for two weeks in advance, containing the amounts of each product variant on a daily basis. The third and fourth steps are the JIS-specific delivery schedules, which are transmitted continuously. About 12 hours before part assembly, the supplier gets the information on which body shell is placed on the line. About three hours prior to the assembly of the part the supplier gets the binding JIS delivery schedule, which means that the supplier, or rather its LSP, has to provide the ordered part at the assembly line within three hours. Although the supplier has detailed and comprehensive information on the demand, it knows the exact variant needed only three hours before the part is assembled. As in our case the production cycle time plus transport cycle time is much higher than the delivery time of three hours claimed by the OEM, the production process of the supplier is a clear make-to-stock process, based on accurate forecasts (see also below).
4.2 Simulation Model For the development of our simulation model we used the software ARENA 7.0 and conducted three major steps. The first step was gathering and validating the data for defining the model. From the OEM we received process documentations and various documents as well as delivery schedules (see Appendix – Table 4) for product A for a certain period of time. In several meetings with the OEM we made sure that we correctly understood all the data. In the second step we built the simulation model and verified with the help of the OEM (the responsible managers) whether the model was a sufficiently accurate picture of reality. This model validation was based on the relevant performance measures described in detail below. Third, we conducted experiments with the model to evaluate various scenarios concerning our research questions.

Table 1 Relative frequencies of product A variants [%]

variant no.      1     2      3     4     5     6      7     8     9     10    11    12    13    Σ
rel. frequency   0.01  24.16  0.02  0.03  0.01  64.16  1.58  6.08  0.14  0.34  0.09  2.98  0.40  100
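The empirical distribution of Table 1 can be reproduced in a few lines of Python; this is only a hedged sketch of how one might randomize the variant sequence, since the actual study implements the empirical distribution inside the ARENA model and keeps the exact 22-week totals of the schedule, which this simple multinomial draw only approximates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Relative frequencies of the 13 product A variants from Table 1 (as shares).
variants = np.arange(1, 14)
probs = np.array([0.0001, 0.2416, 0.0002, 0.0003, 0.0001, 0.6416, 0.0158,
                  0.0608, 0.0014, 0.0034, 0.0009, 0.0298, 0.0040])

# Draw a stochastic sequence of JIS calls: the variant mix follows Table 1,
# but the timing and order of the variants is random.
sequence = rng.choice(variants, size=1000, p=probs)
print(np.round(np.bincount(sequence, minlength=14)[1:] / sequence.size, 3))
```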
In our simulation setting we consider a period of 22 weeks. We assume that the supplier produces according to the delivery schedule and transfers the goods to the LSP by truck. Its objective is to make sure that every JIS delivery schedule (= customer order) arriving at the LSP can be fulfilled (no stock-out situation). In our model we consider an empirical demand distribution (see Table 1) that is based on the delivery schedule according to which the supplier produces. This means that the sum of every product variant over the 22 weeks is the same as in the schedule, but the exact time and the sequence of the various variants are stochastic. Table 1 shows the relative frequencies of the product A variants in percent over the observed 22 weeks.

According to the estimation of the responsible manager, the transportation time from the supplier to the LSP varies between 48 and 72 hours (about 2300 km). Because of the lack of more detailed data on this parameter we implemented a triangular distribution for the transportation time in our model (cf. Law et al, 1991). In the real system a truck is only sent if it can be loaded completely, which means that the transport batch size is a complete truck load. To remain on the conservative side, we decided to implement a fixed interval for the transport with varying truck loads in our model. We send the truck every third week, every second week or once a week, with a batch size equal to the production amount between the transport intervals. This assumption helps us to avoid showing too high a saving potential. Furthermore, we assume that the products scheduled (ordered) for one particular week will be finished by the supplier one week earlier.

As mentioned above, the LSP has about three hours to allocate the scheduled part at the assembly line. Transportation from the LSP stock to the OEM lasts 15 minutes; the transportation batch size is two containers with 25 parts each. The time between two consecutive JIS delivery schedules is the conveyor time of the assembly line, defined by Hopp and Spearman (2000) as the time the conveyor allows at each station of the line. When two containers are filled completely (50 times the conveyor time), they are loaded on a small truck, driven to the OEM’s factory, unloaded at a predefined unloading point and transferred inside the factory to the appropriate cycle (station) of the assembly line (assembling point). Then empty containers from former deliveries are collected, loaded on the truck and brought back to the LSP. Figure 1 shows the aggregate flow chart of the described process (process I).

The final step in modeling our supply chain was the implementation of performance measures. The main measures in our model are cycle times and inventories. These measures were used in two ways. First of all, we needed them to validate the model. For example, we measured the cycle time from the arrival of the JIS delivery schedule to the allocation of the part at the line; the cycle time in our model has to be smaller than the claimed delivery time. Because of the “no stock-out situation” objective, we also verified that all orders can be fulfilled by examining the waiting times of JIS delivery schedules. Second, we needed the measures to evaluate the performance of the process and of the various process amendments shown in the following scenarios.

In process II (see Fig. 1) we virtually change the production location. We relocate it to a site near the LSP stock, which means that we drastically shorten the transportation distance between supplier and LSP.
Fig. 1 Flow charts for all process alternatives (process I: transportation time T(48, 60, 72) h; process II: transportation time T(2, 3, 4) h; each process includes the 0.25 h final transport to the assembly line)
The process logic remains as in process I; only the transportation time now varies between two and four hours (triangular distribution). This change in the supply chain can be achieved either by forcing the supplier to move its production or by simply choosing another supplier already located not too far from the LSP. In process III we suggest a major modification of the process logic: we move the CODP in the supply chain towards the supplier. That means that only semi-finished products are produced according to the delivery schedule (make-to-stock), and the finishing and customization of product A take place after receiving the JIS delivery schedule. Figure 1 shows the flow chart of this assemble-to-order setting, whose applied strategy corresponds to the postponement concept (Van Hoek, 2001). Compared to process I, we placed a stock for semi-finished goods in our supply chain and removed the stock for the finished product A variants. We assume that all variants of product A can be finished from one semi-finished product, and furthermore that this finishing step is technically possible within a time that allows complying with the delivery time of three hours. Table 2 gives an overview of the base case and the eight scenarios. For the process alternatives I and II we additionally vary the transport interval between three weeks, two weeks, one week and daily. We use the average inventory in the LSP stock as the key metric for the evaluation of these process settings. Obviously, the “no stock-out situation” objective and the claimed delivery time were applied to these scenarios and were checked carefully.
Table 2 Scenario description

             Process   Transport frequency
Base case    I         one transport every third week
Scenario 1   II        one transport every third week
Scenario 2   I         one transport every second week
Scenario 3   II        one transport every second week
Scenario 4   I         one transport every week
Scenario 5   II        one transport every week
Scenario 6   I         five transportations per week
Scenario 7   II        five transportations per week
Scenario 8   III       JIS delivery schedule
As mentioned in the introduction, we only accept solutions with the given service level (meeting the delivery time).
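For readers who want to experiment with the logic of the scenarios, the sketch below is a deliberately simplified, aggregate re-implementation idea in Python: it only varies the transport interval and tracks the LSP stock week by week. It is not the ARENA model of the study – it ignores the 13 variants, the triangular transport times and the three-hour JIS constraint – and the demand figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
WEEKS = 22
# Hypothetical aggregate weekly demand (units); the study uses the schedule in Table 4.
weekly_demand = rng.poisson(550, size=WEEKS)

def average_lsp_inventory(interval_weeks):
    """Ship, at the start of every interval, exactly the demand of the coming
    interval; return the average end-of-week stock at the LSP (no stock-outs)."""
    stock, levels = 0.0, []
    for w in range(WEEKS):
        if w % interval_weeks == 0:                       # truck arrival
            stock += weekly_demand[w:w + interval_weeks].sum()
        stock -= weekly_demand[w]                         # weekly consumption
        assert stock >= -1e-9, "stock-out would violate the service objective"
        levels.append(stock)
    return float(np.mean(levels))

base = average_lsp_inventory(3)                           # base case: every third week
for interval in (3, 2, 1):
    avg = average_lsp_inventory(interval)
    print(f"transport every {interval} week(s): avg. LSP stock = {avg:6.1f} "
          f"({avg / base:.0%} of base case)")
```

Even this crude approximation reproduces the qualitative effect reported below: the more frequent the transport, the lower the average inventory at the LSP.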
4.3 Results and Findings Table 3 shows the inventory (the minimum inventory necessary to fulfill the service level requirement) in the LSP warehouse for all scenarios. Each scenario is compared to the base case. The table also shows the variation between the simulation runs: column av displays the average value (mean), max the highest value, and min the lowest value of the runs. The coefficient of variation, the ratio of the standard deviation to the mean value, underlines that moving the CODP towards the supplier also increases inventory variability. IAC denotes the average increase of activity costs.

Table 3 Change of inventory aggregated over all variants

             min     av      max     coefficient of variation   IAC
Base case    91.9%   100.0%  108.1%  0.04                       0.0%
Scenario 1   88.8%   97.8%   106.8%  0.05                       0.5%
Scenario 2   75.1%   85.5%   97.7%   0.06                       3.6%
Scenario 3   70.6%   83.1%   95.9%   0.06                       4.2%
Scenario 4   69.6%   76.2%   86.4%   0.05                       5.9%
Scenario 5   67.2%   74.1%   84.8%   0.05                       6.5%
Scenario 6   63.7%   70.9%   84.3%   0.06                       7.3%
Scenario 7   59.5%   69.4%   80.3%   0.07                       7.6%
Scenario 8   47.4%   57.2%   67.1%   0.09                       10.7%
All scenarios show a considerable reduction of inventories at the given service level (no stock-out situation). We analyzed two aspects in detail. First, by the process amendment of a substantial transport distance reduction, shown in scenarios 1, 3, 5 and 7, a decrease of the overall inventory by only about 1.5% to 2.4% might be achievable compared to the respective reference settings (base case and scenarios 2, 4 and 6). Second, we
investigate the influence of the transport schedule (frequency). For a given transport distance, it is possible to reduce the overall inventory by a maximum of 29.1% (base case to scenario 6). The major process rearrangement (process III) shown in scenario 8 attains an inventory reduction of about 42.8% compared to the base case. We want to emphasize that the modelling, including the necessary assumptions, was conducted very carefully in order to show realistic results. A limitation of the presented results is caused by the modelling of the forecast accuracy: this high accuracy drives the extent of the inventory reduction achieved in all scenarios. Finally, we are able to identify the critical cost ratio that enables a recommendation for the choice of the ideal position of the customer order decoupling point. Equation (1) shows that inventory costs (IC) and activity costs (AC) are relevant; the transport costs belong to the activity costs. In our illustration example 20% of the costs are inventory costs. Therefore, scenario 8 with an average inventory reduction (IR) of 42.8% should be implemented if the average increase of activity costs (IAC) is lower than 10.7% (see also Tables 3 and 4):

IAC < (IC × IR) / AC .    (4)
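A quick check of decision rule (4) with the figures from the illustration example (20% inventory cost share, 42.8% inventory reduction) can be written in a few lines of Python; the function name is ours, not part of the original model.

```python
def critical_activity_cost_increase(inventory_cost_share, inventory_reduction):
    """Decision rule (4): a scenario pays off as long as the average increase of
    activity costs (IAC) stays below IC / AC * IR."""
    activity_cost_share = 1.0 - inventory_cost_share
    return inventory_cost_share / activity_cost_share * inventory_reduction

# 20% of costs are inventory costs and scenario 8 reduces inventory by 42.8%:
print(f"{critical_activity_cost_increase(0.20, 0.428):.1%}")   # 10.7%
```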
5 Conclusion In this paper, we show that some of the core aspects of supply chain management are not sufficiently taken into account by performance measurement and management accounting systems. In particular, the separation of cost and profit centers that belong to the same process creates problems. We build an evaluation model that is able to support the analysis of how the position of the CODP, in combination with the transport distance and the transport schedule, influences the key performance measures (average inventory, etc.) and the cost of the analyzed supply chain processes. Our evaluation model is illustrated by an example from the automotive industry. First, we assess the effect of distance reduction within a just-in-sequence (JIS) supply process. Second, we show the effects of moving the CODP and the conditions under which such movements are possible. The results of the process simulations show that a simple transport distance reduction decreases the average inventory only by 1.5% to 2.4%. Further-reaching improvements (average inventory reduction) of up to 42.8% are only possible through additional modifications of the transport schedule and of the process logic (moving the CODP towards the supplier). In this context, we find that the forecast accuracy has a major impact on the improvement potential created by moving the CODP. These results provide very helpful decision support for management because the changes of activity costs are often well known. The demanding problem is how inventory reductions caused by process alternatives (under consideration of different
transport schedules) can be evaluated. Our model solves this problem and enables management to build an evaluation model supported by simulation. Further research activities should integrate the demand process in more detail. In particular, it should be analyzed how the forecast accuracy affects the supply chain performance.
Appendix

Table 4 Delivery schedule of product A (units per calendar week and variant)

cw      1     2    3   4   5     6    7    8   9  10  11   12  13      Σ
 1      0   117    1   1   0   376    2   41   0   0   0   14   2    554
 2      0    77    0   1   0   314    2   36   0   2   0   15   0    447
 3      1    62    0   0   0   296    2   31   0   3   0   13   1    409
 4      0    48    0   0   0   255    2   18   0   4   0    7   3    337
 5      0    76    0   0   1   248    5   28   2   2   2   13   5    382
 6      0    87    0   0   0   258   10   34   4   3   2   22   7    427
 7      0    72    0   0   0   226    6   34   1   1   1   20   5    366
 8      0    81    0   0   0   215    7   23   1   0   0   16   1    344
 9      0   114    0   0   0   247    9   18   0   0   0    8   0    396
10      0   123    0   0   0   252    4   31   0   0   0   10   1    421
11      0   152    0   0   0   286    8   38   0   1   0   20   1    506
12      0   251    0   0   0   434   15   35   0   0   0   17   1    753
13      0     0    0   0   0     0    0    0   0   0   0    0   0      0
14      0     0    0   0   0     0    0    0   0   0   0    0   0      0
15      0   209    0   0   0   406   16   42   1   1   0   20   2    697
16      0   128    0   0   0   343   13   40   1   1   0   24   3    553
17      0   119    0   0   0   341    8   30   2   2   1   23   3    529
18      0   190    0   0   0   502   12   41   2   5   1   24   3    780
19      0   261    0   0   0   647   15   55   1   6   0   24   3   1012
20      0   253    0   0   0   651   18   53   0   3   0   24   3   1005
21      0   240    1   1   0   686   19   52   1   3   1   24   2   1030
22      0   239    0   0   0   715   17   50   1   4   3   19   2   1050
Σ       1  2899    2   3   1  7698  190  730  17  41  11  357  48  11998
References

Bertrand J, Fransoo J (2002) Operations management research methodologies using quantitative modeling. International Journal of Operations and Production Management 22(2):241–264
Cooper R, Kaplan R (1991) Profit priorities from activity-based costing. Harvard Business Review 69(3):130–135
Corswant F, Fredriksson P (2002) Sourcing trends in the car industry: a survey of car manufacturers’ and suppliers’ strategies and relations. International Journal of Operations & Production Management 22(7):741–758
Desai P, Kekre S, Radhakrishnan S, Srinivasan K (2001) Product differentiation and commonality in design: Balancing revenue and cost drivers. Management Science 47(1):37–51
Fisher M (1997) What is the right supply chain for your product? Harvard Business Review 75(2):105–116
Fisher M, Ittner C (1999) The impact of product variety on automobile assembly operations: empirical evidence and simulation analysis. Management Science 45(6):771–786
Hoekstra S, Romme J, Argelo S (1992) Integral logistic structures: developing customer-oriented goods flow. McGraw-Hill
Hopp W, Spearman M (2000) Factory physics. McGraw-Hill, New York
Hopp W, Spearman M (2004) To Pull or Not to Pull: What Is the Question? Manufacturing & Service Operations Management 6(2):133–148
Hüttmeir A, de Treville S, van Ackere A, Monnier L, Prenninger J (2009) Trading off between heijunka and just-in-sequence. International Journal of Production Economics 118(2):501–507
Ittner C, Larcker D, Randall T (1997) The activity-based cost hierarchy, production policies and firm profitability. Journal of Management Accounting Research 9:143–162
Labro E (2004) The cost effects of component commonality: a literature review through a management-accounting lens. Manufacturing & Service Operations Management 6(4):358–367
Lancaster K (1990) The economics of product variety: A survey. Marketing Science 9(3):189–206
Law A, Kelton W (1991) Simulation modeling and analysis, 2nd edn. McGraw-Hill, New York
Liker J, Wu Y (2000) Japanese automakers, US suppliers and supply chain superiority. Sloan Management Review 42(1):81–93
Mason-Jones R, Towill D (1999) Using the information decoupling point to improve supply chain performance. International Journal of Logistics Management 10(2):13–26
Mishina K, Takeda K (1995) Toyota Motor Manufacturing, USA, Inc. Harvard Business School Case 9-693-019
Naylor J, Naim M, Berry D (1999) Leagility: integrating the lean and agile manufacturing paradigms in the total supply chain. International Journal of Production Economics 62(1-2):107–118
Olhager J (2003) Strategic positioning of the order penetration point. International Journal of Production Economics 85(3):319–329
Sharman G (1984) The rediscovery of logistics. Harvard Business Review 62(5):71–79
Silver E (2004) Process management instead of operations management. Manufacturing & Service Operations Management 6(4):273–279
Swaminathan J, Tayur S (2003) Models for supply chains in e-business. Management Science 49(10):1387–1406
Thun J, Marble R, Silveira-Camaros V (2007) A conceptual framework and empirical results of the risk and potential of just in sequence: a study of the German automotive industry. Journal of Operations and Logistics 1(2):1–13
Van Hoek R (2001) The rediscovery of postponement: a literature review and directions for research. Journal of Operations Management 19(2):161–184
Vorst J, Beulens A, Wit W, Beek P (1998) Supply chain management in food chains: Improving performance by reducing uncertainty. International Transactions in Operational Research 5(6):487–499
Information Needs for Decisions on Supply Chain Design

Stefan Seuring and Tino Bauer
Abstract Supply chain (re-)design requires information that is often not available from standard information systems, and the information needs are not defined ex ante. Therefore, a flexible approach to management control systems is needed which allows decision support to be provided even in complex, ill-structured decision situations. The study is based on a single case study only. This should help to gain insights into what kind of information is required and how it is used in decisions regarding supply chain design. The results of this paper highlight the need for managers to be aware of which information sources they use in (re-)designing their supply chains. The paper outlines how information is viewed in supply chain management by taking up a perspective which is usually presented in management accounting. The paper offers a first framework to systemize related information needs and relates them to different supply chain (re-)design decisions.
1 Introduction The availability of the “right” information plays a major role in (re-)designing supply chains “[. . . ] to anticipate where in the supply chain lucrative opportunities are likely to arise and to invest in the capabilities and relationships to exploit them. . . ” (Fine, 1998, p. 76). Supply chain management does not exist per se, but has to be established among the partners by management activities to realize its benefits, such as lower costs and improved customer value (Mentzer et al, 2001, pp. 7 and 12). In this respect a supply chain management process has to be followed (Childerhouse et al,
2002, p. 676f.). This supply chain design and management process can be reduced to two main phases: strategy formulation (planning) and strategy implementation (Bechtel and Jayaram, 1997, p. 20ff.). Management has to make decisions in both of these phases while (re-)designing a supply chain. Recent literature often focuses on optimizing existing supply chains by improving the coordination of information and material flows among partners, rather than concentrating on the initial implementation of supply chain management (Seuring, 2009).

Information needs and information supply have to be fulfilled based on the respective decision to be made. This provides a link to management accounting (Atkinson et al, 2007) and management control systems. Related information can be distinguished into primary and secondary information (Friedl, 2003; Seuring, 2006). Primary information serves to fulfill operational processes and is usually even mentioned in definitions of supply chain management, which allude to the management of material and information flows. “Management accounting information is intended to meet specific decision-making needs at all levels in the organization” (Atkinson et al, 2007) and the supply chain (Jeschonowski et al, 2009). The latter is particularly true if supply chains are seen as the primary level of analysis, as is usually claimed in related publications aiming to describe the “core” of supply chain management (see e.g. Mentzer et al, 2001; Handfield and Bechtel, 2004). Such decisions are quite different from routine decisions. They can be characterized as innovative and constitutional and often lead to fundamental change within the supply chain and even within and among the participating companies. Therefore, information needs cannot be anticipated before the decision has to be taken and cannot be fulfilled ad hoc based on existing information (technology) systems. Supply chains as the unit of analysis are rather neglected in this respect (Berry, 1997; Seal et al, 2004). Moreover, information needs have to be reduced to a manageable amount; the often aimed-for “complete” information is neither theoretically nor economically achievable.

This brief problem statement shows that a great deal of decision support is needed in supply chain (re-)design processes, and the mentioned information needs play a key role in the related decision process. This paper aims to identify the information needs in supply chain (re-)design decisions. The question is raised how information is supplied within the supply chain decision process. Starting from constitutive supply chain management decisions in supply chain (re-)design activities, the following questions are raised:

• How are information needs analyzed and defined within the supply chain (re-)design process and what functions do they fulfill?
• Which instruments are used to supply and aggregate related information?
• Who is responsible for the information provision within the supply chain or focal company?

These questions are rarely discussed, in particular if empirical examples are taken into account. Therefore, this paper deals with the question of how information supply is fulfilled in the (re-)design of supply chains. As a first step, inter-organizational information sharing is addressed. Next, previous research on supply chain management and the management control systems literature which focuses on the decision process in
supply chain management is summarized. Based on this, a conceptual framework is proposed which links decisions on supply chain (re-)design to the related functions, instruments and institutionalization of management control systems. Four case studies have been conducted in German companies in the chemical and pharmaceutical industry, of which only one is presented in this paper. This example is used to analyze which information is used in supply chain (re-)design processes and how. The research methodology applied is presented and the findings are outlined. This is followed by a discussion and a conclusion.
2 Literature Review and Conceptual Framework 2.1 Supply Chain (re-)Design Not surprisingly, publications on the design of the “right” supply chain (Fisher, 1997) form one core part of related research. Transferring the concept of focused factories to supply chain management (Skinner, 1974) and taking into account the product or market characteristics which influence the proper supply chain design (Fisher, 1997; Lee, 2002; Towill and Christopher, 2002; Childerhouse et al, 2002), it can be concluded that decisions on supply chain design can be systemized based on the product life cycle phase, in a similar manner as Hayes and Wheelwright (1979) have done for production systems. In a simplified approach, three phases will be distinguished: (1) the pre-market or product design phase, (2) the market phase, where production and logistics take place, and (3) a post-market or product return phase. This constitutes a first dimension to structure related supply chain design decisions. To structure management activities in the supply chain environment, the decision process is often explained by two major phases: strategy formulation (planning) and strategy implementation (Bechtel and Jayaram, 1997, p. 20ff.), or supply chain configuration and operations (Cooper and Slagmulder, 2004; Seuring, 2009). During the strategy formulation phase, procedures, tools, skills and organizational structures which help to establish a “supply chain orientation” (Mentzer et al, 2001, p. 11f.) have to be defined. This “supply chain orientation” focuses on the optimization of the entire supply chain rather than on functional sub-optimization. Configuration decisions which have been made at a particular point from a functional or company-centered view will affect actors upstream and downstream in the supply chain. The impact of such process- and structure-related configurations on upstream and downstream functions and their performance must be evaluated (Eltantawy, 2008). Based on this evaluation, key process issues can be identified and re-engineered (Bechtel and Jayaram, 1997, p. 21f.). Within the strategy formulation process, specific markets and product groups have to be identified, and e.g. order-winner and order-qualifier characteristics can be used to get information about the competitive situation. Then a holistic supply chain strategy should be developed and objectives like service, quality, cost or lead times should be defined and prioritized.
Products or channels can be categorized into clusters with similar characteristics. Following this categorization, facilities and processes can be designed according to the desired objectives (Childerhouse et al, 2002). In the strategy implementation phase, the processes are analyzed and improvements are implemented, e.g. by use of a process reference model (Bechtel and Jayaram, 1997, p. 22f.). The process requirements are described at a more detailed level and control mechanisms are defined at each level (Childerhouse et al, 2002, p. 676). Hence, the two phases, configuration and operation, summarize distinct decisions for supply chain design. This can now be integrated with the product life-cycle dimension elaborated above. These two dimensions are integrated into the product-relationship-matrix. For reasons of simplicity, the framework is condensed into four distinct fields for the subsequent discussion: fields III and V comprise similar decisions, as do IV and VI, while the latter focus on return and recycling issues (see Fig. 1).
Fig. 1 Decision field in SCM: the product-relationship-matrix (Seuring, 2009, p. 225)
The product-relationship-matrix forms the part of the framework which is concerned with the design of the supply chain. Related information needs can be structured based on management control systems.
2.2 The Role of Management Information in Supply Chain (re-)Design Decisions Atkinson et al (2007) mention the important role of management information in guiding management action. They define management accounting as “a value adding continuous improvement process of planning, designing, measuring and operating both non-financial information systems and financial information systems that guides management action, motivates behavior, and supports and
creates the cultural values necessary to achieve an organization’s strategic, tactical and operating objectives”. Management information has to support strategic (planning), operational (operating) and control (performance evaluation) decision making and is intended to meet specific decision-making needs at all levels in the organization. Related information is a key source for decision making, improvement, and control in organizations. Effective management control systems can create considerable value for today’s organizations by providing timely and accurate information about the activities required for their success (Atkinson et al, 2007). As mentioned, in supply chain management the scope of management control systems has to be extended from a company-centered view to an inter-organizational perspective (Cooper and Slagmulder, 2004; Seal et al, 2004; Jeschonowski et al, 2009). Such management accounting and control systems can be described along three dimensions (see e.g. Friedl, 2003, p. 54): (1) the institutional perspective refers to who collects related information and aggregates it towards decision making, (2) the functional component relates to the specific purposes that related information should fulfill, and (3) the instrumental dimension deals with the specific management accounting techniques or instruments being applied in the respective case. Such concepts can be transferred from the individual company to the supply chain level (Handfield and Bechtel, 2004; Seuring, 2006).
2.3 Information Needs for Supply Chain Design After having outlined the decision process in supply chain management, a closer look should be taken at the decision objects. The specific problem of management control systems in supply chain management is to provide decision-specific information to the supply chain managers in the strategy formulation and in the strategy implementation phase. The direct objective of management control systems is the implementation of supply-chain-wide information systems to (re-)design the supply chain according to its customer-oriented effectiveness and efficiency objectives. Starting from the specific problem and the objectives, the functional, the instrumental and the institutional components of the management control system have to be adjusted to take supply chain management related issues into account. Table 1 summarizes the previous discussion: the management process of the decision forms the starting point taken up on the first level. The second level is derived from the supply chain design decision and therefore builds on the fields of the product-relationship-matrix. As the third level, the three descriptive dimensions of management control systems are taken up. The resulting framework is presented in Table 1.
Table 1 Conceptualising management control for supply chain design

Level 1: Management process – Level 2: Supply chain design decisions – Level 3: Management information provision (management information system: function, instruments and institution of the information needs of the SC manager)

1. Strategy formulation
  I. Strategic configuration of product/network
    Function: consult/support the management process: define common goals; identify critical product and client groups; define critical members of the supply chain
    Instruments: supply chain valuation
  III./V. Formation of the production/return network
    Function: achieve “supply chain orientation” at the actors; identification of central weak links
    Instruments: SC process mapping; project and change management

2. Strategy implementation
  II. Product design in the supply chain
    Function: manage and control the implementation of the strategy
    Instruments: intercompany performance measurement system; SC costing / SC activity-based costing; relationship management
  IV./VI. Process optimization in the supply/return chain
    Function: build systems which support the newly defined processes
    Instruments: (as above)

Institution (typical alternatives across the decision fields): de-central (at individual partners); central (via the focal company); team-based (one team with members from the partners); contract manufacturers management team
3 Research Methodology For the research question addressed, a qualitative case-based approach seemed justified. Hence, a single-case study design was chosen to test the conceptual framework (Voss et al, 2002). This can be seen as part of the theory building process. The case was part of a project inside the company to which one author had access as a participant. Still, the formal methods and rules for data collection and analysis were obeyed. Additional information was gathered that was particularly aimed at the research question addressed here.
The process of case study research has been described by different authors (Yin, 2003; Eisenhardt, 1989). Here, the five steps outlined by Stuart et al (2002) will be used as a blueprint to structure the description of the overall research process. The five stages will be described, offering an overview of the Kneipp case study.

1. Research question: the research question, as mentioned above, was addressed in a single case research design. The basic aim is to test the framework developed.
2. Instrument development: as a starting point, a single case study was chosen. For the empirical data needs and the detailed insights into the companies and supply chains, access to the field was required. This was possible for one author by being involved in project work with the company. As a further step, data collection took place at more than one embedded unit. Seuring (2008) assessed nearly 300 papers on supply chain management with regard to the research approach taken. Only in a total of 19 examples, 17 of which are case-based research, did data collection take place at more than one company of the supply chain. As will be outlined in the case study, data collection at the focal companies and critical suppliers or customers was carried out.
3. Data gathering: a range of instruments was used. This includes workshops, semi-structured interviews (a mix of open and closed questions and narration), site visits (direct observation) and process modeling. Data collection was conducted from March to June 2004. As already mentioned, one of the researchers was part of the project team. Hence, participatory observation and taking field notes were also involved. While the researcher acted as a “normal” team member, the additional role as researcher was made known to the other participants. A rich background of information could be collected this way.
4. Data analysis: again, a mix of techniques was employed. Interviews were transcribed and workshop protocols were written. Further, the information collected was reviewed by the interviewee or selected participants. The process models, which usually captured the current status of the supply chain and usually also its future development, were reviewed by key informants as well. Evidence from different sources allows for triangulation of the data sources. This was particularly true if data from suppliers or customers were involved, as staff members from these companies usually had a more independent opinion.
5. Dissemination: the case is used to validate the conceptual framework and discuss how information needs for supply chain design decisions are met.

After this overview, the case study will be described.
4 Supply Chain Redesign at Kneipp Kneipp (www.kneipp.com) produces consumer health and cosmetic products. Kneipp carries out its worldwide activities from Würzburg, Germany, where the company was founded in 1891. Today it is part of Hartmann AG, Heidenheim, Germany. In 2009, the company had about 350 employees and a turnover of 70 million Euros. Subsidiaries are located in the USA, Switzerland and the Netherlands. The foreign
activities have expanded continuously since 2001. In the meantime, products for health and well-being are being distributed in France and Japan. The pharmaceutical and consumer health industry is characterized by dynamic supply chains, where tight coordination is needed. In this dynamic environment Kneipp has to work continuously on improving its supply chain.
4.1 Supplier and Customer Network Kneipp distributes its finished products via drugstores, pharmaceutical wholesalers, retail stores, mail order retailers, pharmacies, export by subsidiaries in foreign countries and, since 2006, direct channels. Moreover, Kneipp produces products for Hartmann AG. The products are produced at two sites in Bad Wörishofen and Ochsenfurt-Hohestadt, both in Germany. Contract manufacturers are responsible for certain steps in the production process. Raw materials, active ingredients and creative services for packaging design are sourced from external suppliers. Packaging materials are sourced indirectly via Hartmann AG (see Fig. 2 for an overview of the respective supply chain).
Fig. 2 The health-care supply chain in the scope of the Kneipp case study and related data collection points (upstream: suppliers of packaging material and packaging design, Hartmann AG and contract manufacturers; focal company: Kneipp; downstream: pharmacies, pharmaceutical wholesalers, drugstore chains and Hartmann AG as customers)
4.2 Goals for Restructuring the Supply Chain at Kneipp The primary goal of the supply chain management initiative at Kneipp was to improve delivery reliability and reduce process cycle times. In the past, promised delivery dates were missed not only at the introduction of new products; well-established products, too, could often not be delivered to retailers on the promised date. The subsequent sections will use the framework to discuss the specific information needs of the single phases of the supply chain design process and relate them to the functions, instruments and institutional solutions observed.
4.3 Strategy Formulation – I. Strategic Configuration of Product and Network Based on participatory observation and document analysis the following information needs could be identified: in this phase, the most important product and customer groups were defined based on information about revenue, growth and customer requirements. Three business fields with homogeneous customer requirements were identified: retail (e.g. drugstores, retail stores, and pharmaceutical wholesalers), export and pharmacies. Starting from the customer requirements, order winners and order qualifiers were identified and prioritized. Subsequently, the processes responsible for the order-winning and order-qualifying factors were localized, and current weaknesses were highlighted. One function of management control was to provide the methodology to break down the pre-defined goal of optimizing delivery reliability. Another function was to define the information needed, to prepare the information, and to moderate the management workshop with the goal of discussing the information with the participants and agreeing on common process goals. For analyzing the information the following management control instruments were applied: using a business field matrix, homogeneous business fields were put together. A business field matrix structures the total revenue and its projected development by two dimensions: major market channels and major product groups. Business-field-specific order-winning factors were defined and prioritized. In the next step, the impact of each business process on these factors was evaluated (supply chain valuation). Based on the order winners, the most important actors and processes were determined. Information supply from internal and external parties, delivery reliability of parties up- and downstream in the supply chain, and flexibility – especially for new product introduction and for the order management of the foreign subsidiaries – were ranked as critical for success. The processes order-to-cash, customer relationship management, new product introduction and production (incl. contract manufacturing) were ranked as having a large impact on fulfilling these order-winning factors. Process goals were defined for each of the core processes. A cross-functional project team with members from all critical parts and processes of the supply chain was formed. Responsibility for the information provision rested with the pre-defined supply chain management control team.
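As an aside, a business field matrix of the kind described here can be sketched in a few lines of Python; the figures, channel names and product groups below are hypothetical and only illustrate the instrument, not the Kneipp data.

```python
import pandas as pd

# Hypothetical revenue records by market channel and product group (kEUR).
sales = pd.DataFrame({
    "channel":       ["retail", "retail", "export", "export", "pharmacy", "pharmacy"],
    "product_group": ["bath",   "skin",   "bath",   "skin",   "bath",     "skin"],
    "revenue":       [12000,    8000,     5000,     3000,     6000,       4000],
})

# Business field matrix: total revenue structured by the two dimensions
# major market channel and major product group (with row/column totals).
matrix = sales.pivot_table(index="channel", columns="product_group",
                           values="revenue", aggfunc="sum", margins=True)
print(matrix)
```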
4.4 Strategy Formulation – III. Formation of the Production/Return Network To gather data about the second decision area, semi-structured interviews (a mix of open and closed questions and narration) were conducted with the members of the cross-functional project team, and site visits (direct observation) were carried out. To answer the question of which information was demanded, the material used in the
management process (information about cost, quality, time, weak points and action points) was investigated and analyzed. The function of the management control team was to identify weak points in the existing processes, evaluate them against the process goals which had been defined in the phase “strategic configuration of product/network”, and prioritize the impact of the as-is situation. After presenting the information to the cross-functional project team, the definition of action points was moderated by the management control team. The management control team used instruments like individual interviews with the person in charge of each process. The results were documented in a web-based process modeling tool to highlight which sub-processes were responsible for not reaching the defined process goals. For the weak points identified, action points were defined which were implemented in phase IV, “process optimization in the supply/return chain”. A central management unit, the “supply chain management center”, was institutionalized, in which the functions collaborative demand planning with key customers, disposition of raw and packaging material with key suppliers, order and stock management for finished goods, high-level production planning, and management of contract manufacturers were integrated. The procurement department for packaging material was re-integrated from an external partner. A supplier performance management system was set up and the procurement departments of all sites were integrated. Finally, the sales process, which had been performed by a third party, was reintegrated into the focal company, too. To get a holistic understanding of the relevant part of the supply chain, all members of the cross-functional team were temporarily brought together to agree on action points. Later, these action points were detailed and handed over to smaller teams.
4.5 Strategy Implementation – II. Product Design in the Supply Chain Data gathering in the strategy implementation phase was conducted by document analysis and participation in steering committee meetings. The function of the supply chain management control team was to manage and control the implementation in order to stick to the defined timelines and to achieve the committed goals. In the implementation phase, instruments for organizational design were used. As a result, the functional structure is now overlaid with one person responsible for the introduction of a product, from the packaging material design to category placement at the retailer. A process was institutionalized for how new products can be developed with new suppliers. The product management for local and international markets was standardized. These newly defined processes were published in a web-based process modeling tool in which responsibilities and activities are outlined per person. An area for process improvements was made available online, and a process for change was established. The operational responsibility for implementing the defined changes
was with the international marketing and sales team. The management control team was in charge of monitoring timelines and results.
4.6 Strategy Implementation – IV. Process Optimization in the Supply Chain As in the product design phase, data was gathered by document analysis and participation in steering committees. The function of the supply chain management control team was not only to manage and control the implementation of the defined action points but also to build up a supply chain performance management system to monitor whether the planned goals were reached. Therefore, a supply chain performance management system based on selected key performance indicators derived from the process goals was used as the related instrument. Such a system was designed and implemented to monitor the effects of (1) the integration of the procurement of packaging materials, (2) the newly defined sourcing process, (3) the demand planning with key customers and the collaborative prioritization of orders in the case of non-availability, (4) the new integrated disposition and stock management process, and (5) the new integrated planning and control system for contract manufacturers on the overall goal of improving delivery reliability. The responsibility for the monitoring was with the supply chain management control team, who reported related improvements to the CEO.
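To make the idea of such a performance measurement system tangible, the following sketch computes a delivery reliability indicator from hypothetical order records; the data, field names and the exact KPI definition are assumptions, since the paper does not disclose the indicators used at Kneipp.

```python
import pandas as pd

# Hypothetical order records (promised vs. actual delivery dates).
orders = pd.DataFrame({
    "promised":  pd.to_datetime(["2004-05-03", "2004-05-05", "2004-05-10", "2004-05-12"]),
    "delivered": pd.to_datetime(["2004-05-03", "2004-05-07", "2004-05-10", "2004-05-11"]),
})

# Delivery reliability: share of orders delivered on or before the promised date.
on_time = (orders["delivered"] <= orders["promised"]).mean()
# Average delay in days (late orders only).
avg_delay = (orders["delivered"] - orders["promised"]).dt.days.clip(lower=0).mean()

print(f"delivery reliability: {on_time:.0%}")   # 75%
print(f"average delay: {avg_delay:.1f} days")   # 0.5 days
```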
4.7 Results The realization of the defined initiatives (strategy implementation) led to a significantly faster market introduction of new products. In the past, new product introduction was organized by product lines; today one project manager is responsible for the new product introduction process. The change from the department perspective to a project organization helped to reduce the effort of introducing a new product by up to 30%; significantly less coordination is needed. An organizational unit, the supply chain management center, was established. This unit encompasses the former functional units demand planning, disposition, stock management, production planning (internal and external), order management, and management and control of the contract manufacturers. The sourcing process was redesigned for new materials as well as for existing ones. Contract manufacturers were integrated by defining universal decision and information structures. The total organizational structure of Kneipp was reduced to three main business processes: customer and product management, operations, and supply chain management. These processes were configured differently according to the phase of the product life cycle.
To ensure the sustainability of the new organizational structure and the new business processes, the functional unit “Management Information/Process Management” was put in charge of managing and controlling the strategy implementation. Moreover, a continuous improvement management was established, based on the defined business processes documented in the web-based tool. Finally, a supply chain performance management system was implemented. The resource-intensive, cross-functional approach led to systems thinking by all participating parties.
5 Discussion This paper fills a research gap on supply chain design decisions. Previous research has hardly addressed the question of what kind of information is used inside companies when making decisions on supply chain (re-)design. One core framework presented is the product-relationship-matrix, which structures related decisions. This offers an alternative approach to various concepts in which different types of supply chains are distinguished (Fisher, 1997; Childerhouse et al, 2002; Lee, 2002). It highlights the importance of the decisions to be made during the overall process. This is well in line with different propositions made in related publications on supply chain design, but condenses the process into one “holistic” approach. Such a descriptive attempt needs to be tested in further cases (Seuring, 2009) and further empirical research, which is one of the limitations. In a second step these decisions are analyzed building on management accounting thought, where decision-focused information provision is usually seen as one core activity. While the theoretical contribution (see Table 1) is a combination of supply chain and management accounting concepts, the value of the empirical research derives from the fact that insight is gained into how related information provision can be assessed in a structured manner. Distinguishing between function, instrument and institution allows related information to be understood in a comprehensive manner. Of course, building on one case only, the insights gained so far are just a first exploration. Ideas from management accounting are therefore applied in a supply chain environment, as yet another example of how supply chain management can “borrow” from other disciplines (Handfield and Bechtel, 2004).
6 Conclusion
Supply chain design and related decisions form one core topic of research on supply chain management. Yet, the informational basis used for making related decisions, irrespective of whether this is handled by a focal company or jointly by various partners of the supply chain, has so far rarely been addressed. Such information needs are fulfilled by management accounting and control data. Hence, related thought is taken up to outline a framework that systematizes information needs for
supply chain design or configuration decisions. In this respect, we bridge the gap to the management accounting literature. Previous research has rather provided normative or descriptive approaches to supply chain design and was primarily focused on instruments. An integrated framework which links supply chain (re-)design decisions and related information needs to the functional, instrumental and institutional components of a management control system is rarely provided. This paper offers a first step towards this integration. While this might be helpful for supply chain managers, the first insights of our research offered in this paper provide an alternative view on related decisions. More cases should be analyzed to compare the different information needs in supply chain (re-)design decisions.
References
Atkinson A, Kaplan R, Matsumura E (2007) Management accounting. Prentice Hall, Upper Saddle River
Bechtel C, Jayaram J (1997) Supply chain management: A strategic perspective. The International Journal of Logistics Management 8(1):15–34
Berry A (1997) The consequences of inter-firm supply chains for management accounting. Management Accounting 75(10):74–76
Childerhouse P, Aitken J, Towill D (2002) Analysis and design of focused demand chains. Journal of Operations Management 20(6):675–689
Cooper R, Slagmulder R (2004) Inter-organizational cost management and relational context. Accounting, Organizations and Society 29(1):1–26
Eisenhardt K (1989) Building theories from case study research. Academy of Management Review 14(4):532–550
Eltantawy R (2008) Supply management contribution to channel performance: A top management perspective. Management Research News 31(3):152–168
Fine C (1998) Clockspeed-based strategies for supply chain design. Production and Operations Management 9(3):213–221
Fisher M (1997) What is the right supply chain for your product? Harvard Business Review 75(2):105–116
Friedl B (2003) Controlling. Lucius & Lucius, Stuttgart
Handfield R, Bechtel C (2004) Trust, power, dependence, and economics: Can SCM research borrow paradigms? International Journal of Integrated Supply Management 1(1):3–32
Hayes R, Wheelwright S (1979) The dynamics of process-product life cycles. Harvard Business Review 57(1):127–136
Jeschonowski D, Schmitz J, Wallenburg C, Weber J (2009) Management control systems in logistics and supply chain management: A literature review. Logistics Research 1(4):113–127
Lee H (2002) Aligning supply chain strategies with product uncertainties. California Management Review 44(3):105–119
Mentzer J, DeWitt W, Keebler J, Min S, Nix N, Smith C, Zacharia Z (2001) Defining supply chain management. Journal of Business Logistics 22(2):1–25
Seal W, Berry A, Cullen J (2004) Disembedding the supply chain: Institutionalized reflexivity and inter-firm accounting. Accounting, Organizations and Society 29(1):73–92
Seuring S (2006) Supply chain controlling: Summarizing recent developments in German literature. Supply Chain Management: An International Journal 11(1):10–14
Seuring S (2008) Assessing the rigor of case study research in supply chain management. Supply Chain Management: An International Journal 13(2):128–137
Seuring S (2009) The product-relationship-matrix as framework for strategic supply chain design based on operations theory. International Journal of Production Economics 120(1):221–232
Skinner W (1974) The focused factory. Harvard Business Review 54(3):113–121
Stuart I, McCutcheon D, Handfield R, McLachlin R, Samson D (2002) Effective case research in operations management: A process perspective. Journal of Operations Management 20(5):539–550
Towill D, Christopher M (2002) The supply chain strategy conundrum: To be lean or agile or to be lean and agile? International Journal of Logistics Research and Applications 5(3):299–309
Voss C, Tsikriktsis N, Frohlich M (2002) Case research in operations management. International Journal of Operations and Production Management 22(2):195–219
Yin R (2003) Case study research: Design and methods. Sage Publications, Thousand Oaks
A Conceptual Framework for the Integration of Transportation Management Systems and Carbon Calculators
Stefan Treitl, Heidrun Rosič and Werner Jammernegg
Abstract Greenhouse gas emissions produced by supply chain processes such as manufacturing, warehousing, or transportation have a huge impact on climate change. Hence, they are the focus of possible future regulations introduced by (inter)national institutions. In particular, transportation processes play a decisive role in supply chains and are responsible for a significant amount of greenhouse gas emissions. Therefore, many companies try to quantify the amount of emissions caused by their transportation activities. At the moment, several tools for the calculation of greenhouse gas emissions, so-called carbon calculators, are available, but their results vary to a large extent depending on the input data, the parameters included, and the methodology used. In particular, real-time data such as traffic conditions or driving habits are not taken into account, although they affect the results significantly. To address this, we present a conceptual framework for the integration of real-time data and carbon calculators by linking greenhouse gas emission data with Transportation Management Systems. By doing so, the accuracy of emission estimates from a carbon calculator can be improved.
Stefan Treitl (B), Heidrun Rosič and Werner Jammernegg WU Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria, e-mail:
[email protected] Heidrun Rosiˇc e-mail:
[email protected] Werner Jammernegg e-mail:
[email protected] G. Reiner (ed.), Rapid Modelling and Quick Response, c Springer-Verlag London Limited 2010 DOI 10.1007/978-1-84996-525-5 22,
1 Introduction
Transportation processes are essential parts of the supply chain as they perform the flow of materials that connects a company with its suppliers and with its customers (Fleischmann, 2005). Only by the appropriate and well-defined use of transportation can a supply chain be successful (Chopra and Meindl, 2004). To support participants of a supply chain in decision-making with regard to transportation, information and planning systems called Transportation Management Systems (TMS) can be used. A TMS enables companies to optimize their transportation activities and related tasks, e.g. route planning and status tracking (Günther and Seiler, 2008). Transportation processes are responsible for emitting a considerable amount of CO2 and other greenhouse gases, thus having a huge impact on climate change. The discussion about including transportation in possible future regulations introduced by (inter)national institutions gains more and more importance. As a result, many companies are now trying to quantify the actual amount of greenhouse gas (GHG) emissions caused by their transportation activities. At the moment, several tools for estimating GHG emissions from transportation, so-called carbon calculators, are available. But their results vary to a large extent, depending on the input data, the parameters included, and the methodology used. Furthermore, real-time data such as weather or traffic conditions are usually not considered, making it hard to quantify the actual GHG emissions of a certain transportation process precisely. Therefore, we present a conceptual framework of how the accuracy of GHG emission estimates can be improved by integrating carbon calculators and TMS. Consequently, we are able to consider actual events like accidents, congestion, or varying weather conditions when estimating GHG emissions. The remainder of this work is structured as follows. In Section 2 we provide insights into state-of-the-art Transportation Management Systems and describe their functions, scope, and limitations. We take a close look at carbon calculators in Section 3 and assess their calculation methodologies and the data used. The presentation of our framework in Section 4 is followed by conclusions and opportunities for further research in Section 5.
2 Transportation Management Systems
According to Günther and Seiler (2008) a TMS is a "software used to manage transportation planning and execution". The main objectives of a TMS are "to plan freight movements, select the appropriate route and carrier, and manage freight bills and payments" (Gartner, 2010). Additionally, the facilitation of the procurement of transportation services and the execution of transportation plans with continuous analysis and collaboration are also considered functions of a TMS (Helo and Szekely, 2005). Because a vast number of vendors offer TMS, there is a high number of available TMS solutions. They are usually offered in various deployment models, for instance as on-premises installations or managed services.
Recent developments have shown that customers prefer web-based solutions, which means that parameters are entered into a web interface and the vendors then generate the results using a TMS on their own servers. In this model, which is also referred to as "Software-as-a-Service", customers do not face any implementation costs but pay a service charge (Partyka and Hall, 2010). The existing TMS market is already remarkable in size, with global market sales of $1.2 billion in 2007, and it is expected to grow even further, exceeding $1.6 billion in 2012 (ARC Advisory Group, 2008). Figure 1 gives a schematic overview of the architecture and the functions of a state-of-the-art TMS. The main goal of a TMS is to support companies in matching the demand for transportation with accessible and available transportation capacities. In order to achieve this, TMS are usually capable of carrying out transportation planning, the tracking and tracing of carriers and goods, and the handling of possible exceptions during transportation. Regardless of the vast amount of different TMS software available, these are understood to be the main functions of a TMS. Other functions are, for example, freight billing and order management, though these functionalities are not available in all TMS. According to Partyka and Hall (2010), there is a strong trend of connecting TMS with on-the-road navigation, thus providing a combined solution that both routes and navigates the vehicle. This, however, requires the TMS to be linked at least to GPS satellites in order to enable proper navigation. The main functions of a state-of-the-art TMS, transportation planning, tracking and tracing, and exception management, are now described in more detail.
Fig. 1 Architecture and functions of a TMS, based on Günther and Seiler (2008)
Transportation Planning
Transportation planning usually comprises short-, mid- and long-term planning tasks, although medium- and long-term planning tasks are, strictly speaking, not key functions of a TMS. Nevertheless, many providers include such capabilities in their applications. First, long-term or strategic planning tasks are mainly concerned with network design, such as determining the optimal number and location of warehouses or distribution centers. Second, mid-term or tactical transportation planning is concerned with the creation of master routing schedules for major transport relations (Günther and Seiler, 2008). Furthermore, tactical decisions need to be made primarily concerning the routes and types of service to operate, general operating rules for each terminal and work allocation among terminals, traffic routing using the available services and terminals, or the repositioning of resources like empty vehicles for use in the next planning period (Crainic and Laporte, 1997). Third, short-term or operational planning tasks consist of various decisions and are usually executed on a daily basis. The main decisions are the routing and dispatching of vehicles and resource allocation. This usually means determining the shortest or fastest path between several points through a network.
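The operational routing task can be illustrated with a standard shortest-path computation. The sketch below runs Dijkstra's algorithm on a small, purely hypothetical road network (edge weights in km); it is only meant to make the planning step concrete and does not reflect the routing engine of any particular TMS.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm on a dict-of-dicts graph {node: {neighbour: km}}."""
    queue = [(0.0, source, [source])]
    settled = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node in settled:
            continue
        settled.add(node)
        if node == target:
            return dist, path
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in settled:
                heapq.heappush(queue, (dist + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical distances between depots (km)
network = {
    "Vienna":   {"Linz": 185, "Graz": 200},
    "Linz":     {"Salzburg": 135, "Graz": 220},
    "Graz":     {"Salzburg": 280},
    "Salzburg": {},
}
print(shortest_path(network, "Vienna", "Salzburg"))  # -> (320.0, ['Vienna', 'Linz', 'Salzburg'])
```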
Tracking and Tracing
If the quality of a transportation process were perfect, no tracking and tracing information would be needed. All parties involved would be able to rely solely on the agreed plan. In real life, however, there is no perfect process and therefore tracking, i.e. following the shipment, and tracing, i.e. finding the shipment, are required. In order to track and trace a shipment which travels from consignor to consignee, it is vital to link the physical transportation system with a company's information system (Stefansson and Tilanus, 2000). The basis for the tracking and tracing of transportation processes is the IT infrastructure surrounding the TMS. In this respect, a classification according to the technology used to identify entities can be made. On the one hand, identification may be done by human- or machine-readable barcodes or RFID tags to enable the tracking of an entity at discrete times and places. For example, tracking takes place when the haulier receives the shipment from the shipper (proof of acceptance) or the shipment is delivered to the receiver (proof of delivery). The information about the status of a shipment is then used to set appropriate actions. On the other hand, tracking and tracing is possible with the help of broadcasting systems where data is transmitted and received via satellites (e.g. GPS).
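A possible way to represent the two classes of tracking information (discrete scans from barcodes or RFID vs. continuous GPS positions) is sketched below; all field names and values are hypothetical and not taken from any specific TMS.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrackingEvent:
    """One tracking observation for a shipment (hypothetical structure)."""
    shipment_id: str
    timestamp: datetime
    source: str       # "barcode" / "rfid" for discrete scans, "gps" for continuous positions
    status: str       # e.g. "proof_of_acceptance", "in_transit", "proof_of_delivery"
    location: tuple   # (latitude, longitude)

# Discrete scan at handover from shipper to haulier (proof of acceptance)
scan = TrackingEvent("SHP-001", datetime(2010, 3, 1, 8, 15, tzinfo=timezone.utc),
                     "barcode", "proof_of_acceptance", (48.21, 16.37))
# Continuous GPS position reported while the vehicle is en route
ping = TrackingEvent("SHP-001", datetime(2010, 3, 1, 10, 40, tzinfo=timezone.utc),
                     "gps", "in_transit", (48.12, 15.62))
```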
Exception Management
Exception management means dealing with unplanned events and uncertainties. With respect to TMS, the focus of exception management mainly lies on deviations.
Rodrigues et al (2008) distinguish several categories of deviations among all partners in the supply chain (suppliers, customers, carriers). Additionally, uncertainties related to control systems and external uncertainties, like congestion, enforce the use of exception management and require a TMS to set adequate actions, which often lead to the re-routing of vehicles (Fleischmann et al, 2004). A TMS should be capable of handling exceptions properly. For example, if a shipment is late or a vehicle is off the road, the TMS might respond by informing the customer by email about the situation or by re-planning the route (CapGemini Consulting, 2007). To enable the exception management function of a TMS, information from the tracking and tracing function is indispensable. All of these functions are widely used in practice, with companies' attention focused mainly on transportation planning and optimization. Data about GHG emissions caused by transportation activities is not included in TMS but can be obtained via the use of carbon calculators. The following section, therefore, provides information about transport emissions in general and available tools to estimate them.
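A minimal sketch of the exception logic described above (a late shipment triggers a customer notification and, for larger delays, re-planning); the tolerance thresholds and the callback-style actions are illustrative assumptions, not the behaviour of any particular TMS.

```python
from datetime import datetime, timedelta

def handle_exception(planned_eta, current_eta, notify, replan,
                     tolerance=timedelta(minutes=30)):
    """React to a deviation reported by the tracking and tracing function."""
    delay = current_eta - planned_eta
    if delay <= tolerance:
        return "on_schedule"
    notify(f"Shipment delayed by {delay}.")   # e.g. e-mail to the customer
    if delay > timedelta(hours=2):            # illustrative threshold for re-planning
        replan()                              # trigger re-routing / re-planning
    return "exception_handled"

# Hypothetical usage: a three-hour delay triggers notification and re-planning
status = handle_exception(planned_eta=datetime(2010, 3, 1, 14, 0),
                          current_eta=datetime(2010, 3, 1, 17, 0),
                          notify=print,
                          replan=lambda: print("re-planning route"))
print(status)
```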
3 Transport Emissions and Carbon Calculators
The development of transportation and economic development are closely related. Not only does the transportation sector account for an increasing share of employment and income, it also enables an extension of trade and an increase in competition among countries and economic regions. In the European Union, for example, the driving forces for the growth in freight transport are the integration of the market and the liberalization of the transport market itself in combination with relatively low freight costs. As a result, distances between resource extraction, manufacturing, and distribution facilities increase. McKinnon (2006) estimated each ton of freight in the UK as being on the road for 87 km in 2004, while it was only 35 km in 1953.
Fig. 2 Emissions per mode and tonne-km, World Economic Forum (2009)
Furthermore, modes of transport and their respective usage keep changing. Road transportation has increased while the proportion of rail and water navigation is declining steadily. Although transportation via rail or ship results in very low emissions per tonne-km compared to road (see Figure 2), the flexibility that is frequently required cannot be provided by any mode of transport other than road (OECD, 2006). Furthermore, the high speed of air transportation compared to other modes of transport usually does not make up for its high costs and the huge amount of emissions it causes. Nevertheless, some industries are heavily reliant on air freight transport, yet this is not the topic of this work. The predominant greenhouse gases that have an impact on climate change are carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). As Figure 3 indicates, CO2 is the most important greenhouse gas in the EU-27, with a share of approximately 83%. The other greenhouse gases account only for a minor share, although, per ton emitted, they have a bigger impact on climate change than CO2. As a matter of fact, it is possible to compare the impacts of other greenhouse gases to those of CO2 via so-called CO2-equivalents. This is done by calculating the "Global Warming Potential" of a greenhouse gas (Environmental Protection Agency, 2010). In this respect, one ton of CH4 has the same global warming potential as 25 tons of CO2, while one ton of N2O has the same global warming potential as almost 300 tons of CO2. Consequently, CO2-equivalents are calculated by summing the emissions of these three GHG weighted by their global warming potentials (EcoTrans-IT World, 2008). Having said this, it is not only the amount of CO2 emissions that is relevant for this work but the amount of CO2-equivalents. In everyday speech these two terms are often wrongly used synonymously.
Fig. 3 Greenhouse gas share, based on European Environment Agency (2009)
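The CO2-equivalent calculation described above can be written down compactly. The sketch below uses GWP factors of 25 for CH4 and 298 for N2O ("almost 300"); the exact factors depend on the IPCC assessment report applied, so they should be read as illustrative.

```python
# Global warming potentials over a 100-year horizon (illustrative; values vary by IPCC report)
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def co2_equivalents(emissions_kg):
    """Sum GHG emissions (kg per gas) weighted by their global warming potential."""
    return sum(GWP[gas] * mass for gas, mass in emissions_kg.items())

# Example: 100 kg CO2, 0.02 kg CH4 and 0.01 kg N2O give roughly 103.5 kg CO2-equivalents
print(co2_equivalents({"CO2": 100.0, "CH4": 0.02, "N2O": 0.01}))
```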
The transport sector in the EU-27 (excluding aviation and international maritime navigation) accounted for almost 20% of all greenhouse gases in 2007 (European Environment Agency, 2009). A closer look at the distribution of GHG emissions among the different modes of transport (Figure 4) shows that road transportation is responsible for more than 70% of the total GHG emissions in the EU-27 (European Environment Agency, 2009). This is due to the fact that road transportation is the most important transport mode in several industries. Especially when time plays a crucial role, order lead times are short, and only limited inventory is held in the supply chain, an industry relies almost solely on road transportation. This holds true, for example, for grocery retailing or fuel supply (McKinnon, 2006). Consequently, GHG emissions, especially from the transport sector, are the focus of society's and companies' attention. According to a survey by Piecyk and McKinnon (2009), almost 80% of the companies involved in road freight transport expected their business to be significantly affected by climate change in the year 2020. It is indeed likely that future developments and regulations will tend to minimize the emissions generated by transportation activities. For example, the inclusion of road transportation in the EU Emission Trading Scheme (ETS) is under discussion and likely to be realized (UK Department of Transport, 2009). Furthermore, customers' awareness of environmental sustainability has increased tremendously in recent years. It can be concluded from several studies that 67% of UK customers prefer a product that caused lower emissions in its production, transportation, and use (Carbon Trust, 2008). This being the case, several companies, especially retailing companies, have started to label their products with the amount of GHG emissions caused in the course of production, transportation, and usage. It is, however, often not clear how this figure is calculated. In order to assess the actual amount of CO2 and other greenhouse gases emitted by transportation activities, so-called carbon calculators are widely used. The main goal of a carbon calculator is to estimate the amount of GHG caused by a particular process based on several input parameters. Since the focus of this work is on transportation, only carbon calculators relating to transportation activities are considered. By feeding the program with different transport-related parameters (like mode used, vehicle used, distance, etc.), the calculator first of all estimates how much fuel will be needed. The estimated fuel consumption is then converted into greenhouse gases emitted into the atmosphere, usually in the form of CO2-equivalents.
Fig. 4 GHG share by mode, based on European Environment Agency (2009)
However, these tools differ widely in, for example, availability (web-based vs. on-premises installation), price (free vs. charged service), parameters included, and methodology used. Hence, their results also vary to a large extent, making a comparison between them almost impossible. For this reason we give a short overview of the basic functionalities and input parameters of a state-of-the-art carbon calculator. As most of the available tools specialize in road transportation, this mode is also the focus of our work.
• A state-of-the-art carbon calculator has to distinguish between different types of products transported, predominantly bulk goods and volume goods. Bulk goods are usually characterized by their weight (e.g. oil or coal), whereas volume goods are characterized by their volume or, more precisely, by the volume of their packaging. Depending on the type of good transported and the kind of transportation vehicle used, a typical load factor based on statistical data can be determined. The load factor of a vehicle is usually defined as the ratio of the cargo's weight or volume to the maximum capacity of the vehicle (Barla et al, 2008).
• Based on the results of a transportation planning process, the optimal transportation mode and a specific route are suggested. Ideally, several nodes between different transportation networks are considered, enabling intermodality. A capacity constraint is predefined for each transport vehicle, considering the maximum load weight and the maximum volume available. Furthermore, the energy consumption of different transport modes and vehicles has to be taken into account (EcoTrans-IT World, 2008).
• In addition, different transportation-related parameters have to be set in the calculator. In this context different road types (highway, city street, ...), gradients, and other country-specific features have to be respected (EcoTrans-IT World, 2008).
The emissions of a certain transportation process are then calculated using the parameters mentioned above, considering both the direct emissions from fuel combustion and the indirect emissions from the production of energy. Since these parameters are all known before the transportation takes place, we denote this information as "offline". However, the outcome of such a calculation is only an estimate of the planned GHG emissions generated by a specific transportation process. It is obvious that, although only these parameters can be entered into the calculation tool, far more factors can influence the actual amount of GHG emissions. Especially when considering truck transportation, not only the truck type, the cargo weight and the load factor influence emissions, but also driving patterns, congestion, unplanned bypasses and, finally, the weather conditions. These parameters are, obviously, not predictable and occur after the planning process has finished. We consider factors that occur only during the transportation process as "online". As a matter of fact, current carbon calculators can only deliver estimates of planned emissions on the basis of "offline" information. Yet, to determine the actual emissions, "online" information has to be taken into account as well. For this reason, we provide a conceptual framework where we, on the one hand, try to integrate both "offline" and "online" information to calculate the GHG emissions of a transportation
process more precisely by combining carbon calculators with the functions of a TMS. On the other hand, we show in our framework how environmental criteria can be integrated into decision support.
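To make the "offline" calculation concrete, the following sketch estimates planned fuel consumption and emissions from a few typical input parameters (distance, load factor, fuel-consumption coefficients, an emission factor per litre of diesel). All numerical values and the linear fuel model are illustrative assumptions; they do not reproduce the methodology of any existing carbon calculator.

```python
def load_factor(cargo_weight_t, max_capacity_t):
    """Load factor: ratio of cargo weight to the vehicle's maximum capacity."""
    return cargo_weight_t / max_capacity_t

def planned_emissions(distance_km, cargo_weight_t, max_capacity_t,
                      base_l_per_km=0.22, load_l_per_km=0.12,
                      co2e_per_litre=2.7):
    """Estimate planned fuel use (litres) and CO2-equivalents (kg) for a truck trip.

    Assumptions (illustrative only): fuel use grows linearly with the load factor,
    and each litre of diesel corresponds to a fixed amount of CO2-equivalents
    covering combustion and fuel production.
    """
    lf = load_factor(cargo_weight_t, max_capacity_t)
    fuel_l = distance_km * (base_l_per_km + load_l_per_km * lf)
    return fuel_l, fuel_l * co2e_per_litre

# Hypothetical trip: 200 km with 10 t of cargo on a 25 t truck
fuel, co2e = planned_emissions(200, 10, 25)
print(f"planned fuel: {fuel:.1f} l, planned emissions: {co2e:.0f} kg CO2e")
```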
4 A Conceptual Framework for the Integration of TMS and Carbon Calculators
The main goal of this section is to provide a framework of how TMS and carbon calculators can interact, thus enhancing the accuracy of emission estimates. In particular, we present a way of calculating the actual amount of GHG emissions caused by a certain transportation activity by combining the functionalities of these two software tools.
4.1 Logistics Efficiency in Existing Carbon Calculators
The transportation planning function of a TMS involves, among others, the determination of an optimal route (shortest, fastest, ...), optimal vehicle and operator assignments, and vehicle loads. The optimization of logistics or, as we call it in our work, the gaining of "logistics efficiency", can be obtained by the appropriate use of TMS. We define logistics efficiency as a set of parameters that makes a transportation process proceed efficiently. Parameters that have an influence on logistics efficiency are, for instance, the planned route, the transport mode, or the cargo's weight. The transportation planning function of a TMS optimizes these parameters a priori, i.e. before the physical flow of freight starts, and provides information about how "logistics efficiency" can be obtained. The results of the optimization, i.e. the "offline" information, are then entered into a carbon calculator. On the basis of these parameters the carbon calculator first of all estimates fuel consumption and then converts it into emissions. It is important to note that there is a temporal sequence. In a first step, the optimal parameters are determined by the TMS. Only afterwards, in a second step, is the "offline" information used for estimating the GHG emissions. This temporal sequence, however, is one of the main problems when estimating carbon emissions. Due to the fact that only "offline" data is included in the calculation, the effects of unexpected events on GHG emissions during the transportation process are not considered. Therefore, the impact of, for example, congestion on the road or accidents is not included in the calculation. It can be concluded that carbon calculators that are based on the optimal results from a TMS only give a rough estimate of the planned GHG emissions of a certain transportation process. To get more accurate estimates of the actual GHG emissions, some additional information has to be included.
4.2 The Impact of Transport Efficiency on GHG Emissions
During the transportation process itself several situations may occur that can cause a transportation process to be less efficient. We define the term "transport efficiency" as a set of parameters that enable an ongoing transportation process to proceed efficiently. In contrast to logistics efficiency, transport efficiency cannot be addressed or planned a priori. Factors influencing transport efficiency only become known in the course of the transportation process and are, for example, congestion, accidents, or unplanned bypasses. In this respect, we define parameters affecting the efficiency of transportation as "online" information. Factors that have an influence on transport efficiency also have an impact on GHG emissions. If transport efficiency is reduced due to congestion, the carbon emissions will rise. As already mentioned, events that occur during the transportation process are not taken into consideration in existing carbon calculators. For a better understanding of our concept, consider the following example. A truck with a capacity of 28 tons has to deliver two tons of furniture from Vienna (Austria) to Salzburg (Austria). The transportation planning function of a TMS suggests a specific route (315 km) and determines several other parameters. Afterwards, the optimized parameters are entered into a carbon calculator. A first estimation yields an amount of 47.66 liters of diesel consumed and 110 kg of CO2-equivalents emitted to the atmosphere. Since the truck is able to drive on a highway most of the time, a specific average speed is considered when calculating the emissions. Unfortunately, the road taken is highly frequented and congestion occurs every now and then. The influence of congestion on GHG emissions has not been taken into account in the current calculation, although it can be significant. Irregular acceleration and braking can drastically increase the amount of greenhouse gases emitted; however, the consequences of congestion itself may not be neglected either. Driving at low speed or actually at walking speed increases fuel consumption and drives up emissions. Recent studies have shown that, for example, ten stops followed by acceleration can lead to an increase in fuel consumption of 130% (Volvo Trucks, 2010). One can, indeed, argue that measuring the amount of fuel needed at the destination would provide an exact figure for fuel consumption. Based on the fuel consumed, a calculation of emissions seems possible. However, this approach faces two major problems. First, without actual data and "online" information about the journey it is not possible to figure out the reasons for the actual fuel consumption. Mostly, a decrease in transport efficiency can be seen as the reason for differences between the estimated fuel consumption and the actual consumption. But it is not directly visible whether a high fuel consumption was due to congestion on the planned route or due to the driving pattern of the vehicle operator. Second, fuel consumption does not give clear evidence about all greenhouse gas emissions. Though CO2 emissions can be calculated almost perfectly based on the fuel needed, this is not the case for CH4 and N2O. The estimation of these GHG emissions is more complex because they depend on many different aspects of combustion dynamics like temperature, pressure, air-to-fuel ratio, and the type of emission control system (Lipman and
Delucchi, 2002). Such information cannot be derived from fuel consumption only. Therefore, an accurate calculation of GHG emissions in terms of CO2-equivalents must not rely exclusively on fuel consumption but must also take into account "online" information about factors influencing transport efficiency.
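A rough sketch of how "online" information could be folded into the planned figures from the Vienna–Salzburg example: congestion stops recorded by the tracking function inflate the fuel estimate, and CO2 (but not CH4 or N2O) can then be derived from the fuel burned. The per-stop fuel penalty and the emission factor are illustrative assumptions, not values from any study or calculator.

```python
def actual_fuel_estimate(planned_fuel_l, stop_events, litres_per_stop=0.5):
    """Adjust the planned fuel figure with 'online' congestion information.

    Each stop-and-go event recorded by the TMS tracking function is assumed
    to add a fixed fuel penalty; the penalty value is purely illustrative.
    """
    return planned_fuel_l + litres_per_stop * stop_events

def co2_from_fuel(fuel_l, kg_co2_per_litre=2.6):
    """CO2 can be derived almost directly from diesel burned (factor approximate);
    CH4 and N2O would additionally require engine and temperature data."""
    return fuel_l * kg_co2_per_litre

planned = 47.66                                    # litres, planned estimate from the example above
actual = actual_fuel_estimate(planned, stop_events=25)
print(f"actual fuel: {actual:.1f} l, approx. CO2: {co2_from_fuel(actual):.0f} kg")
```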
4.3 Integrated Framework
As has been shown, an accurate estimation of carbon emissions cannot be achieved by using average emission data or fuel-consumption estimates alone. Not only "offline" information but also "online" information is necessary for achieving accurate results. Therefore, we present a framework of how to gather "online" information about the actual transportation process with the help of a TMS and incorporate this information into carbon calculators. By doing so, a linkage between logistics efficiency and transport efficiency can be achieved and the emission estimates become more accurate. The integration framework of TMS and carbon calculators is shown in Figure 5.
Fig. 5 Integration of TMS and carbon calculators
By using the transportation planning function of a TMS, logistics efficiency is determined. Afterwards, the GHG emissions can be calculated, though the results only represent planned emissions. Information on transport efficiency is not included in this estimation. However, "online" information can be gathered by using the tracking and tracing function of a TMS and its hardware and software environment. A very suitable technology for collecting information during the transportation process is GPS. It allows the driving habits of a vehicle operator to be determined and is capable of documenting congestion as well as acceleration and deceleration behaviour. Therefore, the use of "online" information concerning factors influencing transport efficiency enables a detailed documentation of the transportation process and makes it possible to determine the factors that influence the actual amount of fuel consumed. If, for instance, the main reason for a higher fuel consumption turns out to be the vehicle
operator's driving pattern, a company could decide to encourage intensive driver education. Based on the actual fuel consumption, CO2 emissions can be estimated. In order to estimate CH4 and N2O emissions precisely, further data has to be collected, such as the actual temperature of the engine and the ambient temperature. By analyzing "online" information like this, the accuracy of CH4 and N2O emission estimates can be enhanced and a more accurate estimate of CO2-equivalents can be calculated. Furthermore, by linking the exception management function of a TMS with carbon calculators, new possibilities arise. If congestion, accidents, or other unexpected situations disrupting the transportation process are reported to a TMS early enough, the exception management function of the TMS can consider and initiate different actions, e.g. re-routing of the vehicle. By linking TMS with data on actual GHG emissions, the possibilities of the exception management function can be enhanced. Decisions on re-routing will then not only depend on minimizing the travel time needed, but also on minimizing additional emissions. Congestion on the planned route, for instance, causes both a rise in emissions and a rise in travel time. The exception management function would then try to avoid the congestion by taking a different route. With the enhanced functionalities of exception management it is possible to compare the emissions arising when stuck in congestion with the emissions caused by taking a different route. In the future, this way of dealing with exceptions in an environmentally friendly way will gain more and more importance. It has to be mentioned that the factors influencing GHG emissions discussed here are only exemplary and not exhaustive. There are other factors, like road resistance, tire pressure, the type and temperature of the emission control system, or the fuel mix and the use of bio-fuels, that directly affect the GHG emissions of a transportation process. By including an exhaustive list of parameters and their impacts on emissions in carbon calculators and TMS, it is possible to achieve a far more accurate estimation of GHG emissions caused by transportation processes.
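The emission-aware re-routing decision could, in its simplest form, weigh the extra travel time of each option against its extra emissions. The sketch below makes such a comparison; the monetary weights and the event data are purely hypothetical and only illustrate how emission data could enter the exception management function.

```python
def reroute_decision(stay, detour, time_weight_eur_per_min=1.0,
                     co2_weight_eur_per_kg=0.1):
    """Compare staying on the congested route with taking a detour,
    valuing both extra travel time and extra emissions (illustrative weights)."""
    def cost(option):
        return (time_weight_eur_per_min * option["extra_minutes"]
                + co2_weight_eur_per_kg * option["extra_co2_kg"])
    return "detour" if cost(detour) < cost(stay) else "stay"

# Hypothetical exception: 40 min of stop-and-go traffic vs. a longer free-flowing detour
stay   = {"extra_minutes": 40, "extra_co2_kg": 18}   # idling and stop-and-go driving
detour = {"extra_minutes": 15, "extra_co2_kg": 22}   # longer distance, free flow
print(reroute_decision(stay, detour))  # -> "detour" under these assumptions
```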
5 Conclusions
In this paper we presented a conceptual framework for integrating the functions of a TMS and carbon calculators. Carbon calculators as they exist today are based on average data and only consider a limited amount of "offline" information. Furthermore, the lack of "online" data included in the calculation leads to even more inaccurate estimates of GHG emissions. It could be shown that fuel consumption is not a sufficient basis for the calculation of greenhouse gas emissions other than CO2. However, factors influencing transport efficiency, like driving patterns or congestion, can be gathered with the use of the tracking and tracing function of a TMS. We also showed how exception management functions enhanced with emission data enable the inclusion of environmental criteria in decision support. Yet, the question arises whether the inclusion of "online" data provides significantly different results than existing carbon calculators do today. If this turns out to be the case, it is even more relevant to investigate companies' incentives to estimate GHG
emissions more precisely, for the estimation methodology we presented undoubtedly requires investments, which have not been the topic of this paper.
References
ARC Advisory Group (2008) Transportation Management Systems. URL http://www.arcweb.com/Research/Studies/Pages/TMS.aspx
Barla P, Bolduc D, Boucher N, Watters J (2008) Information technology and efficiency in trucking. Center for Economic Studies - Discussion Papers, Katholieke Universiteit Leuven, Centrum voor Economische Studiën, URL http://ideas.repec.org/p/ete/ceswps/ces0813.html
CapGemini Consulting (2007) Transportation Report 2007. Tech. rep., CapGemini
Carbon Trust (2008) Product carbon footprinting: the new business opportunity. Tech. rep., Carbon Trust, URL http://www.carbon-label.com/casestudies/Opportunity.pdf
Chopra S, Meindl P (2004) Supply Chain Management - Strategy, Planning and Operation, 2nd edn. Pearson Education, New Jersey
Crainic T, Laporte G (1997) Planning models for freight transportation. European Journal of Operational Research 97(3):409–438
EcoTrans-IT World (2008) Methodology and Data. 1st Draft Report, IFEU Heidelberg, Öko-Institut, IVE/RMCON
Environmental Protection Agency (2010) Global Warming Potential. URL http://www.epa.gov/climatechange/glossary.html#C
European Environment Agency (2009) Greenhouse Gas Emission Trends and Projections in Europe 2009. URL http://www.eea.europa.eu/publications/eea report 2009 9/at download/file
Fleischmann B (2005) Distribution and Transport Planning. In: Stadtler H, Kilger C (eds) Supply Chain Management and Advanced Planning, 3rd edn, Springer, Berlin, Heidelberg
Fleischmann B, Gnutzmann S, Sandvoss E (2004) Dynamic Vehicle Routing based on Online Traffic Information. Transportation Science 38(4):420–433
Gartner (2010) Gartner Glossary. URL http://www.gartner.com/6 help/glossary/GlossaryT.jsp
Günther H, Seiler T (2008) Transportation Planning in Consumer Goods Supply Chains. In: Lo H, Leung S, Tam S (eds) Transportation and Management Science, Hong Kong Society for Transportation Studies
Helo P, Szekely B (2005) Logistics Information Systems: An Analysis of Software Solutions for Supply Chain Co-Ordination. Industrial Management + Data Systems 105(1):5–18
Lipman TE, Delucchi MA (2002) Emissions of nitrous oxide and methane from conventional and alternative fuel motor vehicles. Climatic Change 53(4):477–516
McKinnon A (2006) Life without trucks: The impact of a temporary disruption of road freight transport on a national economy. Journal of Business Logistics 27(2):227–250
OECD (2006) Decoupling the Environmental Impacts of Transport from Economic Growth. URL http://www.oecd.org/dataoecd/3/52/37722729.pdf
Partyka J, Hall R (2010) On the road to connectivity. OR/MS Today 37(1):42–49
Piecyk M, McKinnon A (2009) Forecasting the carbon footprint of road freight transport in 2020. International Journal of Production Economics, in press
Rodrigues V, Stantchev D, Potter A, Mohamed N, Whitening A (2008) Establishing a Transport Operation Focused Uncertainty Model for the Supply Chain. International Journal of Physical Distribution and Logistics Management 38(5):388–411
Stefansson G, Tilanus B (2000) Tracking and tracing: Principles and practice. International Journal of Technology Management 20(3/4):252–271
UK Department of Transport (2009) Road Transport and the EU Emission Trading Scheme. Tech. rep., UK Department of Transport, UK, URL http://www.dft.gov.uk/adobepdf/165252/euemistraschemepdf
Volvo Trucks (2010) How to optimise fuel consumption. URL http://www.volvotrucks.com/trucks/south-africa-market/en-za/aboutus/Environment/Pages/fuel consumption.aspx
World Economic Forum (2009) Supply Chain Decarbonization. URL https://microsite.accenture.com/sustainability/Pages/supply chain decarbonization.aspx
A Conceptual Framework for the Analysis of Supply Chain Risk
Monika Weishäupl and Werner Jammernegg
Abstract In recent years, firms have paid more and more attention to their exposure to disruptive events with rare incidence and high impact, also known as disruptions. We propose a detailed conceptual framework for analyzing risks proactively. This preparedness gives a firm the ability to respond to a disruption that has occurred and to reduce its effect more quickly. Moreover, the framework can be applied after a disruption has occurred, supporting event management in a reactive way. It helps to adapt to new and unforeseeable conditions more quickly.
1 Introduction
In the last decade, the awareness that supply chains are exposed to a huge variety of risks existing at all possible stages has increased. In particular, risks with high impact and low probability that result in a stoppage of the flow of goods, also known as disruption risks, pose an enormous challenge to supply chains. Risk itself can be seen in several ways. In managerial perception, positive outcomes of uncertainty are generally not treated as an important factor of risk, only the negative ones (March and Shapira, 1987). Consequently, a positive deviation is seen as a "chance". Thus, risk can be broadly seen as the possibility of danger, loss, injury, damage, or any other undesired consequence (Harland et al, 2003). Kleindorfer and Saad (2005) distinguish between "classical", typical supply/demand coordination risks and disruption risks.
Monika Weishäupl (B) and Werner Jammernegg WU Vienna University of Economics and Business, Nordbergstr. 15, 1090 Wien, Austria, e-mail:
[email protected] Werner Jammernegg e-mail:
[email protected] G. Reiner (ed.), Rapid Modelling and Quick Response, c Springer-Verlag London Limited 2010 DOI 10.1007/978-1-84996-525-5 23,
As risk can be interpreted in different ways, the same holds for defining supply chain risk management. However, we can state that supply chain risk management is the efficient and effective handling of supply chain risks. The fundamental idea behind it is to assure the profitability and continuity of business. This can be guaranteed through collaboration and coordination among the supply chain members (Tang, 2006a). Chapman et al (2002) and Jüttner (2005) propose a narrower view on supply chain risk management which is taken as the basis for our work. They see supply chain risk management as a managerial activity aiming at the identification and management of risks arising in supply chains, within or external to the supply chain, by the use of a coordinated approach among supply chain partners, whereby the vulnerability of the supply chain as a whole should be reduced. Therefore, supply chain risk management comprises, on the one hand, the risk analysis including the identification, classification, and assessment of risks and, on the other hand, the mitigation of risks, which means either to reduce the probability of the events leading to a disruption and/or to reduce the consequences of the event. A well-known example of a disruptive event is the fire at a Philips plant in 2000. Lightning struck an industrial building and sparked a fire. The production had to be shut down and it took three weeks until the whole capacity could be built up again. The production site supplied parts for Ericsson and Nokia, who had to face delayed shipments. Nokia was able to handle the disruption better than Ericsson, as it acted quickly due to its preparedness and the impact of the disruption could be reduced. Finally, Nokia could gain a competitive advantage (Norrman and Jansson, 2004; Sheffi, 2005). Disruptions usually lead to longer lead times. Long lead times have negative effects on a firm's performance. Sodhi and Tang (2009) provide a simple supply chain risk management approach showing the effect of disruption preparedness and the beneficial effect of quick response. Established recovery plans decrease the response time in case of a disruption. Accordingly, a quick recovery can be achieved and the disruption's impact lowered. As a conclusion, Sodhi and Tang (2009) state that further research is needed in this area. Their study motivates us to create a comprehensive framework for supply chain risk management which decreases the response time in a proactive, prepared way and also, in a reactive, event-related way, helps to adjust to unplanned situations quickly. The remainder of the paper is organized as follows. The state of the art of current supply chain risk management frameworks and the overall idea of the proposed framework are described in Section 2. In Section 3 we present a detailed description of the framework and its cycles. Moreover, we discuss a case study illustrating the practicability and the key features of the framework with respect to responsiveness. The case focuses on the shipment of a steel commodity by using inland waterways in Central Europe. Several items of the commodity can be carried by one barge. If the inland waterway is blocked, it is possible to transport the items by train. Air cargo and trucking are not appropriate due to specific characteristics of the item. Finally, we give concluding remarks in Section 4.
2 Supply Chain Risk Management Framework
This section provides an overview of current supply chain risk management frameworks. Furthermore, it presents the general ideas of the innovative approach, which supports responsiveness in case of a disruption in a better way.
2.1 Current Approaches
The risk management process can be performed in several ways. In general, it is split into at least three stages (identification, assessment, and mitigation). It usually starts with risk identification and ends with risk mitigation. This indicates a proactive way of execution and an implementation of recovery plans. For instance, supply chain risk management approaches in this manner are illustrated by Hallikas et al (2004), Sodhi and Tang (2009), and Ziegenbein (2007). Several supply chain risk management approaches have feedback loops. Normally just one feedback loop exists, as in the approach of Harland et al (2003) or Norrman and Jansson (2004). This means that the whole process has to be restarted from the beginning, i.e. risk identification follows after risk mitigation. An approach which has two separate loops (cycles) is the one of Christopher (2003). The stages are used in the same way as described above. However, they are differentiated by splitting them into a tactical and an operational cycle. This characteristic supports responsiveness due to the fact that the process does not have to be restarted from risk identification at the beginning. Another distinctive characteristic of the approach of Christopher (2003) is that it relies on the six sigma process DMAIC (Define, Measure, Analyze, Improve, Control). The DMAIC cycle is adjusted by changing "Define" to "Identify" and "Improve" to "Reduce". Relating the research area of risk management to quality management programs like total quality management (TQM) and six sigma is already supported by some researchers, such as Lee and Whang (2005) and Tang (2006b). For instance, quality planning, being a part of quality management, is often connected to risk management. Quality planning relies on preventive thinking, which is a proactive perspective in the research area of risk management.
2.2 Integrated Approach
We support the idea of applying quality management in risk management. In addition to the proactive, preventive thinking, we add a reactive, event-related perspective to handle changes. Quality control, being another part of quality management, represents this perspective, which is now related to risk management. Based on the approach of Christopher (2003), we develop an innovative approach by integrating time horizons and all business views, i.e. strategic (long-term), tactical (mid-term),
and operational (short-term), thus providing better responsiveness by adding additional reversal possibilities (feedback loops). As mentioned above, Christopher (2003) splits the stages of supply chain risk management into two cycles. One of the two cycles is at the tactical and the other one at the operational level. As the strategic component is missing, we add this level in our framework. This implies that we attach another cycle at the strategic level. Further, we include a fourth cycle between the operational and tactical level to build an interface between the design and the realizability of risk mitigation strategies. All these levels are in line with the business views. Thereby, we embed time horizons into the framework. Figure 1 gives an overview of the developed supply chain risk management framework and its four cycles, namely Definition & Description, Risk Analysis, Risk Evaluation, and Action. The closed-loop thinking is represented by the cycles overlapping with each other, each with at least one other cycle. The partial overlapping of the cycles provides the possibility to go back if something has to be rethought or the findings have to be adapted to a new environmental situation.
Fig. 1 Cycles of the supply chain risk management framework
In the framework there exist two main starting points. One main starting point is located in the topmost cycle for proactive supply chain risk management. Key drivers usually force management to execute proactive supply chain risk management. Examples of key drivers are regulatory compliance, employee health and safety, corporate image, or cost reduction (Kleindorfer and Saad, 2005). Besides, continuous revisions of the findings are initiated by the influence of key drivers. The other main starting point can be found in the bottom cycle. A revision can be triggered there if a disruptive event occurs or a set contingency action does not work out in the planned way, which implies reactive, event-related supply chain risk management. The continuous revision of the findings does not have to start at the topmost cycle. A rechecking of the current findings can be done within every cycle. However, we recommend, at least, restarting every cycle according to the time horizon of its business level. This means, for instance, that the Definition & Description cycle, being positioned at the strategic level, should be restarted at least every one to two years. The cycle on the tactical level should have its findings updated every 6–12 months, and so on. Therefore, the time horizons also support the continuous characteristic of the framework.
3 Supply Chain Risk Management Cycles
In this section we give a detailed explanation of the cycles of the supply chain risk management framework. Besides, the main issues and the most important findings of the illustrative case are related to the framework. The case is based on a proactive analysis, as the framework is carried out for the first time. Every cycle has three steps of analysis. At least one of the steps belongs to two cycles. These steps can be denoted as "key steps" because they support possible feedback loops. All steps and ways of executing the framework are shown in Figure 2. The solid lines show the path when the framework is started from the topmost cycle. The dotted lines indicate possible feedback loops and the reactive way of using the framework, respectively. The two main starting points are now denoted as starting steps and highlighted visually. As several steps belong to two cycles, each is explained at its first appearance from a proactive view. Due to paper limitations we present the case only at the cycle level and emphasize the aspect of responsiveness. Therefore, we suppress information with respect to detailed findings of the specific steps, as this information would not provide further insights. However, we present a more detailed description of the steps without case findings. Applicable methods at every step are not explained, as they are just tools and do not represent the idea of the framework. Concerning the methods and for a more detailed description of the case we refer to Weishäupl (2010).
Fig. 2 Supply chain risk management framework including the steps of analysis
3.1 Definition & Description Cycle
The aim of the Definition & Description cycle is to define the range and the objectives of the analysis as well as to describe the parts under consideration and the corresponding risks. The cycle is placed on the strategic level. This is justified by the long-term effects of the decisions. Senior management, making strategic decisions such as the definition of the firm's objectives, has a direct or indirect influence on the risk profile and the handling of risk (Christopher, 2005). Additionally, it is important to generate a common understanding of basic issues, e.g. the definition of risk, due to the different views within global supply chains (Kajüter, 2003). This cycle has three steps, namely Planning & Selection, Description, and Risk Identification. Planning & Selection is one possible starting step if proactive supply chain risk management is conducted. The last step, Risk Identification, overlaps with the Risk Analysis cycle, as it is a description of the risks and also a part of the Risk Analysis.
Planning & Selection
The Planning & Selection step is one of the two possible starting steps and defines the scope of the analysis as well as its objectives. Thus, it has to be decided which parts of the network are considered and at which level of detail. For this decision the assignment of geographical, temporal and related parameters like products, fixed and variable costs, and revenues is useful. For this step, it is crucial to decide who conducts the analysis, i.e. whether it is done by a third party or internally. To examine whether the set objectives are reached, the vast majority of firms use either one indicator, which can be a mixture of several, or multiple key performance indicators. These key performance indicators have to be defined in advance. This provides knowledge of how the findings should be measured. In fact, this is very important for the evaluation of possible contingency actions. Furthermore, potential risk acceptance criteria are determined. That means criteria have to be defined which determine whether an assessed risk is accepted and borne or whether mitigation actions have to be set (Asbjørnslett, 2008). The risk acceptance criteria themselves are applied in a later step.
Description
The target of the Description step is to map and describe the selected supply chain. The parameters mentioned above, which narrow the scope of the analysis, should be documented. It is essential to clarify the roles and responsibilities of the organizations and persons within the network, including ownership information (Harland et al, 2003). The collection of already known key performance indicators is another crucial task. In addition to that, the most important information about supply chain partners, such as key suppliers and customers, should be gathered. The
Description step is never used as a starting step within the cycle as it has “just” a reporting characteristic.
Risk Identification
The aim of Risk Identification is to determine all possible risks which can negatively influence the organization in reaching its aims. It is worth noting that not just the risks themselves but also the risk drivers have to be identified. Risk drivers are factors having a significant impact on the risk exposure, i.e. on consequence and likelihood (Ritchie and Brindley, 2007). In the last decade several trends like centralization and globalization have led to higher risk. Therefore, these trends can be seen as risk drivers. The Risk Identification step is fundamental for the whole analysis as it forms the basis of and the first step in the Risk Analysis cycle. It is a key step, as the findings of the Risk Analysis cycle can influence the Risk Identification and also the ideas of the Planning & Selection step.
Case Illustration
Applying the issues of the Definition & Description cycle to our case gives the following results. The Definition & Description cycle determines the commodity under consideration, which is steel coils, and the geographical region, namely the inland waterways of Central Europe. Lead time is declared a main key performance indicator to measure the mitigation of disruptions. The objective is to minimize lead time in case of a disruption. Currently, no defined plans exist that could be executed automatically. Thus, basic concepts for establishing supply chain event management software should be developed. Moreover, in this cycle all involved supply chain parties are identified and their tasks are described. Based on the risk identification, a risk catalog summarizes all identified risks.
3.2 Risk Analysis Cycle

The Risk Analysis cycle is essential, as its results build the basis for creating strategies, i.e. reducing the likelihood and handling the consequences, to counteract specific, usually highest ranked, risks. For this purpose, the different sources of risk have to be identified, the likelihood of a potential risk has to be assessed, and the consequences have to be considered. The cycle is mainly positioned on the tactical level but also belongs to the strategic level, since its findings have mainly mid-term and partly long-term effects. The Definition & Description cycle and the Risk Analysis cycle have one step in common, Risk Identification. This step is the key for returning from the second
to the first cycle of the proactive view. It takes place in both cycles because it links the description of the risks with the basis for their assessment. The Risk Assessment step is clearly positioned in the Risk Analysis cycle, as it is its core. The remaining step, Risk Handling, is a key step, as it belongs to two cycles, Risk Analysis and Risk Evaluation. The step Risk Identification has already been explained in Section 3.1.
Risk Assessment

This step comprises risk quantification and measurement. The aim of the Risk Assessment step is to evaluate risks according to several characteristics, mostly the likelihood and the consequence of the disruptive event and risk, respectively. The consequence is usually valued in customer service and/or cost terms, but the probability of impact avoidance and detection as well as network exposure are further possible characteristics (Christopher, 2003; Harland et al, 2003). Risk Assessment is conducted on the basis of expert judgments and subjective probabilities or by using frequency data. Haimes (2009) adds a further task of Risk Assessment besides risk quantification, namely the modelling of the causal relationships among risks and risk drivers.
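As a minimal sketch of the quantification idea, expert judgments on likelihood and consequence can be combined into a single score per risk. The 1-10 scales and the multiplicative scoring rule are assumptions made for illustration, not part of the framework described here.

```python
# Sketch only: combine subjective likelihood and consequence judgments into a
# risk score. The 1..10 scales and the product rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AssessedRisk:
    name: str
    likelihood: int   # expert judgment, e.g. 1 (rare) .. 10 (almost certain)
    consequence: int  # expected impact in cost or service terms, 1 .. 10

    @property
    def score(self) -> int:
        return self.likelihood * self.consequence

print(AssessedRisk("low water level", likelihood=8, consequence=7).score)  # 56
```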
Risk Handling

Risk Handling links the process of risk assessment with risk mitigation. The output of this step is a ranking of the identified and assessed risks. The risks are related to various elements and characteristics, which may be the same as in the Risk Assessment step or different ones. The purpose of Risk Handling is to produce a criticality ranking and to single out the crucial risks that should be analyzed in more detail in the Risk Evaluation cycle. For this decision, the risk acceptance criteria defined in the Planning & Selection step can be applied. As the Risk Handling step is the linkage between the two cycles Risk Analysis and Risk Evaluation, a reranking is possible if new insights have been gained in the Risk Evaluation.
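Under the same illustrative scoring assumption, a criticality ranking can be sketched as follows; the acceptance threshold stands in for the risk acceptance criteria from Planning & Selection, and all names and numbers are invented.

```python
# Sketch of a criticality ranking (all names and numbers invented): risks are
# ordered by score and flagged for the Risk Evaluation cycle if they exceed an
# acceptance threshold taken from the Planning & Selection step.

def rank_risks(assessed, acceptance_threshold):
    # assessed: list of (risk name, score) pairs
    ranked = sorted(assessed, key=lambda item: item[1], reverse=True)
    return [(name, score, score > acceptance_threshold) for name, score in ranked]

assessed = [("low water level", 56), ("high water level", 48), ("handling damage", 12)]
for name, score, mitigate in rank_risks(assessed, acceptance_threshold=40):
    print(f"{name}: {score} ->", "mitigate" if mitigate else "accept")
```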
Case Illustration

In our case, the Risk Analysis cycle identifies risks and assesses their probability and consequences based on the expert knowledge of all involved parties; the risk factors are therefore assessed in a subjective way. The probability scale runs from 1 to 10 and the consequence classes are minor, moderate, and major. The cycle ends with risk maps of the different parties and a risk profile of the whole supply chain. The highest ranked risks on the supply chain level arise from the weather category, namely low water and high water. If
other risks have been identified during the assessment, we can update the risk catalog thanks to the existing feedback loop.
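To make the case description more concrete, the sketch below shows one possible way to hold the expert assessments (probability on the 1-10 scale, consequence classes minor/moderate/major) as per-party risk maps and an aggregated supply chain risk profile; the party names and all values are invented.

```python
# Illustrative sketch: per-party risk maps and an aggregated risk profile for the
# case (probability 1..10, consequence classes minor/moderate/major). Parties and
# all assessed values are invented for illustration.
from collections import defaultdict

# (party, risk, probability, consequence class) elicited from experts
assessments = [
    ("barge operator", "low water level",  8, "major"),
    ("barge operator", "high water level", 6, "major"),
    ("steel producer", "low water level",  7, "moderate"),
    ("port operator",  "crane breakdown",  3, "minor"),
]

# Risk map per party: consequence class -> [(risk, probability), ...]
risk_maps = defaultdict(lambda: defaultdict(list))
for party, risk, prob, cls in assessments:
    risk_maps[party][cls].append((risk, prob))

# Supply chain risk profile: highest probability observed per risk
profile = {}
for _, risk, prob, cls in assessments:
    if risk not in profile or prob > profile[risk][0]:
        profile[risk] = (prob, cls)

for risk, (prob, cls) in sorted(profile.items(), key=lambda kv: kv[1][0], reverse=True):
    print(f"{risk}: probability {prob}/10, consequence {cls}")
```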
3.3 Risk Evaluation Cycle

The Risk Evaluation cycle covers the further handling of the assessed and ranked risks. Possible mitigation strategies are formulated in general terms and then finalized in a realizable Action Plan that incorporates the execution and/or cost perspective. This cycle connects the tactical and operational thinking of risk management, as the execution of the related decisions has mid- to short-term effects. Like the aforementioned cycles, the Risk Evaluation cycle has three steps: Risk Handling, Contingency Actions, and Action Plan. The first step, Risk Handling, has already been explained in Section 3.2. The second step represents the center of the cycle, as this is where general mitigation strategies are developed. The Action Plan step is a key step, as it links the Risk Evaluation cycle with the Action cycle.
Contingency Actions

The Contingency Actions step aims at identifying mitigation strategies, i.e. ways to lower the risks themselves or action alternatives in case the negative event occurs. In the following step, Action Plan, these mitigation strategies are evaluated with respect to their realizability. A mitigation strategy presents the general idea of how to lower the risk or the impact, while the mitigation tactic is a detailed description of how this is carried out. For instance, demand management is a mitigation strategy, and providing substitute products is a related mitigation tactic. Both can be summarized under the term Contingency Actions. A further task of this step is to record already existing Contingency Actions.
Action Plan

The Action Plan step deals with the creation and evaluation of realizable Contingency Actions. To this end, two perspectives, execution and costs, are added to the Contingency Actions. The execution perspective provides detailed information about who does what, when, and which information has to be available to perform the Action Plan. The costs perspective deals with the trade-off between costs and benefits. At least one of the two perspectives should be taken into consideration. An important point to keep in mind while working out the Action Plan is that the costs arising from proactively prepared Contingency Actions can be seen as "insurance premiums". Findings of this step can influence the ranking of the risks. Consequently, the feedback loop from this step to Risk Handling is crucial to be able to rerank
the risks according to their newly assessed characteristics, including the execution and costs perspectives. Further, the Action Plan step is a key step, as it also belongs to the Action cycle.
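The costs perspective can be illustrated by a small "insurance premium" calculation: a contingency action is worthwhile if the expected loss it avoids exceeds the cost of preparing it. All figures in the sketch are invented and not taken from the case.

```python
# Sketch of the costs perspective ("insurance premium" view); all figures are
# invented for illustration.

def expected_net_benefit(disruption_probability, loss_without_action,
                         loss_with_action, preparation_cost):
    avoided_loss = disruption_probability * (loss_without_action - loss_with_action)
    return avoided_loss - preparation_cost

# e.g. a prepared rerouting agreement for a low-water disruption
print(expected_net_benefit(disruption_probability=0.2,
                           loss_without_action=500_000,
                           loss_with_action=150_000,
                           preparation_cost=40_000))  # 30000.0 -> worthwhile
```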
Case Illustration

Relating the Risk Evaluation cycle to our case, we see that this cycle deals with the mitigation of the highest ranked risks, low water and high water. One task is to identify how the parties currently handle these risks. In addition, other mitigation strategies are considered, such as possible rerouting options or adjusting the number of shipped items and barges used. In this case study, event-driven process chains visualize the detailed handling of the risks by adding the execution perspective. Thus, besides the course of action, the parties concerned, the required information, and the data are described. If the consequences of the risks have been evaluated incorrectly, we now have the possibility to go back to the Risk Analysis cycle and rerank the risks.
3.4 Action Cycle

The Action cycle deals with the implementation and control of the selected Action Plan and monitors arising disruptions. Its positioning on the operational level indicates the executing, short-term character of the cycle and highlights its reactive, event-related style. If an undesired event such as a disruption occurs and cannot be handled appropriately, the cycle may start and may go "up" to the Risk Evaluation cycle or further. The possible starting step Control indicates this characteristic. Like all cycles before, it consists of three steps: Action Plan, Implementation, and Control. The step Action Plan has already been explained in Section 3.3, as it is the key step between the Risk Evaluation and the Action cycle.
Implementation

The Implementation step aims at the actual implementation of the chosen Action Plan. Putting something new in place is not always necessary; first, it has to be determined whether anything has to be adapted or established to carry out the plan. An Action Plan may simply be written down and communicated to the involved members so that they know what to do if a disruption occurs. If something new has to be established to be able to carry out the Action Plan, this is also done in this step. For instance, agreements between supply chain members are concluded or supply network strategies are implemented, as Harland et al (2003) demonstrate.
Control

The task of the Control step is twofold. On the one hand, it monitors the correct implementation of the Action Plan and checks whether the execution of the Action Plan remains possible at all times. On the other hand, this step is in charge of identifying the occurrence of undesirable events and disruptions as soon as possible. If the event is known, an already determined Action Plan is used. If it is unknown and so far untreated, the Control step is a starting step of the framework. Therefore, it can be seen as the last step of proactive or the starting step of reactive supply chain risk management. If reactive supply chain risk management has to be performed, the loop goes up to the Action Plan step.
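This twofold logic can be sketched as a simple dispatch: a detected event is either matched to a prepared Action Plan or escalated as an unknown risk. The event names and plan contents below are hypothetical.

```python
# Minimal sketch of the Control step's dispatch logic; event names and plan
# contents are hypothetical.

def handle_event(event, action_plans):
    """action_plans maps a known risk to its prepared Action Plan."""
    plan = action_plans.get(event)
    if plan is not None:
        return f"execute prepared plan: {plan}"
    # unknown or so far untreated risk -> reactive loop up to the Action Plan step
    return "escalate to Risk Evaluation: design Action Plan, rerank risks, update catalog"

plans = {"low water level": "reroute part of the volume, reduce barge load"}
print(handle_event("low water level", plans))
print(handle_event("lock failure", plans))
```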
Case Illustration

In our case, the Action cycle concentrates on realizing the defined ways of handling a disruption. The different plans, which are visualized by the event-driven process chains, are communicated. The defined actions are compared with the current state and adjusted if necessary. This means that supply chain event management software is implemented to execute the defined handling partly automatically. If the implementation does not work as planned or the environmental situation changes, we can adapt the defined ways of handling the disruption and the corresponding event-driven process chains thanks to the existing feedback loop.

Finally, we give a short conceptual presentation, based on the illustrative case, of how the framework supports responsiveness if a disruption hampers the supply chain. By carrying out the last cycle, the responsiveness to disruptions is increased, as a quicker handling can be performed. If a risk materializes as a disruption, the defined way of handling it can be carried out without further deliberation; the event-driven process chains and the supply chain event management software support this. If a situation arises that has not been considered so far, a reactive execution of the cycles is possible. Again, fast execution is ensured, as the whole framework does not have to be restarted: the key steps located at the cycles' overlaps indicate where the way of execution can change. For instance, applying the framework reactively means that an unknown or so far unhandled risk occurs. The main starting point is then situated in the Action cycle, more precisely at the Control step. Now we have to go up to the Risk Evaluation cycle and the Action Plan step to design a strategy for handling the risk. In addition, we have to update the risk profile because the ranking of the risks may change (Risk Handling step). Now two ways of executing the cycles exist. First, if the risk is already in the ranking, we can continue the proactive way and go back to the Control step. Second, if the risk is not listed in the risk profile, we also have to revise the risk catalog, which is done in the Risk Identification step. This revision of the risk catalog may lead to an adjustment of the scope of the work
if it is no longer appropriate. So we end up at the first step of the proactive way, the Planning & Selection step. The result is that the whole framework, including the Risk Analysis cycle and the Definition & Description cycle, is updated in a reactive way based on the new findings. However, thanks to the key steps, the framework switches back to its regular way of execution as soon as possible.
4 Concluding Remarks

This paper addresses the beneficial effect of disruption preparedness on responsiveness. We show that, with the proposed conceptual supply chain risk management framework, responsiveness after a disruption has occurred can be improved. The framework works both in a proactive way, to be prepared for a disruption, and in a reactive way, to adapt quickly after a disruption has occurred. The cyclical thinking within all processes of the framework is motivated by the idea of Six Sigma. Four cycles (Definition & Description, Risk Analysis, Risk Evaluation, and Action) represent the main components of the framework. A distinctive feature is the overlapping of the cycles, which provides feedback loops. These allow us to go back if something has to be rethought, or if the analysis and the actions taken have to be adapted to a new environmental situation even without the occurrence of a disruptive event. Another characteristic of the framework is the integration of the business views (strategic, tactical, and operational). Besides assigning management responsibility for specific tasks, the business views and the corresponding time horizons support the continuous updating of the framework.
References

Asbjørnslett BE (2008) Assessing the vulnerability of supply chains. In: Zsidisin GA, Ritchie B (eds) Supply chain risk: A handbook of assessment, management, and performance. Springer, New York, NY, pp 15–34
Chapman P, Christopher M, Jüttner U, Peck H (2002) Identifying and managing supply chain vulnerability. Logistics & Transport Focus 4(4):59–64
Christopher M (2003) Creating resilient supply chains: A practical guide. Centre for Logistics and Supply Chain Management, Cranfield School of Management
Christopher M (2005) Logistics and supply chain management. Financial Times Prentice Hall, London
Haimes YY (2009) Risk modeling, assessment, and management, 3rd edn. Wiley-Interscience, New York, NY
Hallikas J, Karvonen I, Pulkkinen U, Virolainen VM, Tuominen M (2004) Risk management processes in supplier networks. International Journal of Production Economics 90(1):47–58
Harland C, Brenchley R, Walker H (2003) Risk in supply networks. Journal of Purchasing & Supply Management 9(5-6):51–62
Jüttner U (2005) Supply chain risk management: Understanding the business requirements from a practitioner perspective. International Journal of Logistics Management 16(1):120–141
Kajüter P (2003) Risk management in supply chains. In: Seuring S, Müller M, Goldbach M, Schneidewind U (eds) Strategy and organization in supply chains. Physica-Verlag, pp 321–336
Kleindorfer PR, Saad GH (2005) Managing disruption risks in supply chains. Production and Operations Management 14(1):53–68
Lee HL, Whang S (2005) Higher supply chain security with lower cost: Lessons from total quality management. International Journal of Production Economics 96(3):289–300
March JG, Shapira Z (1987) Managerial perspectives on risk and risk taking. Management Science 33(11):1404–1418
Norrman A, Jansson U (2004) Ericsson's proactive supply chain risk management approach after a serious sub-supplier accident. International Journal of Physical Distribution and Logistics Management 34(5):434–456
Ritchie B, Brindley C (2007) Supply chain risk management and performance - A guiding framework for future development. International Journal of Operations & Production Management 27(3):303–322
Sheffi Y (2005) The resilient enterprise: Overcoming vulnerability for competitive advantage. MIT Press, Cambridge, MA
Sodhi MS, Tang CS (2009) Managing supply chain disruptions via time-based risk management. In: Wu T, Blackhurst J (eds) Managing supply chain risk and vulnerability - Tools and methods for supply chain decision makers. Springer, Dordrecht, pp 29–40
Tang CS (2006a) Perspectives in supply chain risk management. International Journal of Production Economics 103(2):451–488
Tang CS (2006b) Robust strategies for mitigating supply chain disruptions. International Journal of Logistics: Research and Applications 9:33–45
Weishäupl M (2010) A toolkit for proactive and reactive supply chain risk management. Working Paper, Department of Information Systems and Operations, WU Vienna University of Economics and Business
Ziegenbein A (2007) Supply Chain Risiken - Identifikation, Bewertung und Steuerung. vdf Hochschulverlag AG, Zürich
Appendix A
International Scientific Board
The chair of the international scientific board of the 2nd Rapid Modelling Conference "Rapid Modelling and Quick Response: Intersection of Theory and Practice" consisted of:
• Gerald Reiner (University of Neuchâtel, Switzerland)
Members of the international scientific board as well as referees are:
• Djamil Aïssani (LAMOS, University of Béjaia, Algeria)
• Michel Bierlaire (EPFL, Switzerland)
• Cecil Bozarth (North Carolina State University, USA)
• Bénamar Chouaf (University of Sidi Bel Abes, Algeria)
• Lawrence Corbett (Victoria University of Wellington, New Zealand)
• Krisztina Demeter (Corvinus University of Budapest, Hungary)
• Suzanne de Treville (University of Lausanne, Switzerland)
• Barb Flynn (Indiana University, USA)
• Gerard Gaalman (University of Groningen, The Netherlands)
• Ari-Pekka Hameri (University of Lausanne, Switzerland)
• Petri Helo (University of Vaasa, Finland)
• Olli-Pekka Hilmola (Lappeenranta University of Technology, Finland)
• Werner Jammernegg (Vienna University of Economics and Business Administration, Austria)
• Matteo Kalchschmidt (University of Bergamo, Italy)
• Ananth Krishnamurthy (University of Wisconsin-Madison, USA)
• Doug Love (Aston Business School, UK)
• Jose Antonio Dominguez Machuca (University of Sevilla, Spain)
• Carolina Osorio (EPFL, Switzerland)
• Jeffrey S. Petty (Lancer Callon, UK)
• Reinhold Schodl (University of Neuchâtel, Switzerland)
• Boualem Rabta (University of Neuchâtel, Switzerland)
• Nico J. Vandaele (Catholic University Leuven, Belgium)
Appendix B
Sponsors
The sponsors of the 2nd Rapid Modelling Conference "Rapid Modelling and Quick Response: Intersection of Theory and Practice" are:
• Chocolats Camille Bloch http://www.camillebloch.ch
• LANCER CALLON http://www.lancercallon.com
• REHAU http://www.rehau.at
• SOFTSOLUTION http://www.softsolution.at
• SWISS OPERATIONS RESEARCH SOCIETY http://www.svor.ch
• THENEXOM http://www.thenexom.net
• University of Lausanne, Faculty of Business and Economics (HEC) http://www.hec.unil.ch
• University of Neuchâtel, Faculty of Economics http://www.unine.ch/seco